Faicey is now integrated into the mindX agency system, enabling the creation of personalized interface expressions (faces) for agents. Faicey combines prompt, agent, dataset, model, and persona to generate modular, customizable UI/UX systems that adapt to each agent's identity.
At its core, Faicey is a UI/UX AIML modular response system: it gives each agent a "face", a modular, customizable interface that reflects that agent's persona and capabilities.
Reference: https://github.com/faicey
Related Projects:
The FaiceyAgent (agents/faicey_agent.py) is responsible for:
A Faicey expression emerges from the combination of:
The system includes default UI modules:
POST /faicey/expressions
Content-Type: application/json
{
"agent_id": "my_agent",
"persona_id": "expert_persona_123",
"prompt": "You are a helpful assistant",
"agent_config": {...},
"dataset_info": {...},
"model_config": {...}
}
GET /faicey/expressions?agent_id=my_agent
GET /faicey/expressions/{expression_id}
GET /faicey/expressions/agent/{agent_id}
PUT /faicey/expressions/{expression_id}
Content-Type: application/json
{
"ui_modules": [...],
"customization_options": {...}
}
GET /faicey/expressions/{expression_id}/ui-config
Returns a UI configuration object ready for frontend consumption.
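The exact schema of the exported object is not documented here; as a sketch, assuming it carries an `expression_id` and a `ui_modules` array with per-module `enabled` and `order` fields (hypothetical names), a frontend might consume it like this:

```javascript
// Hypothetical shape of the object returned by the ui-config endpoint.
// Field names here are assumptions for illustration, not the documented schema.
const uiConfig = {
  expression_id: "expr_123",
  ui_modules: [
    { module_id: "avatar", enabled: true, order: 1 },
    { module_id: "chat_panel", enabled: true, order: 2 },
    { module_id: "skill_badge", enabled: false, order: 3 },
  ],
};

// Pick the enabled modules in render order.
function activeModules(config) {
  return config.ui_modules
    .filter((m) => m.enabled)
    .sort((a, b) => a.order - b.order)
    .map((m) => m.module_id);
}

console.log(activeModules(uiConfig)); // → ["avatar", "chat_panel"]
```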
from agents.faicey_agent import FaiceyAgent
from agents.memory_agent import MemoryAgent
from agents.persona_agent import PersonaAgent
from utils.config import Config
# Initialize
memory_agent = MemoryAgent()
persona_agent = PersonaAgent(agent_id="persona_manager", memory_agent=memory_agent)
faicey_agent = FaiceyAgent(
agent_id="faicey_agent",
memory_agent=memory_agent,
persona_agent=persona_agent
)
# Create an expression from a persona
result = await faicey_agent.create_expression_from_persona(
agent_id="my_agent",
persona_id="expert_persona_123",
prompt="You are an expert assistant",
agent_config={"capabilities": ["reasoning", "code_generation"]},
model_config={"provider": "gemini", "model": "gemini-pro"}
)
# Get the UI configuration for the frontend
ui_config = await faicey_agent.export_expression_ui_config(result["expression_id"])
Faicey expressions are stored in:
data/faicey/faicey_registry.json
data/faicey/expressions/{expression_id}.json
data/faicey/modules/module_registry.json

Faicey integrates with the mindX agency system through:
Faicey expressions include a skills system that tracks agent capabilities:
Each skill has:
skill_id: Unique identifier
name: Skill name
category: Skill category (capability, expertise, rendering, model)
description: Skill description
level: Proficiency level (1-10)
enabled: Whether the skill is active
config: Skill-specific configuration

Faicey includes Three.js integration for 3D wireframe rendering:
Configuration:
Wireframe Features:
Usage:
import FaiceyThreeJSRenderer from './components/FaiceyThreeJS';
const renderer = new FaiceyThreeJSRenderer(containerElement, threejsConfig);
// Create wireframe shapes
renderer.createWireframeBox(1, wireframeConfig);
renderer.createWireframeSphere(1, 32, wireframeConfig);
renderer.createWireframePlane(2, 2, wireframeConfig);
Wireframe Configuration:
{
"enabled": true,
"line_width": 1,
"wireframe_color": "#00a8ff",
"show_vertices": true,
"show_edges": true,
"vertex_size": 0.05,
"material": {
"type": "LineBasicMaterial",
"color": "#00a8ff",
"transparent": true,
"opacity": 0.8
}
}
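The wireframe configuration above has to be translated into material parameters at render time. As a sketch (not the actual `FaiceyThreeJSRenderer` implementation), the mapping from this JSON to the options object passed to `THREE.LineBasicMaterial` might look like:

```javascript
// Convert the wireframe JSON config into Three.js material options.
// Field names follow the config block above; the mapping itself is a sketch.
function materialParamsFromConfig(config) {
  const m = config.material || {};
  return {
    color: m.color || config.wireframe_color,
    transparent: Boolean(m.transparent),
    opacity: m.opacity !== undefined ? m.opacity : 1.0,
    linewidth: config.line_width || 1,
  };
}

const params = materialParamsFromConfig({
  enabled: true,
  line_width: 1,
  wireframe_color: "#00a8ff",
  material: { type: "LineBasicMaterial", color: "#00a8ff", transparent: true, opacity: 0.8 },
});
console.log(params); // { color: "#00a8ff", transparent: true, opacity: 0.8, linewidth: 1 }
```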
Faicey supports several advanced rendering techniques drawn from the official Three.js examples:
const decal = renderer.createDecal(
targetMesh,
position,
rotation,
scale,
texture
);
Reference: https://threejs.org/examples/#webgl_decals
const material = renderer.createBumpmapMaterial(
texture,
bumpMap,
normalMap,
{ bump_scale: 1.0, normal_scale: { x: 1, y: 1 } }
);
Reference: https://threejs.org/examples/#webgl_materials_bumpmap
const pointCloud = await renderer.loadPCD('path/to/file.pcd', {
point_size: 1.0,
point_color: '#00a8ff'
});
Reference: https://threejs.org/examples/#webgl_loader_pcd
const fatWireframe = renderer.createFatWireframe(geometry, {
line_width: 5.0,
line_color: '#00a8ff'
});
Reference: https://threejs.org/examples/#webgl_lines_fat_wireframe
const wireframeMesh = renderer.createWireframeMeshWithMaterial(geometry, {
wireframe_color: '#00a8ff',
wireframe_linewidth: 2
});
Reference: https://threejs.org/examples/#webgl_materials_wireframe
// Video texture
const videoTexture = renderer.createVideoTexture(videoElement);
const material = renderer.createVideoMaterial(videoTexture);
// Webcam texture
const webcamTexture = await renderer.createWebcamTexture({
webcam_constraints: { video: true, audio: false }
});
Reference: https://threejs.org/examples/#webgl_materials_video_webcam
const morphMesh = await renderer.createMorphTargetMesh(
geometry,
morphTargets,
{ morph_influence: 1.0 }
);
// Update morph target
renderer.updateMorphTargetInfluence(morphMesh, targetIndex, influence);
Reference: https://threejs.org/examples/#webgpu_morphtargets_face
The speech inflection system provides complete facial animation for speaking, listening, and seeing modes using WebGPU morph targets.
import FaiceyThreeJSRenderer from './components/FaiceyThreeJS';
import FaiceySpeechInflection from './components/FaiceySpeechInflection';
// Create morph target mesh
const morphMesh = await renderer.createMorphTargetMesh(geometry, morphTargets);
// Initialize speech inflection system
const speechSystem = await renderer.initializeSpeechInflection(morphMesh, {
alphabet: 'english',
viseme_blend_duration: 0.1,
eye_blink_interval: 3.0
});
// Speaking mode
await speechSystem.startSpeaking("Hello, I am an AI agent", audioUrl);
// Listening mode
speechSystem.startListening();
// Stop
speechSystem.stopSpeaking();
speechSystem.stopListening();
{
"speech_inflection": {
"enabled": true,
"alphabet": "english",
"tone_system": null,
"viseme_blend_duration": 0.1,
"eye_blink_interval": 3.0,
"listening_ear_animation": true,
"speaking_eye_tracking": true,
"features": [
"text_to_speech_animation",
"phoneme_to_viseme_mapping",
"eye_movements",
"eyebrow_expressions",
"ear_animations",
"listening_mode",
"speaking_mode",
"audio_synchronization"
]
}
}
The system uses comprehensive phoneme-to-viseme mappings stored in data/faicey/phoneme_viseme_map.json:
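The full contents of that file are not reproduced here; as an illustration only (field names are assumptions), a fragment mapping phonemes to visemes might look like:

```json
{
  "alphabet": "english",
  "phonemes": {
    "AA": { "viseme": "open_wide", "duration_ms": 90 },
    "M": { "viseme": "lips_closed", "duration_ms": 70 },
    "F": { "viseme": "teeth_on_lip", "duration_ms": 80 }
  }
}
```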
Morph targets are defined in data/faicey/morph_target_definitions.json:
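Again as an illustration (the actual schema is in the file itself; these field names are assumptions), a morph target definition entry might look like:

```json
{
  "morph_targets": {
    "mouth_open": { "index": 0, "default": 0.0 },
    "eye_blink_left": { "index": 1, "default": 0.0 }
  }
}
```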
POST /faicey/speech/speak
Content-Type: application/json
{
"text": "Hello, I am an AI agent",
"audio_url": "https://example.com/audio.mp3",
"alphabet": "english",
"tone_system": null
}
POST /faicey/speech/listen
Activates listening mode with ear and eye animations.
POST /faicey/speech/stop
Returns to idle mode.
File: data/faicey/phoneme_viseme_map.json
Contains comprehensive mappings:
File: data/faicey/morph_target_definitions.json
Contains:
The system converts text to phonemes using:
Each phoneme maps to a viseme (visual mouth shape):
The system generates an animation timeline with:
The animation loop:
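As a rough sketch of how such a timeline might be evaluated each frame (the data shapes and function names here are assumptions for illustration, not the actual implementation):

```javascript
// A viseme timeline: each entry marks which viseme is active over
// [start, end) in seconds. Shape is an assumption for illustration.
const timeline = [
  { viseme: "lips_closed", start: 0.0, end: 0.07 },
  { viseme: "open_wide", start: 0.07, end: 0.16 },
  { viseme: "neutral", start: 0.16, end: 0.3 },
];

// Find the viseme active at time t, falling back to "neutral".
function visemeAt(timeline, t) {
  const entry = timeline.find((e) => t >= e.start && t < e.end);
  return entry ? entry.viseme : "neutral";
}

// Linear blend weight toward the target viseme over blendDuration seconds,
// clamped to [0, 1] (mirrors the viseme_blend_duration config option).
function blendWeight(t, start, blendDuration) {
  return Math.min(1, Math.max(0, (t - start) / blendDuration));
}

console.log(visemeAt(timeline, 0.1)); // → "open_wide"
```

In a real loop this lookup would run inside `requestAnimationFrame`, writing the blended weight into the mesh's morph target influences each frame.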