Generate Data
Learn what the Generate Data button does, how to use it, and how to fine-tune the lip-sync data it creates.
What It Does
The Generate Data button analyzes your audio and creates a special layer called Syncface Data.
This layer drives your character’s lip movements, automatically syncing them to the sound of your voice or dialogue.
Key Benefits
Automatically analyzes your audio to create lip-sync data
Saves hours of manual keyframing
Generates natural, expressive mouth movements
Works with any voice recording or spoken dialogue
How To Use It
Make sure your composition contains at least one audio layer
After importing your .ai file, click the Generate Data button. A new layer called Syncface Data will be added to your composition
Play your comp to check that your character is reacting to the audio
Pro Tip: If your character isn’t reacting after generating data, use the Link Data button to manually connect the animation.
Understanding the Data Layer
The Syncface Data layer includes three slider controllers that help fine-tune the lip sync:
Low RMS
Controls how soft or quiet audio affects lip movement.
Lower values make the character more responsive to faint sounds. Higher values filter out soft noise.
High RMS
Controls how loud audio affects lip movement.
Lower values exaggerate mouth movement. Higher values make movements more subtle and realistic.
Viseme Count
Defines how many lip shapes are used during animation.
Usually best left at the default setting, but you can reduce or increase it if needed.
These controls give you precise, real-time influence over how expressive or reserved the animation feels.
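Conceptually, the two RMS sliders act as a floor and a ceiling on the audio's loudness, and the viseme count quantizes the result into a fixed set of lip shapes. The sketch below illustrates that idea in Python; it is a simplified model, not the plugin's actual algorithm, and the function names and linear mapping are assumptions for illustration only.

```python
def mouth_openness(rms, low_rms, high_rms):
    """Map an audio RMS level to a 0..1 mouth-openness value.

    Levels at or below low_rms count as silence; levels at or above
    high_rms are fully open; everything between is scaled linearly.
    (Hypothetical model of how the sliders interact.)
    """
    if rms <= low_rms:
        return 0.0
    if rms >= high_rms:
        return 1.0
    return (rms - low_rms) / (high_rms - low_rms)

def viseme_index(openness, viseme_count):
    """Quantize openness into one of `viseme_count` lip shapes (0 = closed)."""
    return min(int(openness * viseme_count), viseme_count - 1)

# A mid-level sound opens the mouth halfway and picks a middle viseme.
print(mouth_openness(0.5, low_rms=0.25, high_rms=0.75))  # 0.5
print(viseme_index(0.5, viseme_count=8))                 # 4
```

This is why raising Low RMS suppresses faint sounds (they fall below the floor) while raising High RMS flattens loud peaks (more of the range maps to partial openness).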
Fine-Tuning Your Animation
For whispered dialogue: Lower your Low RMS value
For shouted dialogue: Raise your High RMS value
For more subtle lip movements: Raise both Low and High RMS
For exaggerated animation: Lower both Low and High RMS
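You can see the whisper case numerically. Assuming a hypothetical floor/ceiling mapping from RMS loudness to mouth openness (an illustration only, not the plugin's internals), a faint sound produces no movement until the floor is lowered beneath it:

```python
def openness(rms, low_rms, high_rms):
    """Hypothetical floor/ceiling mapping from RMS loudness to mouth openness."""
    if rms <= low_rms:
        return 0.0
    return min((rms - low_rms) / (high_rms - low_rms), 1.0)

whisper = 0.25  # a faint RMS level (hypothetical units)

# With a high floor, the whisper falls below it and the mouth stays shut.
print(openness(whisper, low_rms=0.5, high_rms=1.0))      # 0.0

# Lowering Low RMS brings the whisper above the floor, so the lips move.
print(openness(whisper, low_rms=0.125, high_rms=0.625))  # 0.25
```

The same logic runs in reverse for shouted dialogue: raising High RMS stretches the ceiling so loud peaks no longer slam the mouth fully open on every syllable.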
Join our Community Forum
Any other questions? Get in touch