Hi there! I created my main character model using two meshes for his face: one is the regular face, and the second is a moustache. I'm using the speech-text lip sync method.
Both meshes have their expressions and phonemes properly set, yet when I use the tokens in Dialogue: PlaySpeech, only the face mesh performs the expression and lip sync; the moustache mesh does not. Both have the same number of blendshapes, properly set, and they work when I use an Object: Shapeable node to set the expression, but not when I use Dialogue, which would be the proper way to handle lip sync.
Is there a way to sync both meshes?
Comments
Even so, I would advise you to make this script part of Adventure Creator, since many 3D modellers prefer to keep facial details as separate models for easier handling of their blendshapes and textures.
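In the meantime, a common Unity workaround is a small component that mirrors blendshape weights from the face mesh onto the moustache mesh each frame, after the lip sync has updated the face. This is a minimal sketch under my own assumptions, not an official Adventure Creator script: the component name `BlendShapeMirror` is hypothetical, and it assumes the two meshes use matching blendshape names. You would attach it to the character and assign the two SkinnedMeshRenderers in the Inspector.

```csharp
using UnityEngine;

// Hypothetical helper (not part of Adventure Creator): copies blendshape
// weights from a source SkinnedMeshRenderer to a target one every frame.
// LateUpdate runs after the lip-sync code has set the face weights, so the
// moustache mesh picks up the same values in the same frame.
public class BlendShapeMirror : MonoBehaviour
{
    public SkinnedMeshRenderer source; // the face mesh driven by lip sync
    public SkinnedMeshRenderer target; // the moustache mesh

    void LateUpdate()
    {
        if (source == null || target == null) return;

        Mesh sourceMesh = source.sharedMesh;
        Mesh targetMesh = target.sharedMesh;

        for (int i = 0; i < sourceMesh.blendShapeCount; i++)
        {
            // Match shapes by name, so the two meshes don't need to
            // declare their blendshapes in the same index order.
            string shapeName = sourceMesh.GetBlendShapeName(i);
            int targetIndex = targetMesh.GetBlendShapeIndex(shapeName);
            if (targetIndex >= 0)
            {
                target.SetBlendShapeWeight(targetIndex, source.GetBlendShapeWeight(i));
            }
        }
    }
}
```

Because the copy happens by shape name rather than index, the moustache mesh only needs blendshapes named the same as the face's; any extra shapes on either mesh are simply ignored.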