Hi there,
I am trying to understand how to simply get correct lip sync over an audio file with AC. Unfortunately, I failed to generate data with SAPI, and lip sync doesn't work with the Pamela and Papagayo data files I generated: the blend shapes just don't animate at all. My files are renamed with the .txt extension and match the audio files as they should.
I compared with the Physics demo files generated with SAPI, and they look quite different from mine. Here is how my data looks (generated from Papagayo):
MohoSwitch1
-1 rest
2 rest
1 etc
3 AI
6 etc
8 MBP
10 AI
etc..
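If I'm reading it right, each line after the MohoSwitch1 header is just a frame number and a phoneme name, so I'd expect it to parse with something as simple as this (a sketch of my understanding, not AC's actual code):

using System.Collections.Generic;
using System.IO;

public static class MohoSwitchReader
{
    // Reads "<frame> <phoneme>" pairs, skipping the "MohoSwitch1" header
    public static List<KeyValuePair<int, string>> Read (string path)
    {
        var frames = new List<KeyValuePair<int, string>> ();
        foreach (string line in File.ReadAllLines (path))
        {
            string[] parts = line.Trim ().Split (' ');
            int frame;
            if (parts.Length == 2 && int.TryParse (parts[0], out frame))
            {
                frames.Add (new KeyValuePair<int, string> (frame, parts[1]));
            }
        }
        return frames;
    }
}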
Any idea what I am doing wrong?
Comments
Clicking "Revert to default" in that Phonemes editor will cause AC to change them to a default approximation, but you'll probably have to tweak them a bit more.
As for making the Phonemes editor match the phonemes from the files, I tried different things, such as adding the ones found in the file to the default generated groups, like this (see the sketch after the list):
B/M/P/MBP/ V
AY/AH/IH/EY/ER/A/I/AI/E
G/O/OO/OH/W/U/
SH/R/Z/SF/D/L/F/TN/K/N/NG/H/X/FV/etc
UH/EH/DH/AE/IY/rest
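If I understand that editor right, each row is just a set of labels that all drive the same blendshape; here is how I picture the mapping (my own illustration, not AC's code):

using System.Collections.Generic;

public static class PhonemeGroups
{
    // Maps every label in a row like "B/M/P/MBP/V" to that row's shape index
    public static Dictionary<string, int> Build (string[] rows)
    {
        var lookup = new Dictionary<string, int> ();
        for (int shapeIndex = 0; shapeIndex < rows.Length; shapeIndex++)
        {
            foreach (string label in rows[shapeIndex].Split ('/'))
            {
                string trimmed = label.Trim ();
                if (trimmed.Length > 0)
                {
                    lookup[trimmed] = shapeIndex;  // later rows overwrite duplicates
                }
            }
        }
        return lookup;
    }
}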
I also tried using just the ones found in the file, plus various other combinations, so that doesn't seem to be my problem: the mouth just doesn't animate.
I tried the SAPI files you generated for Brain in the Physics demo on my own character, and they work. Lip sync also works with text-to-speech.
Do Pamela and Papagayo need any special export options, like a specific framerate, or something else I missed? I tried 24 fps and 60, but got the same result: just higher line numbers in the text file, and still no animation. I'll keep digging, thanks again.
If you're up for a bit of debugging, the code that converts the text file to AC phonemes starts at line 408 of Dialog.cs. Placing a Debug.Log ("Found frame: " + frame); statement at line 435 should cause the Console to list the phonemes as they're found. Does anything appear if you try that?
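On the framerate question: the frame numbers are relative to the export framerate, so a 60 fps export just produces higher numbers for the same moments in the audio. Very roughly (a simplified sketch, not the exact Dialog.cs code):

using UnityEngine;

public class LipSyncFrameSketch
{
    // Simplified sketch of the step where a parsed frame number becomes a
    // playback time, with the suggested log call in place
    public static float ProcessFrame (int frame, float exportFps)
    {
        Debug.Log ("Found frame: " + frame);
        return frame / exportFps;  // e.g. frame 48 at 24 fps = 2 seconds
    }
}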
There are still no blend shapes playing. The -1 line doesn't seem to change anything, whether it's there or not.
I will try again with a Pamela file soon; maybe something will be different there.
Thank you for your time
Have you set your lip syncing to affect GameObjects (in the Speech Manager), and mapped your phonemes in the Player's Inspector? After adding the Shapeable script to your SkinnedMeshRenderer, you also have to assign the Phonemes group in your Player Inspector for lip syncing to take effect.
I did set up the Speech Manager to affect portraits and GameObjects, and mapped the phoneme group from the Shapeable to the Player script. (Lip sync works with text-to-speech and with the Brain SAPI file you generated.)
I'll try starting a new scene/project with only my character and run some more tests.
First, it didn't work at all because my character's hotspot label was different from my character's name. The lipsync file's name has to follow the hotspot label (if you've changed it), even though the sound file's name is still the one provided by the Speech Manager.
For example, say you have a character called "Bob" and you change the hotspot label to "SpongeBob". When you gather speech lines, AC will tell you the first one is Bob1.
Your sound file should be named Bob1.wav and your lipsync file SpongeBob1.txt.
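In other words, as I understand the rule (a sketch, with made-up names):

public class SpeechFileNames
{
    public static void Example ()
    {
        string characterName = "Bob";       // the Speech Manager numbers lines by this
        string hotspotLabel = "SpongeBob";  // the overridden hotspot label
        int lineID = 1;
        string audioFile = characterName + lineID + ".wav";   // "Bob1.wav"
        string lipsyncFile = hotspotLabel + lineID + ".txt";  // "SpongeBob1.txt"
    }
}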
Once I corrected this, lip sync worked, but only with the "from speech text" option.
I had to modify Dialog.cs (in AdventureCreator/Scripts/Speech) to make it work with Papagayo, changing
if (shape == searchText)
to
if (searchText.Contains(shape))
I am just a CG artist so I don't understand why, but it worked; maybe some stray space in the strings... I tried to Debug.Log the two variables and they seem to be the same, but when I Debug.Log (shape == searchText) it always returns false.
Anyway, now it is working! (I just had to change the Process speed to 0.26 in the Speech Manager.)
If anyone understands why, I'm curious, and maybe Dialog.cs could be corrected in a proper way in the next AC update.
One last thing with Papagayo: use the Export option to generate the .txt file, not Save As (which saves a Papagayo project file, not the Moho switch data).
Thanks for the answer and the fix !
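My guess on the == mystery is Windows line endings: if the text file uses \r\n and the parser splits on '\n', each searchText keeps an invisible trailing '\r', so a strict comparison fails while Contains still matches. A minimal sketch of that suspected failure mode (an assumption, not verified against your file):

using UnityEngine;

public class LineEndingTest
{
    public static void Run ()
    {
        string searchText = "MBP\r";  // what "MBP\r\n".Split ('\n') leaves behind
        string shape = "MBP";
        Debug.Log (shape == searchText);          // false: the \r is invisible in the Console
        Debug.Log (searchText.Contains (shape));  // true: why the workaround appears to work
        Debug.Log (shape == searchText.Trim ());  // true: trimming would be the cleaner fix
    }
}

If that is the cause, trimming searchText would be safer than Contains, since Contains could also false-match one shape name inside another (e.g. "A" inside "AI").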