
External speech & lipsync issue

Hi there,

I am trying to understand how to get correct lip sync over an audio file with AC. Unfortunately I failed to generate data with SAPI, and lip sync doesn't work with the Pamela and Papagayo data files I generated: the blend shapes just don't animate at all. My files are renamed with a .txt extension and named to match the audio file, as they should be.
I looked at the Physics Demo files generated with SAPI, and they're pretty different from mine. Here is how my data looks (generated from Papagayo):

MohoSwitch1
-1 rest
2 rest
1 etc
3 AI
6 etc
8 MBP
10 AI

etc..
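(For context on what AC has to parse here: a Papagayo "Moho switch" export is simply a MohoSwitch1 header line followed by one "frame phoneme" pair per line. Below is a minimal Python sketch of reading that structure; it's illustrative only, and parse_moho is a hypothetical helper, not part of AC.)

```python
def parse_moho(text):
    # Split the file into non-empty lines, tolerating stray whitespace.
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if lines[0] != "MohoSwitch1":
        raise ValueError("not a Moho switch file")
    pairs = []
    for line in lines[1:]:
        frame, phoneme = line.split(None, 1)  # "3 AI" -> ("3", "AI")
        pairs.append((int(frame), phoneme))
    return pairs

sample = "MohoSwitch1\n-1 rest\n2 rest\n3 AI\n8 MBP\n"
print(parse_moho(sample))
# → [(-1, 'rest'), (2, 'rest'), (3, 'AI'), (8, 'MBP')]
```

(Each frame number is relative to the export framerate, so the same audio exported at 60 fps produces larger frame values than at 24 fps.)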

Any idea what I am doing wrong?

Comments

  • What do your lip-sync / phoneme settings in your Speech Manager look like?  If you've changed which external app you use to generate the phonemes, you'll have to modify the Phonemes editor to match the output of the new files.  Those "AI", "MBP" etc. words all need to be assigned to a phoneme shape, which you can do in the Speech Manager's Phonemes editor.

    Clicking "Revert to default" in that Phonemes editor will cause AC to change them to a default approximation, but you'll probably have to tweak them a bit more.
  • Thank you for your reply.

    About the Phonemes editor and matching the phonemes from the files, I tried different things, such as adding the ones found in the file to the default generated ones, like this:

    B/M/P/MBP/ V
    AY/AH/IH/EY/ER/A/I/AI/E
    G/O/OO/OH/W/U/
    SH/R/Z/SF/D/L/F/TN/K/N/NG/H/X/FV/etc
    UH/EH/DH/AE/IY/rest

    I also tried with just the ones found in the file, plus various other tests, so that doesn't seem to be my problem; the mouth just doesn't animate.
    I tried the SAPI files you generated for Brain from the Physics Demo on my own character, and lip sync works. It also works with text-to-speech.
    Does it need any special export options from Pamela or Papagayo, like a specific framerate, or something else I missed? I tried 24 fps and 60, but got the same result: just higher frame numbers in the text file, and no animation. I'll keep digging, thanks again.
  • No special options needed, so far as I'm aware.  Though I wonder if the reversed order of the first couple of frames (-1, 2, 1) has anything to do with it.

    If you're up for a bit of debugging, the code that converts the text file to AC phonemes starts at line 408 of Dialog.cs.  Placing a Debug.Log ("Found frame: " + frame); statement at line 435 should cause the Console to list the found phonemes as they're generated.  Does anything appear if you try that?
  • With the Debug.Log line added to the code, and after converting the Papagayo file in MonoDevelop to fix a line-ending format issue, the logs finally gave some numbers (Found frame: 2, Found frame: 0, Found frame: 3, etc.): 24 logged numbers for the 24 phonemes.
    There are still no blend shapes playing. The -1 line doesn't seem to change anything, with or without it.
    I will try again with a Pamela file soon; maybe something will be different there.
    Thank you for your time.
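(On the line-ending issue mentioned above: Papagayo files written on one platform can use CR or CRLF line endings that a reader expecting LF mishandles. A small illustrative Python sketch of the failure and a fix; normalize_endings is a hypothetical helper, not AC code.)

```python
raw = "MohoSwitch1\r-1 rest\r2 rest"

# A reader that splits only on "\n" sees the whole file as one line:
print(raw.split("\n"))
# → ['MohoSwitch1\r-1 rest\r2 rest']

def normalize_endings(text):
    # Convert CRLF (Windows) and bare CR (classic Mac) endings to LF.
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(normalize_endings(raw).split("\n"))
# → ['MohoSwitch1', '-1 rest', '2 rest']
```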
  • If the numbers are being logged, it sounds like they're being read as expected.

    Have you set your lip-syncing to affect GameObjects (in the Speech Manager), and mapped your phonemes in the Player's Inspector?  After adding the Shapeable script to your SkinnedMeshRenderer, you also have to assign the Phonemes group in your Player's Inspector for lip-syncing to take effect.
  • Thanks again.

    I did set up the Speech Manager to affect portraits and GameObjects, and mapped the phoneme group from the Shapeable component in the Player script. (Lip sync works with text-to-speech and with the Brain SAPI file you generated.)
    I'll start a new scene/project with my character only and run some more tests.
  • edited October 2015
    Hello! I don't know if it will be helpful, but I had some issues with lip sync too.

    First, it didn't work at all because my character's hotspot label was different from my character's name. The lip-sync file's name has to follow the hotspot label (if you changed it), even though the sound file's name is the one provided by the Speech Manager.
    For example, say you have a character called "Bob" and you change the hotspot's label to "SpongeBob". When you gather speech, AC will tell you the first speech line is Bob1.
    Your sound file should be named Bob1.wav, but your lip-sync file SpongeBob1.txt.

    When I corrected this, lip sync worked with the "From speech text" option only.
    I had to modify Dialog.cs (in AdventureCreator/Scripts/Speech) to make it work with Papagayo.
    I changed the line :

    if (shape == searchText)
    to
    if (searchText.Contains(shape))

    I am just a CG artist, so I don't understand why, but it worked; maybe there's some stray whitespace in the strings. I tried to Debug.Log the two variables and they seem to be the same, but when I Debug.Log (shape == searchText) it always returns false.

    Anyway, now it is working! (I just had to change the process speed to 0.26 in the Speech Manager.)
    If anyone understands why, I'm curious, and maybe the Dialog.cs file could be corrected in a proper way in a future AC update.


    One last thing with Papagayo: use the Export option to generate the .txt file, not Save As. :D
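(One plausible explanation for the == vs. Contains behaviour above, offered as an assumption rather than something confirmed in this thread: if each line still carries a trailing carriage return or space when it's read, the parsed token is "AI\r" rather than "AI", so an exact comparison fails while a containment check passes. Sketched in Python for illustration:)

```python
token = "AI\r"   # what a line-by-line reader can get from a CRLF file
shape = "AI"

print(token == shape)           # exact match fails on the stray '\r'
# → False
print(shape in token)           # containment ignores the extra character
# → True
print(token.strip() == shape)   # trimming restores the exact match
# → True
```

(If that is what's happening, trimming the token, e.g. C#'s searchText.Trim(), would be a more robust fix than Contains, since Contains could also match "AI" inside an unrelated longer token.)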
  • @mammouth31: Are you using the latest AC release?  I thought I'd solved the hotspot name issue.
  • Update: Never mind - I've recreated the "Hotspot name" issue and fixed it.  v1.49 will require you to rename, in your example, the lipsync file back to "Bob".
  • I'm using v1.48b
    Thanks for the answer and the fix !
Welcome to the official forum for Adventure Creator.