
Am I misunderstanding Expressions and Lip Syncing?

I'm using Synty's models with the faceplates and texture expressions/phonemes.

In the Speech Manager I have Lip Syncing set to From Speech Text (I have no audio files yet) and Game Object Texture.

I've added a Lip Sync Texture component to my player's root GameObject and assigned the faceplate mesh (I did edit the script to accept a non-skinned MeshRenderer, since mine isn't skinned, and as far as I can tell this shouldn't cause a problem), and I have the textures assigned to the phoneme slots.
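For reference, the edit I made was essentially just widening the renderer field's type, roughly like this (the field name here is approximate, not the script's actual one):

    // Original field type was SkinnedMeshRenderer; a plain Renderer accepts both
    // MeshRenderer and SkinnedMeshRenderer
    public Renderer faceRenderer;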

I have expressions set up on my player, though that seems to be more intended for a UI portrait, as I don't see how it'd know about the faceplate model.

I'd like to be able to update my character models' expressions using the AC expression system, swapping out the faceplate's texture with the assigned value in the player component. I'd also like to get a facsimile of lip syncing from my entered Dialogues without having audio files. The lip syncing stuff LOOKS like it's set up right to me, but I get nothing when the player is "speaking" his lines. And it seems I might just be misunderstanding expressions entirely, maybe?

Comments

  • Let's see to the lipsyncing first, if that's OK, since that's the underlying system at play.

    The Speech Manager looks correct, as does the Lip Sync Texture component for the most part. Good point regarding letting it reference a non-skinned Mesh Renderer - I'll look into making this change official.

    We can debug if phonemes are being correctly processed by adding the following to the top of that script's SetFrame function:

    // Log which frame index SetFrame has been asked to display, and on which GameObject
    Debug.Log (gameObject.name + " set frame: " + textureIndex);
    

    Does the Console get spammed with messages when the character now speaks?

    Assuming so, what shader does the face-plate's material make use of? The component assumes its main texture property is named "_MainTex", so this will need to be amended if the shader uses a different property name.

  • Thanks, Chris! I do see the logs from LipSyncTexture, with the frame index varying as player dialogue lines are shown onscreen, so that's something!

    I'm using a standard URP material in Unity 6, and I was under the impression "_MainTex" was still the correct property to use, though I see no results if I try "Albedo" or "BaseMap" either.

  • Not sure when it comes to the Lit shader. Did you try it with the underscore in front? It could be _BaseMap.

    A standard Unlit shader, which IIRC does use _MainTex, should be cross-compatible. Give it a try to see if that's being written to.
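    If it does turn out to be _BaseMap, a rough (and untested) way to make the script's texture assignment cover both cases would be along these lines - the exact variable names will differ from the actual Lip Sync Texture script:

    Material material = meshRenderer.material;
    // Prefer URP's _BaseMap if the shader exposes it, otherwise fall back to _MainTex
    string propertyName = material.HasProperty ("_BaseMap") ? "_BaseMap" : "_MainTex";
    material.SetTexture (propertyName, textures[textureIndex]);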

  • Well I'm not sure what happened, but I shuffled materials and shaders around a bit, and although I'm back on URP/Lit I now have the face plate updating as the character talks. ¯\_(ツ)_/¯ But at least it's working?

    However, despite having the "Reset character expression with each line" checkbox enabled in the Speech Manager (and indeed, even without it), the character's lip sync seems to stay on the last used phoneme when done speaking. Maybe THIS is due to me not having expressions set up right?

  • The "Reset expression" option is unrelated to lipsyncing - the lipsync should revert to the first frame once speech ends.

    Could it be that this is the case, and the "A" mouth shape is being shown? The first-defined frame in your phoneme set should typically be a closed-mouth shape.

    Try swapping your 0 and 2 phoneme strings and textures - does that improve things?
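    In pseudo-code terms (the real values live in the Speech Manager's phoneme list and the component's texture slots, and your texture names will differ), the aim of the swap is for frame 0 to end up as the closed mouth:

    // Illustrative only - after swapping frames 0 and 2, frame 0 holds the closed-mouth shape
    phonemes[0] = "B/M/P";             // shown whenever the character isn't speaking
    textures[0] = closedMouthTexture;  // hypothetical texture name
    // ...and whichever group previously occupied frame 0 now sits in frame 2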

  • You were correct. I reset the phonemes to the recommended set (which puts B/M/P first) and assigned my textures as closely as possible, and it works pretty decently for not having audio!

    Is the idea that it's okay to add extra phonemes as needed now that my base is set up as recommended? For example, I have textures for L and W, so I could move W out of the G/O/OO/OH/W group to have it on its own, and add a new one for L. It was just that the system expected the closed mouth to be in that first slot. But, presumably, the more phonemes I define, the more accurately the lip syncing would be represented?

    I also noticed weird lip-syncing behavior with the forced wait at the end of each line (to give the player time to read) while also using a [continue] inside a Dialog action to trigger an animation mid-sentence: The rest of that dialog line doesn't seem to get lip synced, though, admittedly, it may just be that my short bit of text ("Not again!") isn't enough to trigger frame changes? I can grab a recording of it if that'd be helpful, but I also probably need to put a bit more due diligence in before handing that one off. I'm just glad to be moving again on the lip-syncing!

    Anyway, I think I'm good for now, unless you can shed more light on the [continue] behaviour or anything to clear up my understanding of Expressions. Thanks, Chris!

  • Is the idea that it's okay to add extra phonemes as needed now that my base is set up as recommended?

    Besides the use of frame 0 as a closed mouth shape, the default set is really more of a convenience to help you get up and running quickly. You're free to make any changes to the non-zero frames as necessary.

    But, presumably, the more phonemes I define, the more accurately the lip syncing would be represented?

    Adding more frames can certainly help, but the "From Speech Text" mode is always going to have a limit to how accurately it can represent the mouth motion, as it's not based on the audio.
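    As a rough illustration of that limit (a simplified sketch, not AC's actual implementation), text-based lip-sync can only guess a frame from the letters in the line, roughly like this:

    // Simplified sketch of text-driven phoneme selection (not AC's actual code):
    // return the index of the first phoneme group that contains the character.
    int GetFrameForCharacter (char c, string[] phonemeGroups)
    {
        for (int i = 0; i < phonemeGroups.Length; i++)
        {
            if (phonemeGroups[i].IndexOf (char.ToUpper (c)) >= 0)
            {
                return i;
            }
        }
        return 0; // fall back to the closed-mouth frame
    }

    Two lines that are spelled similarly will therefore produce much the same mouth motion, regardless of how they'd actually be pronounced.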

    I also noticed weird lip-syncing behavior with the forced wait at the end of each line (to give the player time to read) while also using a [continue] inside a Dialog action to trigger an animation mid-sentence

    That may be a bug. If you can elaborate on this with steps/details, I'll give this a look.
