
Dialogues and Blendshapes

Posted: Sun Oct 03, 2021 4:04 am
by fallingstarint
Hello there,

My 3D avatar (made with VRoid) uses blendshapes to animate his face. I have "neutral", "angry", and "smiling", but also mouth-opening blendshapes (A, E, I, O, U...), each of which can be tweaked via the blendshape component by setting its parameter to 100 for maximum intensity.

I would like to use these blendshapes when NPC lines appear, but I don't know what route to take for this.

1) Set the parameter "angry" to 100 when the line is an angry one.
2) Set the A, E, I, O, U mouth shapes along with the words written in the NPC's line (that may be a stretch, I guess).

Any idea? Thank you in advance!

Re: Dialogues and Blendshapes

Posted: Sun Oct 03, 2021 8:52 am
by Tony Li
Hi,
fallingstarint wrote: Sun Oct 03, 2021 4:04 am
1) Set the parameter "angry" to 100 when the line is an angry one.
I recommend writing a cutscene sequencer command for that. There's a short tutorial series on cutscene sequences. If you're a tiny bit comfortable with scripting, custom sequencer commands are pretty easy to write. Here's a rough example of one to give you an idea of what it might look like:


// Custom sequencer commands must live in this namespace so the
// Dialogue System's Sequencer can find them.
namespace PixelCrushers.DialogueSystem.SequencerCommands
{
    public class SequencerCommandSetBlendshape : SequencerCommand
    {
        void Awake()
        {
            // Syntax: SetBlendshape(blendshapeName, value)
            string blendshape = GetParameter(0);
            int value = GetParameterAsInt(1);
            speaker.GetComponent<YourVRoidScript>().SetParameter(blendshape, value);
            Stop(); // Finish immediately so the rest of the sequence can run.
        }
    }
}
You'd then theoretically use it in a dialogue entry node's Sequence field, like:
  • Sequence: SetBlendshape(angry, 100); {{default}}
(The {{default}} tells the node to also play the Dialogue Manager's Default Sequence, which typically delays for a duration based on the text length.)
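For reference, here's one way the hypothetical YourVRoidScript.SetParameter assumed above could be implemented on top of Unity's standard SkinnedMeshRenderer blendshape API (the script name, method, and field names are placeholders, not part of the Dialogue System):

```csharp
using UnityEngine;

// Hypothetical helper assumed by the SetBlendshape sequencer command.
// Looks up a blendshape by name and sets its weight (0-100).
public class YourVRoidScript : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer faceRenderer;

    public void SetParameter(string blendshapeName, int value)
    {
        int index = faceRenderer.sharedMesh.GetBlendShapeIndex(blendshapeName);
        if (index < 0)
        {
            Debug.LogWarning("Blendshape not found: " + blendshapeName);
            return;
        }
        faceRenderer.SetBlendShapeWeight(index, value);
    }
}
```

With VRoid models, the face mesh's blendshape names may include prefixes (e.g. "Fcl_ALL_Angry"), so check the names in the SkinnedMeshRenderer's inspector first.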
fallingstarint wrote: Sun Oct 03, 2021 4:04 am
2) Set the A, E, I, O, U mouth shapes along with the words written in the NPC's line (that may be a stretch, I guess).
This is called lipsync. You can read some general info about lipsync in the Dialogue System here. The Dialogue System doesn't do lipsync itself, but it supports third-party lipsync solutions, typically via sequencer commands. Lipsync systems usually preprocess voice-acted audio to determine which mouth shapes the sounds correspond to. An asset called SALSA also has a 'TextSync' extension that can determine mouth shapes from the dialogue text instead of audio.

Re: Dialogues and Blendshapes

Posted: Tue Oct 05, 2021 7:22 am
by fallingstarint
Tony, thank you, you are honestly a credit to the Unity community for your assistance! I really appreciate it.

Re: Dialogues and Blendshapes

Posted: Tue Oct 05, 2021 9:04 am
by Tony Li
Glad to help!