Since the game uses generic "lip flaps" (as kyuven put it) instead of having the mouth movements match any one particular language, it's never bothered me that much.
"Ul'dah can keep their dusty markets, and their streets paved in silver and gold.
Limsa Lominsa keep your pirates, and your ships covered in musty mold.
My loyalty lies with Gridania, with the Moogles and the tree spirits of old." -The Forky Conjurer
Gets even worse these days when people want more and more detail. When the mouths only had a few points of articulation, going back and adjusting it was feasible (just not practical, since barely anyone cared).

FFX? Uh, no. FFXIII I believe had done so, though. In this modern day and age, it should be feasible to go through and adjust lip-syncing, but it's a tedious task that requires time. Time, time, time. Which is one thing MMOs don't have very much of in the production pipeline, at least when content is going through its localisation stages.
These days they're talking about using motion capture for mouth movements. Which...yeah isn't going to create the best environment for redoing lip movements to meet the script.
Again, this is mostly attributed to limitations with the characters' mouths.
The models are frighteningly close to the uncanny valley, so it's really hard to make them look natural. Again, because mouth articulation is an incredibly detailed process.
So...... Do you work in this business? You seem very familiar with this subject. I mean, I don't wanna offend; you've already taught me a lot. I'm just curious.
It's something you can easily google, or something you pick up on after playing games for a while and noticing details like these. TV Tropes has articles on things like this too.
Unless I'm lipsyncing for my life I can deal with it. I'm usually noticing dodgy graphics issues during cutscenes anyway. XD
I would rather have the main story line voice work completed.
Everyone out of the way, animator coming through. Let's clear up some misconceptions.
1: This game isn't lipsynced even in Japanese aside from certain scenes (such as the ending cutscenes in the 2.55 patch). However, the mouth movements stop closer to when the talking stops in Japanese, while in English characters may talk without the mouth moving and vice versa.
2: FFX wasn't lipsynced in English, however the actors tried to match the Japanese mouth movements with the English translation. This is what led to some of the dialogue coming off as strange - Yuna's actress for example is often ignorantly called bad because some of her scenes seem awkward, but she was really just trying to match Yuna's mouth flaps.
3: Most AAA Square Enix games starting with Kingdom Hearts are lipsynced to English though, for example FFXIII. If the budget is lower, English lipsync is one of the first things to get cut, as was the case with KH Re: Chain of Memories (there was also another complication in that case, though).
4: There are two primary ways to lipsync. You can animate mouth flaps and have actors try to roughly match them, or record voices first and animate to the voices. The latter is more expensive and professional.
5: However, there are multiple degrees of quality in lipsync of 3D models - it is NOT true that animators have to painstakingly animate the whole jaw by hand in every lipsynced cutscene, like some people have said. This is an almost entirely procedural process where animators only have to move some simple sliders around for each movement - how automated it is depends on the game. The FF13 series, for example, had almost completely automated lipsync, so every random NPC was lipsynced at all times. The more automated the process, the more mechanical and lifeless the character looks, however. The most time-consuming facial animation in FF14 is seen in the Hildibrand quests and the aforementioned ending sequence - these are some of the few places where they actually animated things by hand.
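To make the "simple sliders" idea concrete, here's a toy sketch of what slider-driven (procedural) lipsync looks like under the hood. This is not SE's actual pipeline or any real tool's API - every name here is hypothetical. The point is just that the rig exposes a handful of named blend-shape sliders, and the tool maps each phoneme to slider weights instead of anyone animating the jaw by hand:

```python
# Hypothetical viseme table: phoneme -> {slider_name: weight in 0.0-1.0}.
# A real rig would have far more sliders and phonemes; these are made up.
VISEME_TABLE = {
    "AA": {"jaw_open": 0.8, "lips_wide": 0.2},
    "EE": {"jaw_open": 0.3, "lips_wide": 0.9},
    "OO": {"jaw_open": 0.4, "lips_round": 0.9},
    "MM": {"jaw_open": 0.0, "lips_closed": 1.0},
    "REST": {},  # unknown/rest phonemes relax every slider to 0
}

SLIDERS = ["jaw_open", "lips_wide", "lips_round", "lips_closed"]

def keyframes_for(phonemes):
    """Turn a phoneme sequence into per-key slider values.

    Each phoneme becomes one keyframe; a real tool would also time the
    keys against the audio track and ease between them.
    """
    frames = []
    for ph in phonemes:
        target = VISEME_TABLE.get(ph, {})
        # Every slider gets an explicit value so untouched sliders relax.
        frames.append({s: target.get(s, 0.0) for s in SLIDERS})
    return frames

for frame in keyframes_for(["MM", "AA", "EE", "REST"]):
    print(frame)
```

The more of this table-driven work you automate (auto-generating phonemes from audio, say), the less an animator touches - and, as noted above, the more mechanical the result tends to look.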
The mouth movements in this game are just random: different types of mouth movements that don't sync up to any audio. This is also true for your character whenever you type something in linkshell, etc. You say something like "hi!" and your chara makes about 5-6 extra syllables. lol
So really, in the voiced cutscenes, they're moving their mouths much in the same way they are in the un-voiced ones...randomly generated movements.
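A toy illustration of that "hi!" effect - again entirely hypothetical, with made-up constants, not the game's actual logic. If the flap count is driven only by text length with a minimum floor, and the flap shapes are random, a three-character message still produces several mouth movements:

```python
import random

# Made-up mouth shapes for illustration.
FLAPS = ["open", "half", "closed"]

def lip_flaps(text, seed=None):
    """Generate a random open/close sequence sized roughly to the text."""
    rng = random.Random(seed)
    # Crude duration estimate: one flap per ~2 characters, minimum 4 flaps.
    # The minimum is why a short "hi!" still gets extra mouth movements.
    n = max(4, len(text) // 2)
    return [rng.choice(FLAPS) for _ in range(n)]

print(lip_flaps("hi!"))  # 4 flaps for a 3-character message
```

The shapes never correspond to any syllable in any language, which is exactly why the same animation "works" for Japanese, English, French, and German audio alike.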
I don't work with SE, but being familiar with the animation industry, I can almost certainly say that this was done for simplicity's sake. This isn't a console release, like FFXIII, where the English localization actually went back into the motion capture studio to re-record mouth movements to match the English audio. This is an MMO, and as such, they cut corners where they can.
If they did as requested and synced the mouth movements to the audio, which one would it be? Remember, everyone in the world uses the same game files, just different "text" based on the language of choice. If they sync to JP, English, French and German lips would still be off...etc for the others.
Syncing all 4 languages would mean much more development time...heck just doing motion capture for ONE language would be time consuming enough. They would have to go in and actually change the game files...meaning each language would need to have their own client..or have extra files in there somewhere. It's not as easy as just selecting the audio option in your settings. Pretty sure this is why they left it the way it is currently....with random mouth movements that don't match up to any language.
Just a little tidbit: in version 1.0, the only selectable audio was English, with the subtitle text changing according to client language (even for the Japanese, imagine that!). Since only English was recorded, the lip movements DID match their speech. It's most visible in the starting cutscenes for the three cities (Gridania's has a few closeups of Yda when she's talking). Take a look on YouTube if you're bored.
On the subject of actually animating lip movements themselves, this is a fairly simple process for a big company like SE. They need only set up their motion capture studio for facial capture, clean up the data, and apply it to the 3D models. If done right, there are hardly any manual adjustments that need to be made.