
PBR Secondary Research

Facial Animation in Games

Facial Animation of L.A. Noire (MotionScan)

L.A. Noire (2011) is a game developed by Team Bondi and published by Rockstar, containing some of the most complex facial animation featured in games to date.

Goombastomp Article on MotionScan.

'MotionScan' is the name of the facial capture software used during the development of L.A. Noire. It belongs to Depth Analysis, a sister company of Team Bondi. "Every character in Noire uses MotionScan. Over 400 actors were filmed during the games production."

A 360-degree capture of each actor was taken using 32 cameras shooting at a resolution of 2K x 2K, meaning one second of footage (30 FPS) came to around 1GB. Because of this, the cost of production for L.A. Noire skyrocketed: with one camera costing $6,000, the facial capture set-up alone came to $192,000, and the disc space needed for the gathered footage explains the 3 discs the game shipped on.

While MotionScan is a revolutionary piece of software, capable of creating facial animations faithful to each subtle movement on an actor's face, the cost of capturing and using these animations proved far too expensive and impractical for implementation in other games.
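The cost and storage figures above can be sanity-checked with some quick arithmetic. The camera count, camera price and per-second footprint are the figures quoted above; the derived values are my own back-of-envelope estimates:

```python
# Back-of-envelope figures for the MotionScan rig described above.
# Assumed inputs: 32 cameras at $6,000 each; ~1 GB per second of 30 FPS footage.

CAMERAS = 32
COST_PER_CAMERA = 6_000   # USD, as quoted above
GB_PER_SECOND = 1.0       # approximate capture footprint at 30 FPS
FPS = 30

rig_cost = CAMERAS * COST_PER_CAMERA          # total camera cost
mb_per_frame = GB_PER_SECOND / FPS * 1024     # storage per captured frame
gb_per_minute = GB_PER_SECOND * 60            # storage per minute of capture

print(f"Rig cost: ${rig_cost:,}")             # → Rig cost: $192,000
print(f"Per frame: {mb_per_frame:.1f} MB")    # → Per frame: 34.1 MB
print(f"Per minute: {gb_per_minute:.0f} GB")  # → Per minute: 60 GB
```

At roughly 60GB per minute of raw capture, it's easy to see why the footage had to be heavily compressed and still filled three discs.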

Late 90's / Early 2000's Facial Animation - Final Fantasy VII, VIII, X ​

FFVIII (43:18 - 44:13) saw an increase in quality, bringing fully modelled faces into its CG cutscenes. Eyes and mouths were no longer textures, allowing more complex expressions to be portrayed. I feel FFVIII broke the barrier that FFVII couldn't with its cutscenes; while VII portrayed emotion, it doesn't do so to the standard that VIII does.

FFVII's facial animations during CG scenes were just textures on the model's face that would change appropriately. An example of this is the scene in which Sephiroth kills Aeris.

 

This scene from late in the game shows off mouth movement accompanied by subtitles.

Final Fantasy X (FFX) was Square's debut Final Fantasy game on the PS2, featuring proportionate character models alongside facial animation and speech. Before FFX, previous Final Fantasy games from the PS1 era (namely FFVII and FFIX) were cartoony in appearance, with shorter, blocky character models that didn't allow for facial animation within regular cutscenes.

While the facial animation in this game was better than in previous instalments of the series, it still falls flat due to a lack of expression and emotion: emotional scenes feel awkward and wooden. In-game facial animations only feature minimal brow movement, little to no eye movement, and puppet-like mouth animations. The character models do switch to a version with a more detailed face whenever a close-up is shown, however.

As a counterpoint, the full CG scenes in FFX (like those of previous instalments) do offer more in the way of facial expression, though nothing that will blow anyone away these days, at least. Compared to the in-game animations, eye movement can be seen in these scenes, along with more features of the face (laugh lines, more articulate brows, wider smiles etc.), not to mention hair simulation.

Late 90's / Early 2000's Facial Animation - Sonic Adventure

Late 90s and early 2000s games really show just how far facial animation has come in the past 18 to 20 years.

Sonic Adventure (1998) for the Dreamcast had some interesting facial expressions and lip syncing, being Sega's first mainline Sonic the Hedgehog game to feature full 3D. The fact that these expressions were so exaggerated, and the lip syncing so terrible, may be the reason some remember this game.

Characters' mouths would appear as if they were trying to move in more than one direction at once, ending up looking like a jumbled mess. This, paired with the eyebrows also moving at the same time (in many cases), made the faces appear as if they were made of rubber, with all the stretching going on.

Despite the expressions being so exaggerated, this could very well fit the art direction of the game. It's not exactly a realistic game; after all, cartoon-like art direction calls for cartoon-like movement.

Face rig used for Bayonetta with animated example

The Facial Features of Bayonetta

Bayonetta's (2009) facial animations were all done by hand, using controllers on the facial rig, as seen in the blog post and video below. While the facial animations were hand-keyed, motion capture was used for Bayonetta's body movement in the game.

The Face of Bayonetta, a blog post by Masanori Takashima, lead facial animator for Bayonetta, details the process he used to create the intricate facial animations for the lead character.

Masanori describes that he was in charge of creating the facial controls as well as the facial animations throughout production. He details: "the first step is to set the character's design.", meaning the facial animator needs to know exactly who the character they're dealing with is: are they sassy, sexy, cute, chicken? Once the animator knows these intricacies, only then can they begin animating.

Masanori goes on to describe that the team focused much more on Bayonetta's character, as they wanted to convey her "sense of femininity and grace" correctly: "she should easily be able to kill a man with her eyes alone.". Once these character traits were solidified, Masanori could then work alongside the director, Hideki Kamiya, to achieve their standard of animation.

Seeing how important a role a character's personality plays when animating facial features is a crucial piece of information that I have learnt from this research.

Konami's Fox Engine: Behind the Scenes

Similar to L.A. Noire, Konami also used motion capture in the production of Metal Gear Solid V: The Phantom Pain (2015). Behind-the-scenes footage showcases their studio, in which they acquire photos of the actors; these photos are then used to create facial animations as well as full 3D models for use in the game.

Gunfire Games: Darksiders III

Being a smaller studio, Gunfire Games doesn't use motion capture technology and instead animates by hand. Their most recent game, Darksiders III (2018), delivers convincing facial animations and lip syncing, which serves as a testament to the abilities of Gunfire's animators.

This shows that animation in games is not confined to motion capture, and that compelling character animation can still be created by hand.

Facial Animation in Film

Lord of the Rings

Lord of the Rings (LotR) uses motion capture technology throughout the franchise for numerous characters, namely Gollum. The video below goes into detail on the creation process for Gollum, as well as the goals set for the character: how Gollum should act, his interactions with other characters etc.

Weta Digital is the company responsible for creating Gollum's onscreen VFX for the LotR and Hobbit films.

Andy Serkis performed both the voice and motion capture for Gollum in LotR. As described in the video below, "Andy's physical appearance on set became a key point in how Gollum was developed as a character." Being able to capture Andy's performances was the breakthrough needed to push the character even further.

By the production of the Hobbit films, technology had advanced since the creation of LotR, so more detail could be added to Gollum. This consisted of lighting effects and the movement of skin against his bones. Research into lighting effects was conducted: how light would act when bouncing off Gollum's eyes, skin and hair. Subsurface scattering is the lighting technique used to give Gollum the "fleshy, believable, appearance you see in real life or on screen." This technique bounces some light off the character's skin while also having some light absorbed into the skin, to then be released at different angles.
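Weta's production pipeline is far more sophisticated than anything I could show here, but the core idea (some light penetrating the skin and re-emerging nearby) can be illustrated with "wrap lighting", a common cheap approximation of subsurface scattering in real-time rendering. This is a minimal sketch of that general technique, not Weta's actual method:

```python
def lambert(n_dot_l):
    """Standard diffuse shading: light cuts off hard at the terminator."""
    return max(n_dot_l, 0.0)

def wrapped_diffuse(n_dot_l, wrap=0.5):
    """Wrap lighting: lets illumination 'wrap' past the terminator,
    mimicking light absorbed by the skin and re-emitted at nearby points.
    wrap=0 reduces to plain Lambert; higher values look fleshier."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# A surface point facing slightly away from the light:
n_dot_l = -0.25
print(lambert(n_dot_l))          # → 0.0 (hard black with plain diffuse)
print(wrapped_diffuse(n_dot_l))  # → ~0.167 (soft, skin-like falloff)
```

The difference is visible exactly where it matters on a face: the shadowed side of the nose and cheeks stays softly lit instead of clipping to black.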

More facial features were added; fine hairs on his face, for instance. These changes allowed for a more realistic portrayal of Gollum through the use of "proper muscle systems, skeletal systems, facial expressions."

A new system named "Tissue" was created to simulate movement in a character's skeleton and push it outwards to the muscles and skin. "With a character like Gollum, who has little clothing on, you can see the muscles and ribs moving under the skin."

Animator's Survival Kit

Why I Chose the Animator's Survival Kit

The Animator's Survival Kit is a book written by Richard Williams. I chose this book as the basis for my research because it splits each aspect of animation up, describing each in detail. The book was recommended to me by my tutors; another student in the class was also using it for research.

The book goes over the principles of animation (timing, squash and stretch, exaggeration, anticipation etc.) as well as more complex topics (walking, facial expressions, who the character is: personality). The author also talks about personal experiences from working in animation. Topics in the Animator's Survival Kit are easy to follow, as they're written informally while staying informative.

Expressions/Stretching the Face

This section describes how the stretching of the face can easily be overlooked when creating expressions.

Examples given are fright and shock, where working from the eyes downward (and vice versa) helps to build an exaggerated expression through facial stretching. Another example covers chewing, and makes sure to ask "Who is chewing?", which gives the idea that different characters will chew differently from one another (depending on social status, personality, physical appearance etc.).


Lip syncing + expressions

The Animator's Survival Kit provided me with information on facial animation. Richard Williams describes that lip syncing/phrasing should not have the mouth animated for every sound; rather, the mouth should hit the main pronunciations and 'blur' over the other syllables. "We don't arr-tick-yoo-lateh every little syllable, letter and pop."
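Williams' advice could be sketched as a simple filtering step over a phoneme track: key only the visually strong sounds and let the in-betweens blur the rest. The phoneme names and the 'strong' set below are purely illustrative, not taken from any real lip-sync tool:

```python
# Hypothetical sketch of keying only the main pronunciations.
# Open vowels and lip-visible consonants get keys; the rest are blurred over.
STRONG = {"AA", "EH", "OO", "M", "B", "P", "F", "W"}

def pick_keyframes(phonemes):
    """phonemes: list of (frame, phoneme) pairs for a line of dialogue.
    Returns only the frames worth setting a mouth key on."""
    return [(frame, p) for frame, p in phonemes if p in STRONG]

# "Hello world" as a rough phoneme track (frames are illustrative):
line = [(0, "HH"), (2, "EH"), (4, "L"), (6, "OO"),
        (9, "W"), (11, "ER"), (13, "L"), (15, "D")]
print(pick_keyframes(line))  # → [(2, 'EH'), (6, 'OO'), (9, 'W')]
```

Eight sounds collapse into three mouth keys; the animator lets the in-between frames carry the remaining syllables, which is exactly the 'blur' Williams describes.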

How This Could Impact My Work (Animator's Survival Kit)

Knowing I don't have to articulate each syllable a person speaks will definitely help during animating, as it will make the motions more fluid: less artificial and more human. The mouth won't be flapping randomly while speech plays over the animation.

Animating from the eyebrows downward (brows > eyes > mouth/jaw), or vice versa, for expressions or reactions applies anticipation and exaggeration (anticipation through the build-up to the final, exaggerated expression). When doing facial animation, I will make sure to use this technique to add more character.
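The top-down build described above could be sketched as staggering each feature's key a few frames after the one before it. The feature names and the frame offset here are assumed values for illustration only:

```python
FEATURES = ["brows", "eyes", "mouth_jaw"]  # animated top-down

def stagger_keys(start_frame, features=FEATURES, offset=3):
    """Return the frame at which each facial feature hits its pose,
    offsetting each one to build anticipation toward the final expression.
    Pass the features reversed to animate from the mouth upward instead."""
    return {f: start_frame + i * offset for i, f in enumerate(features)}

print(stagger_keys(10))  # → {'brows': 10, 'eyes': 13, 'mouth_jaw': 16}
```

The brows lead, the eyes follow, and the jaw lands last, so the face builds toward the exaggerated pose rather than snapping to it all at once.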

© 2017 by Alex Robinson. Proudly created with Wix.com
