
Specialist Project Area
Intentions for this Project
My intention for the Specialist Project Area is to further develop the facial animation I was working on in the previous PBR unit, with the addition of character movement (body language). I want to do this so that I can take the work into a game engine, such as Unreal, to be used as a cut-scene. With this project I want to create multiple near-finished products.
My aspiration for the end of this project is to have a presentable piece of artwork that will be worthy of display within my portfolio. By the end of this project I want to have a better grasp on animating character movement as well as body language. I feel as though I am weaker in this area of animation - being able to animate a character fluidly, together with their facial expressions, is my largest aspiration for this project. I also want to create staging that complements the context of the animation, and to utilize different camera techniques as much as possible - for example, pan shots, slow zooms in/out, and focusing on near/far objects.
Project 1: Creating a Cut-scene for Unreal Engine
The first project I will produce will be a cut-scene either between two characters, or a monologue, with the character fighting against their thoughts. This monologue could be presented as the character looking into a mirror or with another character off screen.
Before I begin, I need to go back to the basics of Unreal Engine 4 and research the pipeline for bringing animations and props into UE4. I will also need to know how to use and manipulate cameras in UE4, as well as how to create a cut-scene in Unreal. As this project will be used to further advance my animation skills, I will need to find a place where I can obtain 3D props for free - I don't want to waste time creating props when I could use that time for animation.
My aspirations for the end of this project are to create a cut-scene that will be presentable within my portfolio.
(01/03/19) Research
During research I will be using Google to find out how I will export/import animations and models for Unreal Engine 4, as well as how to animate cameras in UE4. I will also be searching for free 3D assets to be used in the animation's environment.
Search terms will include: "Free 3d assets"; "Export animation blender to unreal"; "Import animations unreal engine 4"; "Import model unreal engine 4".
Research for this project will be based mostly on how to import parts of my work into Unreal Engine 4, as well as where I can source props and how to create camera motions for/inside of Unreal.
Research will be as follows:
- Resources for free props
- Importing animations to UE4
- Importing props/stages into UE4
- Creating camera animations in UE4
Each piece of research will be paired with some tutorials as visual reference (if appropriate).
3D Prop Resources
I have identified multiple sites that offer free 3D prop downloads, including Turbosquid, cgtrader and Free3D. Of these sites, Free3D offers the largest selection of props, though all of them are single props only (bins, barrels, shovels etc.). From what I've seen on Free3D, there aren't any free downloads available for scenes/stages. Turbosquid's free selection is dwarfed by Free3D's but is seemingly of higher quality. There seem to be more buildings in Turbosquid's library compared to Free3D, so this would be a good place to download any buildings. The last site is cgtrader; like Turbosquid, its models seem to be of a high quality. It might be a bit of a chore finding exactly what I need though, as some priced models are also listed when searching for free models. However, the first page features a free low-poly asset pack as well as a corner-shop type building surrounded with trees and a bus stop.
I believe that by combining models from these sites, I will be able to create a presentable stage.
Unreal Engine 4: Importing Animations
Unreal's official docs have all of the information I needed for importing animation to UE4 from Maya and Max. These docs go through the entire workflow for importing an animation into UE4, beginning with exporting the animation from your software of choice (Maya, Max), then covering importing the animation either with or without its skeletal mesh.
This video goes over exporting animation (from Maya) and importing it into UE4. This tutorial uses a slightly different method to that in Unreal's docs: the animation is baked down before export (a step that isn't mentioned in Unreal's docs), and the tutorial notes that you don't want to export any of the curves (coloured splines) - only the joints. The rest of the tutorial is similar to Unreal's docs page: import the mesh without the skeleton first, then import the skeleton with the animation on it and apply it to the mesh.
This tutorial is a longer, more informative walk-through on how to import animations to UE4. They go through the process of using the Retarget Manager to retarget the animations of the imported rig to the default UE4 model. After this, more animations are imported and named appropriately.
This tutorial focuses more on retargeting animations to a similarly proportioned mesh, but still briefly covers the basics of importing animations into UE4. In the future, knowledge of retargeting animations may be useful, as I now know that I won't have to re-animate a similar gesture for a second rig.
Of these two tutorials, I found the first had the most useful information for what I was looking for - the second contained information I didn't expect to find and may find useful in the future.
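As a side note for automating repeat imports, Unreal also ships a Python editor plugin (experimental in UE 4.x) that exposes the same FBX import options the docs walk through in the UI. Below is a minimal sketch only - the file path and destination folder are hypothetical, and the plugin would need to be enabled first:

```python
import unreal

# Mirror the options from the docs: skeletal mesh plus its animation,
# leaving materials/textures to be handled separately.
options = unreal.FbxImportUI()
options.set_editor_property('import_mesh', True)
options.set_editor_property('import_as_skeletal', True)
options.set_editor_property('import_animations', True)
options.set_editor_property('import_materials', False)

task = unreal.AssetImportTask()
task.set_editor_property('filename', 'C:/Exports/idle_anim.fbx')   # hypothetical path
task.set_editor_property('destination_path', '/Game/Animations')   # hypothetical folder
task.set_editor_property('automated', True)
task.set_editor_property('options', options)

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```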
Unreal Engine 4: Importing Props/Stages
This video explains how to import models (without skeletal meshes) into UE4. A point is made to not have 'Skeletal Mesh' selected in the 'FBX Import Options' window (as the model doesn't have a skeleton to begin with). Auto Generate Collision is selected to automatically give the model collision when it is imported. 'Import Materials' and 'Import Textures' are deselected; the model is imported to the Geometry folder. Textures are imported to the 'Textures' folder after the mesh has been imported.
Double-clicking any mesh thumbnail will open a 3D preview of the mesh.
This video goes through exporting models from 3DS Max and importing them into UE4. The model is exported as an '.fbx' file. In the export settings 'Embed Media' is selected and all other options are ignored. 'Embed Media' has the .fbx file include all textures applied to the model, so when importing the model to UE4 you won't have to reassign any textures.
New folders are made in the UE4 project to assist with finding any models, textures, animations etc. In this case a 'Models' folder and a 'CardboardBox' folder are created.
I need to make sure 'Export Smoothing Groups' is enabled when exporting in 3DS Max, which is explained later in the video.
If any edits are made to the textures or materials, click apply and save so the changes are applied to the model and textures.
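Blender's FBX exporter has equivalents to these Max export settings, which will matter later when I export from Blender. A minimal sketch of a scripted export (the output path is hypothetical; 'FACE' smoothing and 'COPY' path mode roughly correspond to the smoothing-groups and Embed Media options described above):

```python
import bpy

# Export the selected props to FBX with smoothing data and
# textures packed into the file, roughly matching the Max settings.
bpy.ops.export_scene.fbx(
    filepath="C:/Exports/stage_props.fbx",  # hypothetical path
    use_selection=True,        # only the props I selected
    mesh_smooth_type='FACE',   # counterpart of 'Export Smoothing Groups'
    path_mode='COPY',
    embed_textures=True,       # counterpart of 'Embed Media'
)
```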
Creating Camera Animations in Unreal Engine 4
Tutorial detailing how to create a camera animation in UE4. Right clicking in the content browser > Miscellaneous > Camera Anim will create a 'new camera anim'. The Matinee window will pop up when the camera is selected, which acts as the timeline for the camera animation. A movement track is added to track the camera's movement; the track is then split into 'Translation and Rotation'. Frames are deleted except those that apply to the axis that is to be manipulated.
'Enter' is used to add another key to the timeline.
The curve editor above the timeline is used to animate the camera in this tutorial; I need to look into whether I can animate the camera with its axes in UE4 or not. Depending on what I find, I might need to animate the camera beforehand in Maya, meaning I will also need to look into whether I can import camera animations.


Searching specifically for creating cutscenes in UE4 brought me to this video. After questioning whether it was possible in the last piece of research, I found that it is - along with a video explaining how to go about animating a cutscene.
The first step was to create a 'Camera Actor' by right clicking and selecting it from the menu, which places a camera in the scene. Matinee is then opened from the top of the viewport and a camera group is added to the window. Camera groups add a movement and an FOVAngle track to the Matinee window. Using the camera's gizmo, it is moved in the viewport and a keyframe is added by pressing 'Enter'.
A box trigger is used to trigger the animation of the camera moving, so when the player touches it, the animation applied to the camera will play out. For the animation to work, blueprints need to be set up so the camera animation will trigger. When a second camera is added to the scene and animated after the first one, the view will automatically switch between cameras. I had been questioning how I would swap between camera views for the animation, and whether I would have to render out one camera view, then the other, and so on. Using the cameras in Unreal, I will be able to have them swap in real time as long as I animate them accordingly.
Using the Research
In this section I will be using the research I gathered above in order to become familiar with the process of bringing animations and meshes into Unreal Engine. Animating cameras and re-assigning skeletons is another goal for this section.
(01/03/19) Unreal Engine 4: Importing Props/Stages
For this exercise I will be importing the assets I created for the Team Animation Project into Unreal Engine 4. I will be using the processes I have identified while researching how to import meshes into UE4.
In the video below I go through importing meshes into Unreal Engine 4.
Conclusion
During this exercise I intended to identify a method of bringing models I created for the Team Animation Project to Unreal Engine 4. This was done so I could practice the workflow for importing meshes to Unreal from 3DS Max.
This exercise was successful as I managed to achieve the goal of taking the models from 3DS Max to Unreal Engine. The only issue I encountered was that the scale of the Duck and Chicken models was a lot smaller than I anticipated. In the future I will make sure to correct any scaling issues before exporting the models.
Next I plan to take animations I've previously created in Blender to Unreal Engine 4.
(01/03/19) Unreal Engine 4: Importing Animation
Compared to importing models, importing animations to Unreal was a major headache. I first tried to import one of my Mery Rig animations from Maya and found that Mery doesn't have a skeleton, only curves - which Unreal doesn't support. This meant I couldn't export any joints for Unreal to apply to her mesh. I could import her mesh, however it would import each part of her separately (left leg, right arm, right leg etc.). I tried selecting them all and dragging them into the viewport; the parts aligned themselves correctly, however she was missing her torso.
I did manage to achieve a result when exporting animations from Blender; however, the entirety of the animation wasn't imported - just the shape key animation (facial animation). I also managed to import the skeletal animation of another project - this time I was left with no shape key animation. After trying to export the shape keys for this second animation, I found that they would import into Unreal, however the animation for them was not there.
This video shows shape key animation working in Unreal.
Another issue I came across was that the character mesh was the correct size while the skeleton was not, meaning I couldn't see the animation as it was too small to view. I tried changing the project file's units in Blender from 1.0cm to 0.01cm as suggested by answers on multiple forums; this didn't change anything.
The picture to the right shows the tab that the animation should be playing in. As the scale was incorrect, the animation is too small to be viewed. The timeline works and has the correct number of frames.

After another 2 hours of trying to get an animation into Unreal, I finally managed it. I found a tutorial on how to import animations to Unreal from Blender; after following it I found that it didn't work and Unreal would crash. I checked the video's description and found that the method no longer worked.
I looked around more to see if I could identify a solution and found one on the Unreal forums. One of my problems was that when I changed the unit size to 0.01, I didn't rescale the rig by x100 in each of the X, Y, and Z axes. After doing this I exported again and the animation still wouldn't play in Unreal. I went back into Blender, switched into Pose Mode and exported. This worked and the animation played in Unreal.
Another thing to note about this process is that I need to change the project's scale before applying any animation to the Armature; if I rescale (x100) the armature after animating, the locations in the animation will be different. I made a simple animation that only took 10 minutes to produce. The only issue is that the eye texture doesn't apply to the model for some reason.
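To make the fix repeatable, here is my reading of the forum fix as a minimal script sketch (Blender 2.7x API; the armature name is hypothetical) - not an official workflow:

```python
import bpy

# 1. Set the scene's unit scale BEFORE animating anything.
bpy.context.scene.unit_settings.scale_length = 0.01

# 2. Compensate by scaling the rig up x100 on every axis...
rig = bpy.data.objects["Armature"]  # hypothetical object name
rig.scale = (100.0, 100.0, 100.0)

# 3. ...and apply the scale so it is baked into the rest pose.
bpy.context.scene.objects.active = rig
rig.select = True
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
```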
Conclusion
In conclusion, I managed to find a method of taking animations from Blender to Unreal Engine 4. I spent a significant amount of time doing trial and error with the rig and came across a post on the Unreal forums that solved the issue of bone animations not playing or being visible in UE4. With this knowledge I will be able to create animations for UE4 in Blender at a much faster rate; I know to apply the scale before animating the armature (0.01 World Scale, then x100 on each axis of the rig) for the animations to work in UE4. This may only apply for bone rigs and not curve rigs, so I will need to identify a method of getting curve rig animations into UE4, if at all possible.
I managed to achieve my goal of importing both shape key animation and bone animation to Unreal Engine 4. I found the whole process of using Blender for this to be a little tedious and will have to try taking animations from Max/Maya into Unreal in the future to compare workflows.
Now that I know the settings I need to use in order to have my exported animations play in UE4, I will try to create a stage and get the key poses for the character complete - then get them into Unreal. This will be a future project for after camera animation testing in UE4.
Next, I will aim to create camera animations in Unreal Engine and see if I can have one camera cut to another in real time.
(01/03/19) Unreal Engine 4: Animating Cameras
The last thing I wanted to achieve for this part of my research was to animate a camera inside of Unreal. To do this I loosely followed the research I conducted yesterday, using the Matinee system to animate the cameras. I had trouble getting the animation of the model to play at the same time as the cameras, and found that I needed to add the Skeletal Mesh to the Matinee timeline. Once the skeletal mesh was added, I had to select the animation track and add a keyframe to it (Enter). Once a frame was added, a box containing all the animations on the skeleton appeared; clicking one adds it to the timeline.

Screenshot of the animations that can be added to the Matinee timeline.
This is the tutorial I referenced for animating the character's skeletal group in the Matinee editor.
With the animation of the character now in the Matinee timeline, that animation will play when the timeline is scrubbed.
In order to animate the cameras, I added a New Camera Group in the Matinee editor by selecting the camera in the view port, right clicking in the Matinee window and selecting the option. I was then able to key the camera (in Matinee window) and animate it in the view port.
Character and camera animation working when scrubbing timeline in Matinee window
Conclusion
For this task, I wanted to animate cameras zooming in on the animation I previously brought into Unreal. This exercise was done so that I could learn how to animate cameras in Unreal and have my animation play at the same time the cameras moved. This was important because, without learning that I needed to add the animation to the Matinee editor as a SkeleMeshGroup, the animation wouldn't have played while the cameras moved.
I found the Matinee editor tricky to use at first, as sometimes I wasn't able to select a new group even though I was selecting the camera or model in the scene. Once I understood how to add new groups (camera group, skele mesh group), I was easily able to apply the animations of the skele mesh in the Matinee editor. Animating the cameras was simple, just like in any other program (move in scene, key, move again etc.).
Now that I know I need to add the animation for the rig to the Matinee editor for it to play in the scene (when 'Play' is pressed), having my animations play in the view port of UE4 will be a much faster process and I can see myself using this method again in the future.
Next I plan to source audio for the cutscene I plan to animate and then begin animation on the cutscene. This animation will eventually be brought into Unreal.
(02/03/19) Sourcing Audio for my Animation
As I am going to be animating a scene that will require either two characters or an inner monologue, I will require an audio clip that complements this idea. In order to find one I searched on the 11secondclub and found a clip I think will be perfect for this concept: the current March competition's audio clip is exactly what I am aiming for.
Transcript of audio clip:
Voice 1: You’re having a breakdown, a stress response. Your power is kicking in to save you. It created me. You did.
Voice 2: And you’re British?
Voice 1: Like I said, I’m your rational mind.
The audio comes from the film Legion. As this appears to be a conversation between a person and their 'rational mind', this clip fits the monologue concept I proposed at the start of this page.
Conclusion
During this task, I planned to obtain audio that fits the idea I had for the cutscene. This was done so that I could begin the animation process for the cutscene. I needed this audio as a reference for when the character should move or when new key poses should be added.
I didn't look around much when sourcing an audio clip for the animation. I feel I will need to find more clips so that I have multiple to fall back on in case this audio doesn't work out. I can find more audio on the 11secondclub, or even take snips of audio from videos and animate those out of context.
In the future I plan to source more audio clips so I have a larger library to choose from, rather than just going with this one clip. Next I will produce a storyboard for this audio clip.
(04/03/19) Storyboarding & Stage Concept
Here I will post storyboard and stage concepts for the above-mentioned audio clip. As 'power' is mentioned, I get the idea that the animation should be set in a fantasy-esque environment. Using the resources on cgtrader, a site I found that hosts free 3D resources, I downloaded a pack of assets containing rocks, trees, boxes, weapons, gems, and chests.
Storyboard Concept



This is the storyboard concept I've created for the above dialogue. It shows, in brief detail, the initial surroundings of the character, with gems coming out of the wall; crates and barrels also surround the character. I have a feeling the actions on 6 and 6.1 won't work: after listening to the audio, the person speaking is out of breath and sounds as though they're in pain. I might need to change the motion and expressions here; I'll try what I've put on the storyboard before I decide, however.
Conclusion
During this exercise, I intended to create a storyboard for the audio I sourced in the previous post so that I could use it as reference during the animation process. This storyboard will be necessary for referencing the character's poses as well as camera angles and shot types.
This exercise was a success as I managed to produce the storyboard. I noticed that frames 6 and 6.1 might not work, as when I listened to the audio again I found that these motions might not line up with the way the line is spoken. The line is spoken somewhat slowly with pained breathing before the speech - I may need to change the poses for these frames.
Next I intend to create the stage for the animation using the assets I downloaded from cgtrader.
(06/03/19) Creating Stage for the Cut-scene
I will be using the assets from the Stones and Buried Treasure pack to create a scene for the animation. I'm intending for the backdrop to be a curved wall of rigid stone with some of the gems protruding from these stones.
The character will be sat with their back to the wall, supporting themselves with some crates and barrels. I would like to have the gems glow and light their immediate area (the wall behind them, maybe the character's face somewhat). This is more of an afterthought, as I should focus solely on animating before anything else; though if I have time, it's definitely something I'd like to add to the animation.
After creating the concept stage for the animation, I found that the rig I was using was much bigger than the stage. The assets I downloaded were tiny compared to the rig I wanted to use. I came to the conclusion that changing the scale of the stage before exporting it was the best way to solve this problem, rather than exporting it and then scaling it with the rig in the scene.
I found that a 0.5 scale for the stage seemed to be the correct size, as with this scale the model looked in place - the only issue was that the boxes and barrels were too small, so I quickly scaled them up a bit so they wouldn't look so small against the character rig. I found scaling objects in Blender to be a pain; sometimes only one object would scale even though I had everything selected. Scaling in Blender is different to Max, in that I have to press 'S' and type numbers rather than simply selecting everything and dragging the mouse from the center of the gizmo.
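Since scaling many objects through the UI was fiddly, a loop over the selection would sidestep the problem of only one object scaling at a time; a minimal sketch (the 0.5 factor is the one I settled on above):

```python
import bpy

# Scale every selected stage object around its own origin,
# instead of relying on 'S' + typed values per object.
for obj in bpy.context.selected_objects:
    obj.scale = obj.scale * 0.5
```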
When I tried to put the model in Pose Mode, an error would appear, meaning Pose Mode couldn't be entered and I couldn't start posing and animating the model. I could have accidentally scaled the model without realising, or there may be too many objects in the scene. If I'm unable to solve this issue I will likely have to use a pre-built stage; before that I will need to see if I can import the stage to UE4. Using a pre-built stage means that I will be able to focus on animation from the start, and they're easily sourced from the sites I found before (cgtrader, Free3D).

Conclusion
During this exercise, I intended to create the stage I will use for the cutscene animation using the assets I downloaded from cgtrader. This was done so I could begin the animation process for the cutscene.
The creation process for the stage was a success: I managed to put the assets together to create a cave environment. However, I encountered issues with 'Pose Mode' not working, so I was unable to begin animating. I plan to look for ways to solve this issue so that I can begin animating; if I'm unable to find a solution I will use a pre-built stage in place of the one I made, as I believe the issue is with the stage I created.
As of 07/03/19 I have been unable to solve the issue of Pose Mode failing to work, though I will continue looking for a solution. If I am unable to find a solution, I will use a pre-built stage and redo the storyboard to fit with the new environment. In the meantime I will begin planning for the other outcomes for the Specialist Project and revisit this at a later date.
(08/03/19) Rough Animation Test
Today I will be producing a rough animation for the cut-scene. This animation will consist of just key frames with little to no in-betweens.
Before I started the key poses, I created a much simpler stage out of a few of the assets I downloaded and a modified cube. I extended the sides of the cube outwards to create more room in the stage, then extruded it forwards, down the sides so the camera wouldn't look into the void when doing side shots.
After I completed the first key pose and tried a test render to see how it would look, I found that the entire render was white. I looked around; some answers suggested that the camera was pointing into the void, so I adjusted the Focal Length. This didn't solve the issue. I tried using completely new cameras, deleting any lights in the scene and re-opening the project - none of which worked either.

Thinking the issue was with the stage, I went looking for a pre-built stage to use. I found one I liked, downloaded it and imported it into Blender. I wanted to do a test render to see how it looked with the textures enabled; doing this slowed my PC down so much it practically died, so I had to restart it. I then decided not to use a pre-built stage and to make one of my own again, this time much simpler. As you can see above, I kept the crates, barrels and chest, opting to remove the rocks that made up the wall and use an edited cube instead. I came to the conclusion that a simpler-looking stage might be better to use in the end; as long as the animation is complete, more intricate staging can wait.
After trying to solve this issue for close to an hour, I selected all of the objects in the scene (except for the rig) and exported them as an .fbx file following this step-by-step (scrolling down to the section for Blender). I then imported them into the rig's file and did a test render, which worked and rendered as usual. I then copied the NURBS curve information for the pose to the new file. I never figured out what was wrong with the other file, but now I am able to render without any issues. Once this was done I started putting in the key poses and camera movement for the animation. I added the audio I was going to use for the cut-scene to Blender after doing the third pose; although it should have been there from the start, I forgot and quickly added it. The audio was added so I could reference when a pose should change from one to the other. The audio was sourced from the 11secondclub's current competition.

Shot of a working render.
Putting in the rest of the key poses went smoothly with few issues. I used the storyboard I created previously as reference for the camera positioning and tracking. The only thing I had issues with were the constraints of this rig: I found that neither the eyes nor the hands could close, making some poses lose impact. I feel this will impact the quality of the final animation.
When all the key poses were complete I went to start rendering the animation, only to find that it was rendering in white again. Looking at the Blender files I had open, the one I was animating in was the original file - the one that rendered in white. Searching around for a solution, I came across the 'Append' feature, which allows me to select objects and animations from one file and attach them to another. I found the solution by following this tutorial, which showed me exactly how to append my animations and objects to the file with a working render.
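For reference, the Append step can also be done from a script; a minimal sketch, where the .blend path and datablock names are hypothetical (the directory has to point inside the .blend at the datablock type, e.g. /Object/ or /Action/):

```python
import bpy

broken = "/path/to/broken_scene.blend"  # hypothetical path

# Append the animation action from the broken file...
bpy.ops.wm.append(directory=broken + "/Action/", filename="CutsceneAction")

# ...and the stage objects, into the file that renders correctly.
for name in ("Crate", "Barrel", "Chest"):  # hypothetical object names
    bpy.ops.wm.append(directory=broken + "/Object/", filename=name)
```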


There is some semi-complete motion during the middle and end of the animation, with the head slowly lowering in the middle, and a head and eye roll at the end. I will add further detail to these in the second iteration of the animation.
Conclusion
During this exercise I wanted to complete the rough key poses for the cut-scene animation so that I could use them as a base while animating the rest of the project - adding in-betweens at a later date. This is important because each key pose is now complete and I am able to add in-betweens wherever needed.
I was successful in completing this exercise, as I managed to add key frames to the animation from the storyboard I had created. Throughout animating I came across a couple of large problems; I identified solutions to these (exporting multiple selected objects, and taking animation from one file to another with Append) in the form of tutorials found online. Altogether, this exercise took approximately 7 hours to complete, with the bulk of that time used to solve issues I encountered along the way.
Next I plan to add in-betweens to the cut-scene to bring it further to completion. This is planned to be done tomorrow (09/03/19).
Cut-scene: Iteration 1
(10/03/19) Cut-scene Improvements
During this session I will be improving the cut-scene animation further by adding more in-between frames and smoothing most movement.
When I opened the project, I had trouble finding out where to start as I'm used to just animating straight through, as opposed to pose-to-pose. I didn't know if I wanted to produce the rough facial animations first, or add in-betweens. I played around with the lip syncing a little and decided to move on to adding some in-betweens before doing facial animation.
For in-betweens, I decided to add motion to the character's head so I could have him frantically look around for the voice that's speaking to him. I also removed the frames that had him snap from the first pose to the second - just to add a bit of motion (this part is not complete, as some in-betweens still need adding).
I'm thinking of changing the order of the directions for the head movements, maybe putting the up motion first. Changing it to: up, right, left - as opposed to right, left, up.

Character looking around for the voice.

Lip syncing and broken arm rotation.
Whilst animating I had a few problems with the lip syncing on the word 'British'. I think my problem was that I was trying to animate each syllable instead of opting for a movement that would flow and work with the word. In the end I used a mouth motion that looks more like he's saying 'brush'. There is some wonky jaw movement while he talks too, which I will work on to make sure everything looks up to standard.
I wanted to add in-betweens to the hand motion that is made when 'British' is said; when I went to do this, the arm's rotation was messed up. The arm would extend to the side and rotate at an impossible angle. I didn't solve this before producing the second render of this animation, but I will make sure to correct it in the next render.
I also had the eyebrow raise normally instead of going from one pose to the other - the head still jolts to the next pose, however.
Conclusion
During this session I aimed to add more in-betweens to the animation to bring more flow to the motion of the character. I also wanted to add rough facial animations for the character's speech. This is important because achieving this will bring the animation closer to completion and give me a clearer idea of how the final animation will look.
I managed to achieve what I set out to do during this exercise: I added almost all the in-betweens to the animation (ones at the beginning and some towards the end still need either adding or improving). Issues I encountered were in the facial animation and the arm motion on the word 'British'. I came to a result with the lip syncing, but failed to correct the arm rotation.
Next I plan to take the animation I have now into Unreal Engine 4. Once I have results from UE4, I will complete the animation and take it back to UE4 for the last time.
(14/03/19) Complete Animation
During this session I will aim to complete the cut-scene animation I have been working on. I will further smooth motions in the animation and correct some errors I touched on in the previous post.
Issues mentioned in the previous post were fixed while continuing the animation process. I had some trouble fixing the broken arm rotation and resorted to deleting the key frames that were placed and redoing the motion. New motions were also added to the shoulders when he's shocked (raised shoulders) and when he's breathing heavily (lowered shoulders).
Lights were also added to the scene for this render in order to add some shadows as well as soft lighting. The backdrop colour was slightly darkened so that the lights wouldn't make it too bright - the reflections on its surface were also turned down so as to not have any white circles distract from the animation.


Zero Specular Intensity (left) vs. 1.0 Specular Intensity (right, creates white circle above the character).

To the left is the fixed arm motion. I added most of the motion to the hand to have it sway somewhat as it's being held up.
I also slowed the eyebrow raise some more and tried to slow the eye motion as well, but found that it didn't really look in place - so I decided to leave it as is.
Some head movement was added to the start of the animation to give the character some life as I didn't want him sat there completely still.
I also changed the timing on the frames for his arms, legs and head for when he first reacts to the voice. Having them all react at different times made for a more believable 'shocked' response. In order to do this, I had the head move first, accompanied by his mouth opening wider; this was followed by a slightly slower arm movement. The legs move at the same time as the arms and complete their motion last.
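One way to script this kind of staggered timing is to offset the keyframes of each body part by a few frames; a rough sketch, assuming the rig's bone names contain 'arm'/'leg' (the names and offsets are hypothetical, and in practice I'd limit this to the reaction's frame range rather than the whole action):

```python
import bpy

rig = bpy.data.objects["Armature"]   # hypothetical rig name
action = rig.animation_data.action

# Delay arm and leg channels so the head (offset 0) reacts first.
delays = {"arm": 3, "leg": 3}        # hypothetical frame offsets

for fcu in action.fcurves:
    for part, delay in delays.items():
        if part in fcu.data_path.lower():
            for kp in fcu.keyframe_points:
                kp.co.x += delay
                kp.handle_left.x += delay
                kp.handle_right.x += delay
```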

Final Render
Cut-scene Animation Reflection
For this project I set out to create a cut-scene animation consisting of a monologue or conversation with an off-screen character - I chose to go with a monologue for this animation. While I still need to take this animation to Unreal Engine 4, I will write my thoughts on the finished animation here.
In terms of animation, I believe I have achieved what I set out to create with this project. I think I could have pushed myself a little further, however, in that I could have had the character move around the scene more (instead of sitting the whole time). This may have been more achievable with a different choice of dialogue, as this dialogue implies the character is wounded in some way.
Referring back to my research (sourcing audio for the animation), I didn't look around for other dialogue choices as I said I might do. I wanted to begin animating to this dialogue right after finding it as I thought it was perfect for what I wanted to do. I didn't want to dedicate more time to sourcing audio and creating storyboard and stage concepts as I wanted to dedicate that time to animation.
During this project I was able to resolve many issues with the animation, two of the largest being the 'Pose Mode' issue I was having with my previous scene, and the rendering issues (where the entire scene would render in white). Although I searched around for solutions for a lengthy amount of time, I was able to find solutions to both of these problems in the end. I created a new, simplified stage for the animation, which fixed my 'Pose Mode' issues, and I used the 'Append' feature for the first time to fix my rendering issues.
Other issues consisted of irregular movements in the animation (i.e. broken arm movements, lack of flow between poses and lip syncing). These issues were solved by either tweaking key frame placements or completely re-animating the motion, which didn't take long at all.
Reapplying the audio in video editing software allowed me to change when the character first reacts to the voice. While I was animating I found it a little difficult to place where the reaction should happen, if it should be on the voice talking, a second or two after etc. The reaction in the Blender project file is timed on the voice speaking, whereas the reaction for the render is a split second after the voice first talks.
Overall I am pleased with the results of this animation, however there are a few small things that still bother me. When the character lowers their head to begin gasping for air, their mouth stays in the 'thinking' position for too long, then snaps to the breathing motion. It wouldn't take long to re-render this part of the animation and replace the broken motion in the render above (as it is only 20 frames at most).
Next I plan to take this animation into Unreal Engine 4 using the methods I identified previously during my research of Project 1. I will be referring back to the tutorials I used while taking previous animations from Blender to Unreal Engine 4.
Taking the Cut-Scene to Unreal Engine 4
Now that the animation is complete it's time for the final step of this project - taking the animation to Unreal Engine 4. Referring back to the research I conducted before animating, I made sure to export the skeleton and the mesh separately as .fbx files, then import them in Unreal.
In Unreal, as I tried to import the mesh, I encountered some issues. Importing the mesh failed due to multiple duplicate mesh names, so the import was aborted. I then tried to import the skeleton to see if that would work; however, this also failed - this time due to multiple root bones. Because of these issues I was unable to import the animation I had made in Blender into Unreal Engine 4.


I believe I am facing these issues due to the type of model I am using for the animation. Originating from the 11secondclub, it is a model made solely for animation and not for game engines. The Eleven Rig uses animation curves in place of bones to animate; as I didn't use the character's skeleton to animate, Unreal won't import any animations for the model.
In order to solve the issues I'm facing with bringing my animation to Unreal, I could attempt to identify a way to apply the animations that are on the curves to the bones of the character.
Project 1: Conclusion
During this project I set out to create an animation and take the finished animation into Unreal Engine 4. During this project I have learnt: how to take character (bone) animations from Blender to Unreal Engine 4; how to import models in Unreal Engine 4; how to take shape key animations from Blender to Unreal Engine 4; and how to animate cameras in Unreal.
Issues Encountered During the Project & Solutions
Throughout this project I have encountered many issues, ranging from rendering issues to importing issues in Unreal. One of the first was with the first stage I had created for the animation. After creating it, I quickly found that it wouldn't be suitable for use due to the number of objects in the scene; the design wasn't that nice to look at either. I scrapped this stage in favour of a simplified version - using an edited cube as the backdrop and keeping the crates, chests, barrels and stones around the character. This stage looked much cleaner than the first and was much easier to manage without as many individual objects in the scene.
Later in the project, after I had made all the key poses for the animation, I had rendering issues: everything I tried to render came out white. To begin identifying the issue, I opened the rig in another Blender project and tested rendering again - rendering in the new project worked as intended. In the project I animated in, I tried removing all the lights in the scene and rendering again; this didn't solve the issue. I searched around using Google and came across Blender's 'Append' feature, which allowed me to take all the keyframes from one project and apply them to another. Appending the keyframes from the broken scene to the new scene solved the render issues I was facing.
Other issues encountered in the project all arose during the animation process, such as lip syncing issues or the character's arm performing unintended or impossible rotations. The lip syncing was fixed by reanimating the sequences; the arm was also fixed by reanimating.
Final Outcome of the Project
The final outcome of this project was a 14 second animation (seen above) that I had planned to take into Unreal Engine 4; however, due to problems with the rig I was unable to fulfill this objective.
Although I was unable to bring the final animation to Unreal, I was able to successfully identify ways to take objects, skeletal and shape key animation from Blender to Unreal Engine 4. When bringing the animations into Unreal, I learnt how to use the Matinee Editor to have my character's animations play at the same time as a camera animation. I came out of this project with knowledge of bringing animations and objects into Unreal - knowledge I didn't have prior to this project.
Project 2: Facial Motion Capture - Maya to Unreal Engine 4
For the second project of the Specialist Subject Area, I will be focusing on creating a facial motion capture animation in Maya and taking it into Unreal Engine 4. This animation will consist of a short sentence; possibly 3 - 5 words in length. My sole aim is just to have the face animated for this, so I may only need the rig's head.
Research that needs to be conducted for this consists of: identifying what equipment I will need; what software I'm going to be using; what rigs I can use for this project; and the workflow/pipeline. As this will be my first time using motion capture, I will need to know how to set it up in Maya.
My aspiration for this project is to create a short sequence of facial animation using motion capture in Maya (a 3 - 5 word lip sync). I want this animation to look as close to realistic as I can make it - which will be tough due to my lack of experience with motion capture animation.
Research
During research I will be using Google to find methods of creating motion capture. These motion capture methods could be tutorials, overviews of software, software showcases, software websites etc. I plan to look mainly into MotionBuilder and Maya for this project, as I want to become more familiar with Maya. If I find that Maya/MotionBuilder don't work for me, I will look into other software such as Blender - or software that is dedicated to facial motion capture.
Search terms will include: "Facial motion capture in Maya (and MotionBuilder)"; "Facial motion capture software"; "Markerless facial motion capture"; "Blender facial motion capture".
Research for this project will consist mostly of finding out how to do motion capture, specifically for facial animation, in Maya.
Research will be as follows:
- Identify how to do motion capture in Maya/Blender.
- Can any rig be used?
- Identify how to apply motion capture to a model in Maya.
- Will I need MotionBuilder for this project?
- Identify the pipeline (Maya > Unreal).
(07/03/19) Applying Motion Capture to Any Rig (Maya)
This tutorial shows how to apply motion capture to a HumanIK rig, focusing more on body movement than facial animation.
While this is interesting, I'm looking more for facial motion capture rather than full body motion capture.
(15/03/19) A_Face MoTrack 0.1 Addon for Blender
While I was looking for methods of facial motion capture, I came across this addon for Blender. A_Face MoTrack 0.1 takes a pre-recorded video and applies motion capture data to a rig with shape keys.
The screen layout of Blender was set to 'Motion Capture' and the video was opened in the 'Movie Clip Editor'. The video appears in the left viewport, with the 3D View to the right; tracking markers are added by clicking 'Create/Reset Markers' to the left of the video. The markers are aligned with those on the man's face - when the video is played, the created markers in Blender follow those on his face. Small adjustments need to be made throughout the tracking process. The first video was the 'Database' video, which just shows the man's head moving in each direction.
The size of the markers can be increased by clicking the '+ (plus)' icon to the immediate left of 'Create/Reset Markers'.
The second video to be added is the 'Action' video, in which the man makes multiple facial expressions. Markers are added and lined up with those on his face again. It seems that when the markers lose alignment with those on the face, the smaller squares in each corner disappear, notifying the user that they are no longer aligned.
Once the videos are tracked, they are then 'Generated' which bakes the motion capture to the model, creating key frames on the timeline.
(15/03/19) iClone Faceware Facial Mocap
Video Overview
This tutorial goes over using the Faceware plugin for iClone 7 and gives a general look at how the software works out of the box. Even without changing anything about the character in the view port (strength sliders for example), the software gives a highly detailed result when using both mounted cameras and webcams.
Obtaining iClone & Faceware
iClone offers a 30 day free trial that can be downloaded at any time, whereas Faceware's free trial has to be requested. Faceware offers four different variants: Faceware Analyser, Faceware Retargeter, Faceware Live and Faceware Live for iClone. For what I aim to produce, I will need Faceware Live for iClone.
Project 2: Using Facial Motion Capture
Before using iClone and Faceware I intended to use MotionBuilder to apply motion capture to some rigs that I had previously sourced from the internet. After this I would take them into Maya to tweak any keyframes and then export the animations for use in Unreal Engine 4.
I downloaded MotionBuilder 2018 because I already had Autodesk Maya 2018 installed, so I thought MotionBuilder needed to be the '2018' version too. When I tried to import a model into MotionBuilder 2018, I got an error telling me that the .fbx file wasn't supported and needed to be rolled back to a previous version. Due to this error, I installed MotionBuilder 2019 and was able to open the models; however, I then found another issue.


In order for me to import the model(s) and work in MotionBuilder, I followed a tutorial on YouTube as this was my first time using the software.
Once the models were imported, their size was too small compared to when they were previously opened in Maya, and all of their textures were missing. Other issues with the models were also found: like the Eleven Rig, these models used curve splines to animate the rig instead of animating directly on the bones. The models I use for animating in Blender also wouldn't import here, due to being '.pmx' files. I tried exporting them from Blender as '.fbx' files, but doing so made the models look odd when imported into MotionBuilder.
The tutorial I followed used 'fclone motion capture data'. This means that if I wanted to follow this workflow, I would need to use fclone for motion capture, as well as have a model that has 'facial deform shapes'.
I later found that I'd possibly have to use facial markers when creating the facial mocap animations; not having a reliable rig to use, or anything to use for markers, I shelved the idea of using MotionBuilder and Maya for this project.
Next I plan to investigate different software that can produce motion capture animations. Just by looking up 'Facial motion capture at home' on Google, I have identified two possibilities: 'i-Clone' and 'Faceware'. I intend to use these next in an attempt to achieve real-time motion capture facial animations.
Project 2: Using iClone & Faceware
In the following posts, I will be experimenting with the trial versions of iClone and the Faceware Plugin for iClone. I aim to create a marker-less facial motion capture animation sequence and to hopefully export it so I can take it into Unreal.
Brief History of iClone (Releases)
iClone 1.0 released in December 2005, using previous assets from CrazyTalk (a 2D facial animation program by Reallusion) for the facial mapping. iClone 1.0 was capable of real-time animation as well as rendering and creation of avatars from photographs. Other major releases consist of: iClone 2.0 (2007), iClone 3.0 (2008), iClone 4.0 (2009), iClone 5.0 & 5.5 (2011, 2012), iClone 6.0 (2014), and the stable release of iClone 7 in 2017.
Testing iClone & Faceware
When setting up the project, I had some trouble opening the plugins (Faceware Profile and Motion LIVE) needed for marker-less facial motion capture. I was able to open iClone perfectly fine using the Reallusion Hub, but the plugins only had the option to 'Purchase' or 'Uninstall'. I found that I could open the Faceware Profile plugin (which allows a webcam to be used for facial capture) from the program's files by using its application file; however, I was still having trouble opening Motion LIVE.

I soon found out that there is a 'Plugins' drop-down menu in iClone that lists Motion LIVE, and this is how I was able to access the plugin. Motion LIVE allows for manipulation of which parts of the face are trackable for motion capture. Strength sliders can also be manipulated here, allowing for adjusting how much tracking influence is put onto a part of the face (100% on a slider is the baseline; 50% on the jaw, for example, would make it so the jaw doesn't open as much).
Once I had all the plugins open, I dragged a model from the left side panel into the viewport to load it into the scene. I then needed to calibrate a pose in Faceware (this is done to calibrate the markers that track facial features) and connect Faceware to iClone. In order to achieve this, I found a tutorial on Reallusion's FAQ site that showed me how.
When first trying to perform the facial motion capture, the model's face was stretching in impossible directions when I wasn't even moving any facial features. I looked around in the Motion LIVE settings as well as the camera settings and found the issue was in the latter: my camera was set to 'HeadCam_', meaning the software thought I was using a head-mounted camera for motion capture, not a static webcam. Changing the 'Face Tracking Model' in Faceware Realtime to 'StaticCam_' greatly improved the expressions of the character in the 3D view.


Incorrect camera settings (needed to be set to 'StaticCam_').

Test of tilting my head upwards with StaticCam (left) and HeadCam (right)
iClone & Faceware Test
Below is a test video of me trying iClone with the Faceware Plugin for iClone. The results aren't that great, possibly due to poor lighting and a ~6 year old webcam. The mouth doesn't really open all that much, save for the times I fully open my mouth; even with 'StaticCam_' selected, the eyebrows were also hit and miss.
As I was using trial versions of the software, I am unable to record motion capture using iClone & Faceware, and would need to upgrade to the full versions to do so. This unfortunately means I won't be able to use these programs to bring facial motion capture to Unreal Engine 4.
I did, however, manage to achieve some results with this software, though admittedly I was expecting more impressive results.
Project 2 Conclusion
My intention when planning Project 2 was to create a facial motion capture animation that I would be able to bring into Unreal Engine 4, using the experience gained on Project 1, where my main goal was to take a cut-scene into Unreal. I wanted to be able to create motion capture animation at home to test the quality of what I could produce with limited resources, as well as to gain experience in software I hadn't used previously (iClone 7, Faceware). Gaining experience in this software is important because, if I need to work with facial mo-cap in the future, I will already have some knowledge of iClone 7 and Faceware.
This project wasn't as successful as I had hoped it would be. I was unable to use MotionBuilder due to a lack of motion capture data as well as any appropriate rigs. I was able to somewhat follow a tutorial I found on MotionBuilder, but was unable to produce what they had due to a lack of resources going into the project. I did, however, identify an alternative during this project: iClone 7 and the Faceware plugin for iClone 7. While I wasn't able to achieve the goal I set out for, I was able to come to a result; I demonstrated real-time facial motion capture using iClone 7. Due to only being able to use the trial versions of iClone 7 and Faceware, I was stuck previewing the motion capture and was unable to record any motion capture animations. This means that I'm unable to export any of the animations I would have created using the software and bring them into Unreal Engine 4.
Final Outcome of Project 2
The final outcome of this project was a preview of facial motion capture using the trial versions of both iClone 7 and the Faceware Plugin for iClone. Although I wasn't able to fully reach my goals for Project 2 and take motion capture animations to Unreal Engine, I was able to produce a result using iClone and Faceware - even if it was just a preview of motion capture. As this was my first time using this software, I was able to gain some experience using iClone/Faceware. Although my knowledge of this software is still limited, I am able to set up and preview a motion capture project within iClone. This project is relevant due to motion capture being a massive part of modern game and film production; this project was used in order to gain some experience in creating mo-cap animations.
Next I plan to move onto the next project and create a facial rig inside of 3DS Max. Before I begin the creation process, I need to research the workflow for facial rigging.
Project 3: Facial Rigging
My original intention for this project was to use Maya for facial rigging. However, due to time constraints I am aiming to use either CAT bones and Morph Targets in 3DS Max, or Blender, to rig a model's face. My aspiration for this project is to create a facial rig with a controller. Although I already have prior knowledge of rigging, I believe it is one of my weakest areas in 3D and I would like to take the opportunity to improve my rigging abilities.
Research
Research methods used for this project will be the same as the previous two, using the internet to identify methods of facial rigging. As I plan to use 3DS Max for rigging, CAT Bones will be the main method. I will also need to find tutorials for weighting the bones on my rig.
Research topics are as follows:
- 3DS Max/Blender face rig tutorials
- 3DS Max/Blender weighting tutorials
Facial Rigging in 3DS Max (CAT Bones)
This tutorial goes over facial rigging in 3DS Max using CAT Bones. During the tutorial, morphs are set to sliders so that when the slider's value is changed, the morph will animate accordingly. This is what I would like to achieve during this project, however, I don't think I will be able to with the time remaining. Compared to the other video I've found, this one doesn't go over weighting of bones, which is a weak area for me when it comes to rigging.
Creating a Basic Face Rig in Blender
Blender tutorial for facial rigging. This tutorial goes over the basics of rigging a character's face using a limited bone count (12 each side, 24 total). It also shows the weighting process, which is my weakest area when it comes to rigging. The model on the left of the thumbnail is used during the tutorial; compared to the models I have, it has more detail and a higher poly count. I may need to source a more detailed model if I want to achieve a more expressive rig.
Facial Rigging in Blender
For facial rigging in Blender I followed the video above. Due to the type of model I used for the rigging process, I ran into quite a few problems. The model I was using wasn't as high poly as the one in the video, and some of the facial features were separate textures that hovered above the face. When first parenting all the bones using 'Automatic Weighting', I ran into an issue where the weighting wouldn't be applied due to: 'Heat weighting failed to find a solution for one or more bones'.
I was able to find a solution by looking on blenderartists.org, where I came across a post suggesting that the mesh of the model's head wasn't completely symmetrical. Before I began rigging the face, I had deleted everything below the neck of the model, which made the neck look jagged; I added some polys to the neck to fix this. Looking back on this, I decided to delete the neck entirely to see if it would fix the automatic weighting issues.
Below are two GIFs showing failed (left) and working (right) automatic weighting. In the left GIF the model's neck is still present, meaning the whole mesh isn't symmetrical. After deleting the neck as well as the headset, automatic weighting worked and the bones influenced the mesh. All that was left to do was correct the weighting of each bone.


Failed automatic weighting (left) and working automatic weighting (right). Even in the working GIF, however, some geometry is left behind.
I also had problems with the 'Symmetrize' option when mirroring the bones to the other side of the head. Following the above tutorial, I would end up with around 40 bones, sometimes around 60, instead of 24 (12 bones were in place on the left side of the head, so duplicating should give 24). It turned out the left-side bones were being duplicated in place, staying on that side of the head; 'Symmetrize' would then add another 12 bones to the right side, giving my rig 36 bones or more. To solve this issue, I had to delete the duplicated bones hidden underneath the originals.
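For reference, below is a hedged bpy sketch of the Symmetrize step. 'FaceRig' is a placeholder armature name; bones need matching .L/.R (or _L/_R) name suffixes for Symmetrize to pair them, and the direction flag may need flipping depending on which side the original bones sit.

```python
import bpy

armature = bpy.data.objects["FaceRig"]  # placeholder name
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='EDIT')

# Symmetrize acts on the selected bones, so any hidden duplicates that
# are also selected get mirrored too - which is how the extra bones
# crept into my rig. Delete stray duplicates before running this.
bpy.ops.armature.select_all(action='SELECT')
bpy.ops.armature.symmetrize(direction='POSITIVE_X')

bpy.ops.object.mode_set(mode='OBJECT')
```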

GIF of the duplicated bones remaining on the left side of the character's face.
After fixing the above issues, I found further problems with the eyebrows. This model's eyebrows were textures placed just in front of the head, meaning the bones I placed didn't manipulate them. I noticed that the middle eyebrow bone was long enough to touch the eyebrow, which let it manipulate it, so I tried making the bones at the edges of the eyebrow long enough to touch it too. This didn't solve the issue at first; I found I had to re-parent the bones after changing their length.
Moving on to painting weights for the face rig, I found that the eyebrow bones were influencing the model's fringe, as well as the hair around the sides and back of the head. Playing around with the settings in the 'Brush' options, I found a setting called 'Clean', which removes vertex group assignments that fall below a weight threshold. Using 'Clean' removed all of the automatic weighting for the bone, and I was then able to weight it manually.
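The same clean-up can also be scripted. Below is a small sketch using bpy's vertex group Clean operator, with the same placeholder mesh name as before; the 0.05 limit is just an example threshold.

```python
import bpy

mesh = bpy.data.objects["Head"]  # placeholder name
bpy.context.view_layer.objects.active = mesh
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')

# Remove vertex group assignments at or below the limit; from here the
# bone's weights can be painted back in by hand from a clean slate.
bpy.ops.object.vertex_group_clean(group_select_mode='ALL',
                                  limit=0.05,
                                  keep_single=False)
```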


Messed-up jaw weighting; moving the jaw only a small amount doesn't look too bad, however.
Whilst painting weights, I found I was making a lot of the areas on the face red. Referring back to the tutorial above to see how they painted, I found that in their case green was the most common weighting, with very little red.
After seeing this, I went back and redid some of the weighting around the mouth, jaw and eyes.
The weighting I'm having the most trouble with is on the eyes and jaw. It seems that no matter how I paint the eye weighting, some polygons will always overlap, making the motion look like a mess. As for the jaw, I've tried using more green and yellow weighting, but this doesn't change the fact that the movement just looks bad.
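To make the colour-coding concrete: Blender's weight-paint colours map to influence values running from blue (0.0) through green (around 0.5) up to red (1.0). The sketch below, with placeholder names, is the scripted equivalent of painting the selected jaw-area vertices green rather than red.

```python
import bpy

mesh = bpy.data.objects["Head"]        # placeholder name
jaw_group = mesh.vertex_groups["jaw"]  # placeholder vertex group

# With the jaw-area vertices selected (in Edit Mode, then back in
# Object Mode), assign a mid-strength weight of 0.5 - i.e. "green".
indices = [v.index for v in mesh.data.vertices if v.select]
jaw_group.add(indices, 0.5, 'REPLACE')
```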
I'm thinking of using Shape Keys as a compromise, instead of bones, for the movement of the mouth and eyes. Instead of using the jaw bone, I can have shape keys that open and close the mouth. The lip bones seem to work rather well; they are, however, not perfect.
After further attempts to get the eye weighting correct, I decided to stop weighting them for the time being and focus on creating shape keys so the eyes could close. Whilst creating the shape keys, I initially created the transform for only one eye. Although it came out looking quite good, I wouldn't be able to recreate the same transform manually for the other side of the face.
As a possible workaround, I tried mirroring the Shape Key, something I had attempted previously in the PBR module. Unfortunately, I came to the same conclusion as I did during PBR: the model's hair would mirror instead of the eye movement. After searching the internet for why this might be, I found that the model's symmetry may be the cause - if the model isn't completely symmetrical, mirroring Shape Keys won't work. Seeing as bones were already parented to the model and weighting had already been applied, deleting half of the head and applying symmetry would have meant re-rigging at least half of the face.
Not wanting to re-rig the face, I deleted the Shape Key for the single eye and made another, manipulating both eyes at the same time to ensure the most symmetrical movement for both. In the future, when creating shape keys for eyes, I need to make sure to manipulate both eyes at the same time.
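For reference, the sketch below shows this shape-key setup in bpy, using the same placeholder names as earlier. The commented-out mirror operator at the end is the step that failed on my asymmetrical mesh.

```python
import bpy

mesh = bpy.data.objects["Head"]  # placeholder name

# A Basis key must exist before any other shape keys are added.
if mesh.data.shape_keys is None:
    mesh.shape_key_add(name="Basis")

# One key driving BOTH eyes: edit this key in Edit Mode and pull both
# eyelids closed together, keeping the movement symmetrical without
# relying on mirroring.
eyes_closed = mesh.shape_key_add(name="EyesClosed", from_mix=False)
eyes_closed.value = 0.0  # animate 0.0 -> 1.0 to close the eyes

# The single-eye approach would have needed this mirror step, which
# misbehaves when the mesh isn't symmetrical:
# bpy.ops.object.shape_key_mirror(use_topology=False)
```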
Project 3 - Conclusion
My intentions for this project were to improve my rigging abilities by creating a facial rig, using either CAT Bones and Morph Targets in 3DS Max, or Bones and Shape Keys in Blender. Had I rigged in 3DS Max, I would have used the Darrel model that Lee provided me with during my PBR Facial Animation project. Going into this project, I knew that the end result wouldn't be perfect, as my rigging abilities are quite weak; choosing to rig a face with my current abilities was quite ambitious. However, choosing rigging for this project has helped me build stronger rigging abilities - if I aspire to be an animator, I believe rigging is an important skill to have.
Final Outcome of Project 3
The final outcome of this project was a rigged head inside Blender, using bones for the eyebrows, eyes, lips and jaw (12 bones each side, 22 total). While I chose to use Shape Keys for the eye movement, the bones remain on the rig and could still be used for some subtle movement (though the weighting around the eyes isn't perfect). Overall, this was a successful project: I was able to rig a model's face to the best of my current abilities. Seeing as I faced major roadblocks whilst rigging character models for the Team Animation Project, having to hand rigging over to Faiz, I believe I have made real progress in my rigging abilities.
Weighting the bones was the most challenging part of this project, with the eyes and jaw giving me the most problems. Using the bones to close the rig's eyes didn't feel natural at all; there were inconsistencies between the two eyes and between each opening and closing. I also couldn't get the weighting right for the eyes or jaw - the eyes would look as though they were melting whenever I moved the bones down. This could be because the eyes were connected to the head mesh, and because there weren't enough polys to manipulate. When working with models like this in the future, I will look for a way to subdivide the model to give myself more polys to work with, as well as separating the eyes from the head. To solve the issues I was having with the eye weighting, I used a Shape Key to open and close them instead.
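Both of those future fixes can be sketched in bpy. The outline below uses the same placeholder names, adding a Subdivision Surface modifier for extra polys and separating the selected eye geometry into its own object.

```python
import bpy

mesh = bpy.data.objects["Head"]  # placeholder name

# More polys to weight: a Subdivision Surface modifier (this could
# also be applied destructively once the level looks right).
subdiv = mesh.modifiers.new(name="Subdiv", type='SUBSURF')
subdiv.levels = 1

# Split the eyes from the head so eye bones and shape keys only ever
# deform eye geometry: select the eye faces in Edit Mode first.
bpy.context.view_layer.objects.active = mesh
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.separate(type='SELECTED')
bpy.ops.object.mode_set(mode='OBJECT')
```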
When weighting the jaw bone, no matter the strength of the weighting, nothing seemed to really change: whenever I moved the bone, the movement always looked off. Again, this is possibly down to poly count - especially around the sides of the mouth, where there aren't many polys. The lack of polys makes the sides of the mouth diagonal instead of curved when it opens.
Specialist Unit Conclusion
Throughout the Specialist Project Area unit, I conducted three mini projects in order to develop new skills relating to animation: bringing my animations into Unreal Engine 4, creating motion-capture animations, and creating a face rig.
Project 1
My intentions for the first project were to learn more about Unreal Engine 4 and how to bring animations I've created into the games engine. I began this project by researching ways to import animations from Maya > Unreal. I had intended to use Maya at the start, but later decided to use Blender, as I was more familiar with the program and had access to more rigs at the time. This change of software meant I had to find a pipeline for taking animations from Blender > Unreal, a process I had a lot of issues with. At first, imported animations and models were too small to be viewed in Unreal; they did import, however, so I knew it was only a scaling issue. After looking for a solution, I found that I needed to change the unit scale of the Blender project file to 0.01 and rescale the model and armature by 100, then export the armature in pose mode for animations to work in Unreal.
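For reference, my understanding of that scaling fix can be scripted as below. This is only a sketch: the object names are placeholders, and it uses Blender's stock FBX exporter rather than a definitive Blender > Unreal pipeline.

```python
import bpy

# Unit scale fix: 0.01 makes 1 Blender unit read as 1 cm, matching
# Unreal's centimetre-based units.
bpy.context.scene.unit_settings.scale_length = 0.01

# Compensate by scaling the mesh and armature up x100, then apply it.
bpy.ops.object.select_all(action='DESELECT')
for name in ("Character", "Armature"):  # placeholder object names
    obj = bpy.data.objects[name]
    obj.scale = (100.0, 100.0, 100.0)
    obj.select_set(True)
bpy.context.view_layer.objects.active = bpy.data.objects["Armature"]
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)

# Export the selection to FBX for Unreal, baking the animation.
bpy.ops.export_scene.fbx(filepath="//character.fbx",
                         use_selection=True,
                         add_leaf_bones=False,
                         bake_anim=True)
```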
With Project 1, I learnt quite a lot about bringing animations into Unreal Engine 4, as well as about animating in Blender. I was able to find solutions to issues I faced in both Blender (rendering issues, using Append as a fix, scaling issues) and Unreal (scaling issues, some import issues). I would also say I'm closer to identifying my animation style: I feel a lot more comfortable with stylised rigs such as the Eleven Rig (which I did have problems with - eyes not closing, fingers not working). With rigs like Eleven, I feel freer in my choice of facial expressions and in the amount of exaggeration I can apply to the character; if the character were more realistic, exaggerated features or gestures might feel out of place.
Overall, I'm pleased with the outcome of Project 1. Although I wasn't able to take the animation into Unreal Engine 4 as first intended, I came out of the project with knowledge of Unreal Engine, as well as a clearer idea of my animation style.
Project 2
With Project 2, I originally aimed to use MotionBuilder (MB) and Maya to apply facial motion-capture data to a rig, then take those animations into Unreal Engine 4. Unfortunately, I ran into many issues using MB and was unable to fulfil my original intentions. Despite this, I was able to identify an alternative to MB and Maya in iClone 7 and the Faceware plugin for iClone 7.
Together, iClone 7 and Faceware allow markerless facial motion capture using any camera. While working on this project, I was hoping to use this software in place of MB and Maya in order to bring animations into UE4. However, I found that with the trial versions of iClone 7 and Faceware I was unable to record any motion-capture data, so I couldn't take any mocap into Unreal. Issues encountered included trouble finding the Faceware LIVE plugin location, as well as camera problems at first (having the camera set to HeadCam_ instead of StaticCam_). HeadCam_ made iClone assume I was wearing a head-mounted camera, so the tracking behaved differently from what my static camera required.
Despite not being able to bring mocap animations into Unreal, I view this project as a success. I was able to learn about and use pieces of animation software (iClone and Faceware) that I previously had no knowledge of, and I was able to achieve real-time, markerless mocap animation, even if only as a preview.
Project 3
I found Project 3 to be the hardest of the mini projects I conducted during the Specialist Project Area. With rigging being one of my weaknesses in 3D, I found it very challenging to rig a model's face, possibly one of the hardest areas of a model to rig. I had numerous issues setting up the bones, with duplicate bones being left under the originals after mirroring; this was solved by moving and deleting the duplicates. Parenting the bones to the mesh using 'Automatic Weighting' was another issue: I found I had to delete the neck and headset for all of the bones to work. This was down to a symmetry error in the mesh - if the mesh wasn't symmetrical, the bones' weights wouldn't influence it.
After automatic weighting was applied, I cleared most of the weights in 'Weight Paint' mode, choosing to paint my own. This proved problematic, as the model's eyes were attached to the head, making it hard for the bones to close them. To solve this, I opted to use a 'Shape Key' for the eye animations. The jaw was another issue: due to the lack of polygons on the face, it was difficult to have everything move in a 'normal' manner. For example, when the jaw bone moved, the character's mouth would take on an odd shape, more jagged than round. Unfortunately, I was unable to solve the jaw's weighting issues.
I was, however, able to paint weights successfully for the eyebrows and lips, though it did take a lot of trial and error.
Reflecting on this project, I have learnt more about rigging in Blender, as well as painting weights for rigs. I feel more comfortable with my rigging abilities, though my weight painting needs a lot more work before I can be satisfied with my rigging. I completed this project with a simple facial rig as the outcome. In the future, I will look into subdividing the mesh before rigging, or into finding models with higher poly counts.
Final Thoughts
Overall, the Specialist module has allowed me to explore several pieces of software (Unreal, iClone, Faceware) that I had no prior knowledge of, as well as to improve some of my weaker areas in 3D. I will be able to use the skills gained in Unreal during future projects to bring in animations from Blender. I also understand some of the Maya > Unreal pipeline, admittedly not as well as the Blender > Unreal pipeline. I now have some limited knowledge of iClone and Faceware and of the motion-capture animation the software is capable of. Finally, I have improved my rigging skills by rigging a character's face in Blender; I plan to use this rigging experience in future projects.