This week I’ve been working on the player facial rig. Setting up bones, bone controls, jiggle bits for the hair, etc. We used a placeholder audio track to test phonemes, but I’m overall happy with the range of emotion achieved!
Model & Facial Rig
We’re still hammering out some details regarding the player model. One open question is how she speaks. We know she should talk readily and freely, but still be free to tear ass around a level. One idea is to have a 3d portrait that narrates/inflects as needed. Should it hide while silent or in combat? Should the portrait stay true to old school tech? The version shown here has been smoothed over for ease of rig testing, but would that stand out too much?
The face contains bones for the brow, eyelids, cheeks, cheekbones, nostrils, and mouth.
For the hair, each “strand” uses 2-4 skinned bones, 2 dummy objects, 1 IK solver, and a spring modifier. Bones are linked to the head. One dummy is made a child of the other dummy and given a spring modifier. The IK solver is then linked to the spring object. The parent dummy and the first bone in the chain are linked to the head. When the head moves, the strands jiggle in tandem without being able to move too far.
Only the skinned bones are exported with the facial rig. Everything else is there to drive motion.
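If you’re curious what the spring setup is doing under the hood, here’s a rough sketch of the idea in Python: a damped spring chases a target (the head), and a clamp keeps the strand from lagging too far behind. All names and constants here are illustrative, not pulled from our actual rig or from 3ds Max.

```python
def step_spring(pos, vel, target, stiffness=60.0, damping=8.0,
                max_offset=0.5, dt=1.0 / 60.0):
    """Advance one frame of a damped spring pulling `pos` toward `target`.

    Hypothetical stand-in for the spring modifier: stiffness pulls the
    strand toward the head, damping bleeds off the wobble, and the clamp
    plays the role of the IK chain limiting how far it can travel.
    """
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt          # semi-implicit Euler: update velocity first
    pos += vel * dt
    # Don't let the strand drift more than max_offset from the head.
    offset = pos - target
    if abs(offset) > max_offset:
        pos = target + max_offset * (1 if offset > 0 else -1)
        vel = 0.0
    return pos, vel

# Quick demo: the "head" snaps to x=1.0 and the strand overshoots,
# wobbles, and settles in behind it.
pos, vel = 0.0, 0.0
for _ in range(240):  # 4 seconds at 60 fps
    pos, vel = step_spring(pos, vel, target=1.0)
print(round(pos, 3))
```

The nice property, same as in the rig, is that the motion is entirely driven by the head: nothing here is keyframed, so the jiggle comes for free on top of any head animation.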
Our voice actress is officially on board as of this week. She’s a very strange and unique soul. She brings something hard to describe, and harder to emulate, and we’ve made this character specifically with her in mind. Unfortunately, this isn’t even her voice! It’s a placeholder, as we haven’t had a chance to record any tracks yet. So please check out the video, but keep in mind it will sound very different in the coming weeks.
Audio sample from FreeSound.org