SE - Obviously, the scope of Half-Life 2 is huge: an intricate facial animation engine, the fully interactive physics engine you spoke of. How did you guys tackle all these features? How did you decide who did what and the order in which they would be developed?
Josh - Well, one of the systems that I didn't cover was the Source engine's material system. As you can see, we have a really robust shader language that we made, and that was another part of saying, "You know, we want a realistic-looking world and just basic light maps aren't going to cut it. We need specularity. We need bump mapping and [more robust features]." And so that was one of the other systems that we signed up for. And in each one of those cases we had senior programmers onboard to tackle those things. Like I said, Jay Stelly worked a lot on the physics and said, "Okay, I'm going to take this as far as I can." And Ken took facial animation and said, "Okay, we're really going to drive this." Gary McTaggart [Software Developer] really dug in; he used to work for 3DFX, and he's this incredible 3-D guy. And obviously everyone works on everything, but those guys really pushed things forward.
Ken - Most of the developers at Valve do many different tasks. Besides the several people working on the facial animations, there's several people working on physics, several working on graphics, [and] other people working on networking. There's some people [that are] kind of specialized, but everybody at Valve certainly understands all the other areas of the engine. So everybody is involved in game development and design. There's nobody that does "pure" technology. Everyone does something that goes into the game. Every single programmer will have some input into a monster, some input into an effect. We don't do anything that isn't directly applied to something we want to show on the screen and something that we want the user to experience.
Even if [an engineer] is a graphics-oriented person who's just doing low-level interfaces and pixel shaders, [they] are going to work closely with the artists to get the material how the artist envisioned it. Because everything is so tightly integrated (the sound has to work with the visuals, which have to work with the movement, which has to work with the AI), nobody can really isolate themselves. Everyone just works in one really large group. We still use the Cabal process that we used on Half-Life 1. Even people outside the Half-Life 2 group are still very aware of what we are doing internally and they help out. Actually, the facial system is done by somebody outside of the Half-Life 2 group, and [the Half-Life 2 group] shares the [Half-Life 2] graphics with other projects.
SE - Going back to the facial animation engine: How did Valve decide that they wanted facial expressions to be focused on so heavily when something like that is typically an afterthought thrown together in other games?
Ken - Oh, that was totally Half-Life 1. When we did Barney and the scientist we weren't really sure how players would react [to them]. We thought, "Well, it's kind of an experiment," but we didn't spend all that much time on it; we just thought it would be a fun thing to do, and if players liked it, then great! Well, players responded to it really, really well. They got excited, and they wanted more of Barney and the scientist, way more than we ever thought they would. And we're like, "Oh wow, this is so cool!" So for Half-Life 2, one of the goals was making our characters look much more believable. Take the things we kind of hinted at in Half-Life 1 and really push all that technology. Make them good-looking, make them reactive, make them very human, make them very personable. Show their emotions, show their mental states, and just try to make them much more sympathetic characters and much stronger villains as well.
SE - And the faces were animated using forty different muscle calculations?
Ken - Yes, there's roughly forty different muscles. There are actually different kinds of controls in the face: for eyes, for cheeks, for mouths, for eyebrows, for head movements as well. They're all based on [research done on] facial expressions by Paul Ekman in the mid-'70s, who was actually doing clinical analysis on patients with mental disorders and trying to diagnose them. What he did was come up with a clinical grammar of how people move their faces. Not really their expressions, but one level below that: what are the muscle groups, and how do all those muscle groups interact. Like, if you do this thing with your mouth, then you can't do this other thing with your mouth also. And so he came up with this whole taxonomy for how to move the face and all the rules about the face. We actually found out about this from Ken Pillman, who's actually been out several times to visit Valve. He had found the work himself and done the research on it, and he pointed us to it, and we followed Dr. Paul Ekman's work and pushed it to this point.
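[Editor's note: The system Ken describes, named muscle-group controls plus rules about which combinations are physically possible, can be sketched roughly as follows. This is a toy illustration, not Valve's actual code; the control names and conflict pairs are invented for the example.]

```python
# Toy sketch of FACS-style facial controls: each control is a weight in
# [0, 1], and "conflict" pairs model muscle actions that can't combine
# (in the spirit of Ekman's rules about the face). Names are hypothetical.

CONFLICTS = [
    ("brow_raise", "brow_furrow"),      # can't raise and furrow the brow at once
    ("lip_corner_pull", "lip_press"),   # can't smile and press the lips at once
]

def resolve(pose):
    """Clamp control weights to [0, 1], then enforce exclusion rules by
    zeroing the weaker control of each conflicting pair."""
    pose = {name: min(1.0, max(0.0, w)) for name, w in pose.items()}
    for a, b in CONFLICTS:
        if pose.get(a, 0.0) > 0.0 and pose.get(b, 0.0) > 0.0:
            loser = a if pose[a] < pose[b] else b
            pose[loser] = 0.0
    return pose

# An animator-authored pose with an impossible brow combination:
smile = resolve({"brow_raise": 0.6, "brow_furrow": 0.3, "lip_corner_pull": 0.8})
print(smile)  # the weaker brow_furrow is suppressed
```

A real system would drive vertex morphs or bones from these weights; the point here is only the two-layer idea from the interview: controls one level below "expressions," plus a rule set governing how they may interact.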
SE - How long did the entire development process for the facial expressions take?
Ken - That took probably a solid year and a half to two years because of several revisions. As we changed art styles, the technology constraints changed, and so the design constraints changed. The very first versions of Half-Life 2 were more iconographic, and as the whole team evolved toward a more photorealistic, or photo-natural, style, we found that the technology constraints became a little tighter and certain issues became more important. Exactly how the eyes work became [a focus] because, as we go more and more realistic, your brain [is able] to pick out more flaws. We had to find exactly the point where we had characters who looked good, looked believable, but didn't look so real that all of the things that we can't do [would come to players' attention]. We don't do correct [subsurface] reflection. No one really does. We don't do skin deformation exactly like human skin does. We do something close. But as you get more and more "real," all these little flaws become more and more obvious. We had to find an art point that matched the technology and that had all the characteristics we want but didn't look creepy and didn't look distorted, but just looked natural so you just accept it.
SE - So is the facial engine finished and ready?
Ken - Actually, it's been finished for about two years now.
SE - And how big of a development team is dedicated to doing facial expressions?
Ken - Umm... gosh, it's mostly... well, it's a whole bunch of different people. Not only are there the programmers involved, but there's also a couple of different groups of artists. There's a group that actually builds the models, and [they] have to design the polygons in the face to make sure the face can move correctly and then also make all the core facial actions. Once you have that, then an animator will need to animate it. Right now, all four of our animators have done work that you saw in [the gameplay demo]. As for the Half-Life 1 people, a few have moved on to different kinds of projects, but most everyone is still there, and we have a lot of new people.
SE - I'm sure a lot of Half-Life 1 fans will be happy to hear that. How big is the entire Half-Life 2 team?
Ken - Oh, off the top of my head, I'd have to say about 30 people or so. We have about 8 programmers on Half-Life 2 and the rest are a mix of level designers, artists, animators, texture artists and sound people.