Monthly Archives: December 2011

An Uncanny Conundrum

So I create little animated movies and post them on YouTube. Does this mean I’m a filmmaker?

Uh. . .no. I have never envisioned myself as a filmmaker. Even with this recent foray into machinima, I’ve always thought of myself as a storyteller first, a writer second, and a person who posts little movies on YouTube third. Even the term “machinima director” never really grew on me; I just use it because it’s the commonly accepted catchall phrase for anyone who creates little animated movies and posts them on YouTube.

By repeating the same thing three times in the above two paragraphs, I hope I’ve successfully established that I consider myself nothing more than a person who posts little animated movies on YouTube. This is for the benefit of any “real” filmmakers who might chance upon my work, follow the breadcrumbs to this blog, and wonder what the heck I’m doing. I feel I need a record for posterity, an “official” response to all the animators and moviemakers who realize I am not one of them.

Nor do I strive to be.

However, I am borrowing their tools. And that presents me with a strange conundrum. Kind of like the “uncanny valley” all animators avoid on pain of death. More on that in a minute. But first, I’d like to share some thoughts I’ve had lately about this thing I do, which is creating little animated videos and posting them on YouTube.

A year ago, I’d never even heard the word “machinima.” In fact, I have a record of the exact day I added it to my vocabulary. Here is the text of an email I sent to my cousin on February 3, 2011:

“Okay, now I’m even more depressed. I can’t have an original idea for shit. It’s even worse than I thought. They have a NAME for it:

I give up.”


See, I thought I’d come up with a really cool new concept. I’d been playing Sims 2 for a while, and by that time had built several legacy families and neighborhoods. So I knew about the video capture options built into the game. But my results were always pixelated and unwatchable, and for that reason, I never gave it much thought. Then, by pure chance, I learned how to change the quality settings of my in-game camera. And just that quickly, Sims 2 became one more storytelling tool in my arsenal.

About that same time, I was enrolled in a college sociology class. One of our assignments was to compile a written portfolio using terms from each chapter. One I chose was “anomic suicide.” I won’t bore you with all the details of how a class assignment turned into an extracurricular machinima project, but out of this circumstance my video “Hello” was born. It was well-received among Sims 2 machinima directors and even stirred a modicum of interest in mainstream viewers.

So the wheels in my head started turning. Here was a tool I could use from the privacy of my own home, without a lot of expensive software or training, to experiment with alternate forms of storytelling and interact with potential readers of my novels. I never labored under the misapprehension that EA Games (owner of all Sims 2 copyrights) would tolerate outright commercial use of their product. But this was just for fun anyway, right?

But the thing snowballed before I knew what was happening. Other writers recognized the marketing potential in animated short films or book trailers about their novels. And suddenly Sims 2 was no longer an option. In order to pursue this new direction, I had to find a commercial platform that would allow me to own the copyrights of any short film I created—whether my video work met professional filmmaking standards or not.

Now here I am, knee-deep in tutorials teaching me to use iClone and a professional 3D modeling and animation tool called 3ds Max. Odd, how someone like me arrived in this place. But I do enjoy the work, and the earning potential seems to expand every day.

However, this plunks me down right in the middle of an ongoing 3D animation dilemma–how to avoid the “uncanny valley.”

Wikipedia defines the “uncanny valley” in these terms: “The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers. The ‘valley’ in question is a dip in a proposed graph of the positivity of human reaction as a function of a robot’s human likeness.”

With Sims 2, this wasn’t an issue because EA Games and Maxis bore all responsibility for the animation. To quote Chrystie Bowie (with her permission, of course): “I don’t have a problem with Sims2 because they’re still so stylized.” Yes, Chrystie. Exactly. And this is what I couldn’t get so many uninitiated viewers to understand.

Now I have the opposite problem. With iClone, much of what is possible is far too “realistic” for most machinima aficionados. Mind you, these are the same aficionados who slammed “The Polar Express” and are now doing the same thing with “Tintin.” Neither of those movies evokes any kind of “revulsion” in me because of its animation. In fact, my husband and I watched “Polar Express” on TV last night, and I found the rendering nothing short of amazing. My husband, while initially unimpressed, eventually said that he “got used to it pretty quick” and saw no problem with the animation at all. So does the uncanny valley actually lie in the eye of the beholder?

Again, I’m going to quote Chrystie. This was correspondence we shared through private email, but I thought her insights were too germane to leave buried in my inbox. I’ve compiled several messages, edited only for continuity. Chrystie, please let me know if this arrangement doesn’t jibe with your original intent.

(From Chrystie Bowie:)

For me, it’s all about the eyes. If they aren’t right, I immediately feel like I’m looking at a zombie. Now, I don’t have a problem with Sims2 because they’re still so stylized so I didn’t freak out like [some] did. With your iClone, the look was much more real-life but Finn had a live spark in his eyes so I immediately took to him. I’ll be interested to see if his eyes will track when you get him moving.

I think the Uncanny Valley is also more prevalent with older individuals. My generation is more resistant to the phenomenon and even resents animation that is on the crude side. I believe that as we get better and better at animation and simultaneously get more and more used to seeing it, the uncanny valley will become more shallow. Of course, look at live-action movies and shows that have explored the concept. AI Artificial Intelligence was basically about just that … the orgas discriminated against the mechas because of the Uncanny Valley. Then in Star Trek TNG, there were several episodes where Lt. Commander Data wasn’t taken seriously by other life forms because he was an android. Keep creating, keep making the medium more ubiquitous … because I personally would like to see a day when an author’s books can be translated to the screen with almost 100% faithfulness. Right now, that’s not possible because of the needs of human and animal actors and the cost of props and sets. If CGI characters played them and if there were realistic settings rendered by computers, the possibilities are endless.

You can definitely use my comments. If you want I’ll send you other musings about this subject that I’ve made in the past including how CGI animations would create possibilities for child characters in stories to be better represented because it would negate time restrictions due to child labor laws. Also, how the Uncanny Valley may be linked to the same mindset as racism and other forms of discrimination, that part of the psyche that makes people fear the unknown. It used to be a survival instinct but now it causes problems in modern society.

Well said, Chrystie. In fact, I think I shall adopt this position as my own when it comes to the matter. And because I don’t aspire to be a filmmaker and have no career to jeopardize by getting too close to the edge, I will brave the uncanny valley and risk disdain by “real” animators. To be honest, grousing from the animation community reminds me a bit of the sour grapes spewed by the traditional publishing industry when ebooks threatened to hijack the market. Now with ebooks outselling bound copies in nearly every venue, we’re hearing a completely different tune from New York’s Big 6 publishers. There’s a lesson in that, I think.

So here’s my official statement about the uncanny valley: I want my animation lifelike, just as I prefer realistic art over the abstract and prose over poetry. Therefore I will continue to produce little animated films and post them on YouTube, and hopefully a few authors will benefit from a low-budget book trailer that captures the essence of their novel. Sometimes I will get the animation right, sometimes I will miss the mark. Eventually technology will bridge the uncanny valley and moviegoers will grow accustomed to the look and feel of machinima. Until then, all I can do is keep moving forward with this and see where the path may lead.


Meet Finn

The countdown is on. Three days until my iClone trial version expires, and I don’t intend to waste a single minute.

For the past couple of days I’ve been playing around with a new character. Meet Finn Wilde, star of Amanda Borenstadt’s novel Syzygy:

Finn Wilde

Yesterday Amanda gave me the thumbs-up for this “actor”. I learned from making the Stonehaven machinima that casting is critical–the face we choose for the video will become the face of the novel for many people. I’m quite taken with this fellow–I wish he were real. I would definitely be a fan.

The process of character creation begins–at least for me–with a conversation. Amanda and I emailed back and forth a few times discussing her vision for the cast. Months earlier, she mentioned that the role of Finn would ideally be given to a young Russell Crowe. No problem, I thought. . .until I started hunting for high resolution photos of a young Russell Crowe.

It has never been my intention to hijack a real actor’s face for my project. But selecting a real person as a model for a character is the quickest and easiest way for a writer to communicate their ideas to me. I had planned to use a photo of young Russell Crowe to “skin” the iClone puppet, then modify bone structure so it didn’t look enough like Russell Crowe to invite a lawsuit. Alas, no usable photos of this man in his youth seem to exist on the Net. Yes, I did find some early pictures, but nothing of a quality I can work with. Face mapping photos must be high resolution, full face frontal with even lighting (otherwise one side of the puppet’s face will be darker than the other) and no teeth showing. Mug shots would be ideal.
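Those photo requirements can be summed up as a simple checklist. Here is a toy pre-flight check in Python; the resolution cutoff is my own invented stand-in (iClone doesn’t publish one that I know of), but the other criteria come straight from the list above:

```python
# Toy pre-flight check for a face-mapping source photo.
# The 1024-pixel cutoff is an arbitrary stand-in for "high resolution";
# the other criteria (frontal, even lighting, no teeth) are from the text.

def usable_for_face_map(width_px: int, height_px: int,
                        frontal: bool, even_lighting: bool,
                        teeth_showing: bool) -> bool:
    high_res = min(width_px, height_px) >= 1024  # arbitrary cutoff
    return high_res and frontal and even_lighting and not teeth_showing

# A well-lit mug shot passes; a grainy old snapshot does not.
print(usable_for_face_map(2048, 2048, True, True, False))  # True
print(usable_for_face_map(320, 240, True, False, False))   # False
```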

After days of searching, I finally discovered that Russell Crowe has a lookalike. A young actor named Ben McKenzie has often been compared to Crowe, as in the photo below:

So I revised my search and found this one of Ben:

which I altered in my graphics editor to become an iClone texture:


I imported this texture image into iClone and began the process of creating Finn Wilde.

Below is an embedded link to a 30-second “audition” of this character. Don’t expect to be wowed. . .at least not at first. Once I explain what’s important about the scene, I’m sure my excitement over iClone will make more sense.

Like before, because I rendered this clip from a trial version of iClone, that awful, ugly watermark is stamped all over the video. Also, please note that the bleedthrough (tattering) around the edges of the shirt is related to an improper bone movement I made with the right shoulder. You can’t really see the bad placement in the video, but let’s just say it was another lesson learned. 😉

If you watch carefully, you’ll see that as the camera pans around him, one corner of his mouth twitches upward, then a slow smile spreads across his face. This would be absolutely impossible with Sims 2.

In Finn’s mouth are top teeth (one of the complaints about Sims was that only their bottom teeth showed). They are dazzling white in this clip, but I can turn them any color I want by simply moving a slider in iClone. The amazing thing, at least to me, is that I had full control over this facial expression. I created it “from scratch” by working with a face key. This allowed me to first move each muscle group that I thought should be involved in a smiling animation, then go back into the “detail” panel and fine-tune each feature.

This is where iClone really starts flexing its muscle. In the screenshots below, you’ll see the four-viewport window of Milkshape, where I’ve imported a Sim head as well as an iClone puppet head. The thing I want you to notice is the number of white dots in each mesh. Each white dot is what’s called a “vertex.” Each vertex is assigned to a “bone” that controls its movement when animating.

(Screenshots: the Sim head and iClone head meshes in Milkshape. Click to enlarge.)

With only a glance, you should be able to note that the iClone head has quite a few more vertices than the Sim head. Why does this matter?

Connected to each dot in the mesh are lines, and these lines form the tiny triangles that make up the mesh. Each triangle is called a “face,” or a polygon (poly). If a mesh has lots of faces, or a “high poly count,” it takes a computer much more effort to render that graphic on the screen. Sims 2 is a game, and it’s played in “real time.” In other words, when you click on a character and tell it to go pee, you expect it to go pee immediately. If the Sims 2 bodies had a high poly count, it would take so much time for the computer to render each frame of their movement that you’d get what’s called “lag.” Lag is when the computer seems to freeze and think about what it should do next while the game actually continues at normal speed in the background. Say, for instance, you were playing a hunting game that required you to shoot a moving target. If the objects in the game had such high poly counts that they created lag, you might aim at the target, but by the time you had it in your crosshairs on screen, the game would register it as being on the other side of the meadow. You’d miss your shot every time.
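The arithmetic behind that trade-off is easy to sketch. Here is a back-of-the-envelope Python example; the triangle-throughput figure is a made-up number for illustration, not a benchmark of any real graphics card:

```python
# Rough sketch of the poly-count vs. frame-rate trade-off.
# TRIANGLES_PER_SECOND is a hypothetical GPU throughput figure,
# chosen only to make the arithmetic visible.

TRIANGLES_PER_SECOND = 60_000_000

def frames_per_second(triangles_per_frame: int) -> float:
    """Frames per second the card could draw if triangle
    processing were the only cost per frame."""
    return TRIANGLES_PER_SECOND / triangles_per_frame

# A low-poly game character vs. a film-quality one:
low_poly = frames_per_second(2_000)       # plenty of headroom, no lag
high_poly = frames_per_second(2_000_000)  # barely 30 fps before any other work

print(low_poly, high_poly)
```

Pile a few high-poly characters into one scene and the frame rate collapses, which is exactly the lag described above.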

Game designers learned long ago that if their product only appeals to people with high-end gaming systems, they’ll exclude most of their market. So to combat the issue of lag and outrageous system requirements, they create most objects in their game as “low poly” items. Yes, you sacrifice detail. But when you aim and shoot at that low-detail deer, he actually falls down.

Xbox and similar systems are made specifically for gaming, so polycount is much less of an issue and those fantastic, lifelike graphics are possible. Those of us who prefer computer interfaces must sacrifice some of the stunning visuals. Unless, of course, we’re fortunate enough to own a very expensive computer with a high-end graphics card and processor. Dear Santa. . . .

Sims 2 machinima directors are notorious for using custom content. Dissatisfied with the cartoonish appearance of the base game, they’ve crammed their “downloads” folder with items created by the community itself, not Maxis. Most of this custom content has a much higher polycount and was never intended for regular gameplay. It was for filmmaking only. As a director, I learned quickly that filming in real time produced choppy, laggy animation that no video editor could remedy. A trick of the trade? Film in slow motion. Veeeerrrrryyyy slloooooooow motion, as in minus-10X the normal speed.
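The logic of the slow-motion trick is simple time arithmetic. A quick Python sketch (all numbers hypothetical, not actual Sims 2 timings):

```python
# Sketch of the slow-motion filming workaround (numbers invented).
# Filming at one-tenth game speed gives the renderer ten times as long
# to finish each frame; speeding the clip back up in the video editor
# then restores normal playback, with every frame fully rendered.

def time_per_frame_ms(target_fps: float, slowdown: float) -> float:
    """Milliseconds the renderer gets per frame when the game
    runs at (1 / slowdown) of normal speed."""
    return 1000.0 / target_fps * slowdown

normal = time_per_frame_ms(30, 1)   # ~33 ms per frame: choppy if the scene is heavy
slowmo = time_per_frame_ms(30, 10)  # ~333 ms per frame: each frame renders cleanly

print(normal, slowmo)
```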

So if I knew all these workarounds for lag, why couldn’t I just mesh a whole Sim and make it do whatever I want?

To some degree, this has been done. New feet, new bodybuilder types, and fat meshes have all been created by the community. Yet they still use the same skeleton, which has a set number of bones. The game simply will not recognize a different bone hierarchy. So we might retexture faces and heads, but there’s no point designing a new mesh because no matter how many bones we add in Milkshape, the game will only animate the ones Maxis created. This is why I could do absolutely NOTHING about goofy Sim expressions, floating teeth, or half the stuff non-gamers complained about.
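That fixed-skeleton limitation can be illustrated with a toy Python sketch. The bone names here are invented, not actual Sims 2 bone names; the point is only that the game animates the bones it already knows about and silently drops anything extra:

```python
# Toy illustration of a fixed bone hierarchy (bone names invented).
# The game's animation system only moves bones in its built-in skeleton;
# any extra bone added to a custom mesh in Milkshape is ignored.

GAME_SKELETON = {"root", "spine", "head", "jaw"}  # fixed by the game

def animate(bone_keyframes: dict) -> dict:
    """Return only the keyframes the game will actually play."""
    return {bone: frames for bone, frames in bone_keyframes.items()
            if bone in GAME_SKELETON}

custom_mesh = {"head": [0, 5], "jaw": [0, 2], "left_eyebrow": [0, 9]}
played = animate(custom_mesh)
# "left_eyebrow" was added by the community, so the game drops it
print(played)  # {'head': [0, 5], 'jaw': [0, 2]}
```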

iClone is different. There is no “game” anywhere in it. The puppets have zero autonomy. There are no behavior algorithms constantly running in the background. But the visual detail is second to none. Look again at those screenshots of the head meshes. iClone has assigned bones to each and every one of those little dots and given us full control over how they’re used.

Doesn’t this create lag? Damn right it does. My poor laptop can’t even process a full 3D scene plus an avatar. It simply freezes. And yes, I can see this becoming a problem in the future. But iClone’s developers saw this coming and built “workarounds” directly into the software. During set creation and script-building, we have the option of turning off pixel shaders (a huge resource hog) and even viewing the set as a wireframe. This eliminates lag altogether during that phase. Then, when it’s time to film the scene, iClone provides a handy-dandy “by frame” option that renders each frame completely before moving to the next. This takes forever, but once it’s done, playback is flawless.

Now. . .I have other characters to create and audition for Amanda. Tom is next. He’s how I will spend my afternoon. 🙂