Lip Sync Coming to Second Life… And Why This Could Be the Most Important News All Year

This is the best news I’ve heard in a while regarding Second Life: lip sync is going to be available to anyone using the official Second Life client. It’s also possibly one of the most important innovations to come to SL in years.

Mike Monkowski, a former IBM speech group programmer now in IBM’s semiconductor development group, has spent over six months diligently incorporating lip sync code he developed independently into the Second Life client. According to an announcement from Mike a couple of days ago, his lip sync code has been added to the official Second Life client.

The code for basic lip sync has been included in the latest Release Candidate viewer; however, it is disabled by default. To enable it, you first have to enable the "Advanced" menu by pressing Ctrl-Alt-D (all three keys together, not to be confused with Ctrl-Alt-Delete). Then, in the Advanced menu, select Character, then Enable Lip Sync (Beta). Whenever someone uses voice chat and you see the green waves above the avatar’s head, you should also see the avatar’s lips move. When an avatar has an attachment covering its head, though, the attachment is not animated. Sorry, furries.

— Mike (SL: Mm Alder)

I want to give some background as to why lip sync in Second Life is potentially so very, very important in the long term.

The ability to communicate via voice within the SL client was added last year. While heralded by many as a way to open up communications in the virtual world, it was derided by others. They believed the arrival of voice chat would warp the balance of social interactions in Second Life, creating a schism between those who used regular text chat and those who communicated by voice. Whether or not that has come to pass is worth a long article all on its own.

For those of us who have taken up using SLVoice on a regular basis, one of the biggest complaints has been that avatars stand physically mute and unresponsive even while an entire group of people is having an active, raging discussion via voice.

This ties into something called the Uncanny Valley. When something looks and acts very nearly human but doesn’t succeed in duplicating the slight unconscious body-language cues our brains look for, many people feel a revulsion toward the thing. While this is most often associated with robots that try to look and act human, it can also happen in reverse. A person injured in an accident or by a health problem can suffer damage that prevents their face or body from moving naturally. As observers, we can instantly tell "something" is wrong with these victims, even if we cannot immediately put a name to what we have already unconsciously recognized.

The concept of the Uncanny Valley has grown increasingly important over the last few years as robotics engineers around the world build increasingly sophisticated machines to interact with people. As people interact with smart, interactive service machines over the coming decades, the ones that try to look the most like us could be the biggest failures. That is, until systems designers and human behaviorists learn what our cues are and build them into our coming artificial brethren. Human beings are not as complicated as we would like to think.

It’s not just in robotics that this has relevance, however. The Valley has much more immediate pertinence to virtual worlds. Hundreds of millions of people are already interacting with virtual entities that act exactly like humans but look like computer game characters. Each other.

Whether you are in World of Warcraft, Second Life, or Habbo Hotel, you are interacting with other people through virtual-entity interfaces. Some of these look more real than others, and the closer to real you get, the closer you move to the edge of the Valley.

Because of our capacities for empathy and communication, we are able to work around the social limitations this introduces. But still, it’s not enough. Many of us have very good friends, living all over the world, whom we’ve met through virtual worlds. I’ve been lucky to meet a handful in person, but otherwise our entire experience of knowing each other is through the digital bodies we have crafted for ourselves. If I am typing in chat or talking via SLVoice with other people, I generally focus on their faces. But watching a "thing’s" face, with its non-responsive eyes, dead skin, and unmoving mouth, mostly doesn’t let me connect the voice I hear or the words I read to a human identity. My rational brain makes that connection, but the inner social monkey part of my brain can’t.

To reference the oft-referenced Snow Crash, I want to bring up the character of Juanita Marquez. Her specialty in the development of the Metaverse is the facial animation of avatars. She recognizes that the importance of human social interaction in virtual worlds hinges on people’s ability to communicate the way we do in person. In Snow Crash, most of the early engineers overlook this but, once she has made her contribution, it revolutionizes the way people in the Metaverse interact. Even the engineers.

Second Life isn’t the first "virtual world" to do at least some form of lip sync for human speakers that are driving avatars. According to Mike Monkowski himself, even his work is a basic kind of lip sync. That does not diminish its importance though. Second Life is by far the most creatively open virtual world platform that has yet been built. Though it does not have the widespread gamer adoption of a game like World of Warcraft, the ability for its users to have so much creative control within the world makes its long-term importance much greater than any other.

To make a prediction: the addition of lip sync to Second Life will fundamentally change the way people perceive their relationships with each other when they communicate using avatars and voice. When you are talking to your friends, or talking to a group, this will increase the realism of the experience. By making the suspension of disbelief easier, it allows people to identify more deeply with their avatar as an extension of their own person.

I have been interested in this line of thought for quite a while. Late last year I released what is, arguably, the most advanced voice-activated lip sync simulator in Second Life. The product is called Motor Mouth, and it uses the Linden Scripting Language (LSL) to carefully manipulate a series of facial animations built into SL, creating relatively life-like mouth and lip movements when the user talks over SLVoice. So many copies of Motor Mouth have sold that I firmly, firmly believe people desire increased realism when interacting with avatars. My creation works as well as is possible within the confines of Second Life. The first time I tried Mike’s beta lip sync client, however, I was astounded. I knew immediately that his work would put Motor Mouth into the trash heap of history, and I am absolutely thrilled to throw out the garbage.
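For the curious, here is a minimal sketch of the general script-side technique an attachment like Motor Mouth can use: triggering SL’s built-in facial animations on a jittered timer. This is not Motor Mouth’s actual source, just an illustration. LSL scripts cannot see SLVoice activity at all, so the toggle here is a hypothetical owner chat command on an arbitrary channel.

    // Illustrative sketch only -- not Motor Mouth's actual source.
    // Fakes lip movement by playing SL's built-in open-mouth animation
    // on a slightly randomized timer. LSL cannot detect SLVoice
    // activity, so the owner toggles it with "talk on"/"talk off"
    // on chat channel 9 (an arbitrary choice for this example).

    integer gTalking = FALSE;

    default
    {
        state_entry()
        {
            // Attachments are automatically granted this permission.
            llRequestPermissions(llGetOwner(), PERMISSION_TRIGGER_ANIMATION);
            // Listen only to the owner on channel 9.
            llListen(9, "", llGetOwner(), "");
        }

        listen(integer chan, string name, key id, string msg)
        {
            if (msg == "talk on")
            {
                gTalking = TRUE;
                llSetTimerEvent(0.3);
            }
            else if (msg == "talk off")
            {
                gTalking = FALSE;
                llSetTimerEvent(0.0);
                llStopAnimation("express_open_mouth");
            }
        }

        timer()
        {
            if (!gTalking) return;
            // Randomly alternate the open-mouth pose for a crude
            // approximation of speech movement.
            if (llFrand(1.0) > 0.4)
                llStartAnimation("express_open_mouth");
            else
                llStopAnimation("express_open_mouth");
            // Jitter the interval so the motion looks less mechanical.
            llSetTimerEvent(0.2 + llFrand(0.3));
        }
    }

Even at its best, a script like this is guessing: it animates a rhythm, not the actual speech. Mike’s client-side code can watch the real voice signal, which is a large part of why it immediately looked so much better.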

Aside from the massive change that integrated lip sync will bring to Second Life, it has important implications for the thousands of people interested in creating machinima in SL. On-screen characters barely register an emotional impact if you can’t see them express emotion themselves. Seeing a character actually talking, even if it’s only a rough approximation of mouth and facial movement relative to the words being spoken, adds an important and necessary level of engagement with the story being told.

As a prime example, please watch "Jill’s Song". The video is included below. While this amazingly beautiful video was not made in Second Life, it clearly shows us a direction that we need to move towards as quickly as possible in terms of avatar animation. Everything in this piece was done right, from the voice acting to the expression of emotion on the characters’ faces. I connected with this character and felt my own memories of a lost love rise and fall in rhythm with the story being told. Except the end! Every great story has a twist, and this one threw me for a loop.

I would like to congratulate Mike Monkowski for an astounding addition to the Second Life experience. He did this completely on his own, and I can’t thank him enough. He has fundamentally changed the budding metaverse in a way that should be remembered for a very, very long time.
