AiLive's LiveMove is apparently the application in the Wii SDK that allows designers to simply record their movements as input data for the Wii. Here's a video that demonstrates it.
Hot damn is that some cool shit. Of course, thinking about it after seeing it, I can't imagine another way to do it that seems even remotely practical. Why would you do it any other way? Either way, it looks like the entire process of making games just got a lot more fun.
What I think is especially neat is that by designing the input parameters this way, the designer is forced to know from the get-go whether the move feels the way he wants it to feel. It's pretty easy, even trivial, to shut your eyes, imagine frying an egg, fishing, or roping a calf with a lasso, and then perform the action in a way that just feels right.
Perhaps forcing designers to come at the experience design from this perspective - literally aping the player's interactions to develop the game's control scheme - is the genius element behind the Wii design.
I'm envious that you'll probably get a chance to develop for Wii before I will, but that does look flavin sick. I agree that the experiential context of "programming" the UI is revolutionary in its own right.
Posted by: Patrick | October 17, 2006 at 04:02 PM
Player acceptance of 3D user interfaces will depend on usability and performance evaluations, and on further guidelines for effective interfaces. How do you want to 'toss' a saved profile or 'flip' through a chat window?
You can also test and evaluate free-space interactive 3D actions using Virtools plus its VR package.
I can't wait to design new experiences for the Wii!
Posted by: Juan Ramon | October 17, 2006 at 04:10 PM
That is really cool.
I'm curious what kind of tolerance they allow for. What if my "chicken dance" is more or less animated than the designer's recorded one? What if it drifts over time (e.g., I get fatigued)?
Anyhow, very cool.
Posted by: Kim | October 17, 2006 at 08:07 PM
Kim: Having tried it at E3, and having seen all different sizes of people doing it, I am pretty sure it scales the motion. If the different components of each motion are scaled relative to one another, then the motion works. This means that the way Galactus fries an egg and the way I fry an egg are the same. Of course, Galactus doesn't eat eggs - he eats planets, so who knows if it would work for him.
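A rough sketch of what that relative scaling might look like (purely illustrative on my part, not AiLive's actual code): normalize each captured motion by its own peak magnitude before comparing, so the shape of the gesture matters but the size doesn't.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Sample { float x, y, z; };

    // Scale a captured motion by its own peak acceleration magnitude, so a
    // big sweeping gesture and a small compact one reduce to the same shape
    // before they're compared against the trained motion.
    std::vector<Sample> NormalizeMotion(const std::vector<Sample>& motion) {
        float peak = 0.0f;
        for (const Sample& s : motion)
            peak = std::max(peak, std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z));
        const float k = (peak > 0.0f) ? 1.0f / peak : 0.0f;  // guard against a motionless recording
        std::vector<Sample> scaled;
        scaled.reserve(motion.size());
        for (const Sample& s : motion)
            scaled.push_back({s.x * k, s.y * k, s.z * k});
        return scaled;
    }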
Posted by: Clint | October 17, 2006 at 08:52 PM
Hmm.. Interesting.
I'm trying to grok that, and whether everyone "scales movement symmetrically".
I guess I haven't seen GUI control panels offer asymmetrical sensitivity adjustment, so that's one sample point.
However, thinking about something that requires "reach" and "jump": two people of the same arm length might reach the same distance, but if one of them is really heavyset or old, or both, they might not jump as high.
Anyhow, it's still cool tech!
I posted some more thoughts here:
http://kpallist.blogspot.com/2006/10/toolz.html
Posted by: Kim | October 18, 2006 at 07:22 PM
Hey Clint,
I'm dorking around with this right now... agree that it seems obscenely useful, as long as it works well. So far it seems great; the only catch is that it is a gesture recognizer, so don't expect to use it for continuous movement.
---Mark
Posted by: Madsax | October 19, 2006 at 12:43 PM
>>it is a gesture recognizer, so don't expect to use it for continuous movement.
Interesting. I assume you mean it's not useful for basic movement and looking control, as in a shooter, which makes sense. But are there tools that allow you to create looping motions out of the gestures? If I have an egg-frying game, can I fry-fry-fry-fry-fry-fry, and will the tools let you indicate that a movement can loop back into itself as a 'continuous' motion? Maybe even slackening the accuracy requirement once you get a couple of iterations into it, so I wouldn't accidentally fry-fry-fry-stab-fry-fry so easily?
Posted by: Clint | October 19, 2006 at 02:22 PM
I'm still messing around with it, but basically you begin feeding the software accelerometer data once a particular event occurs (a button press or accelerometer spike). Then you keep feeding it data for however long you like, until some other event, like a button release or another accel spike (probably more likely a decel spike). Then you call the recognition routine, which compares that range of data against the motions you recorded earlier, and if it recognizes one it reports back which motion it was.
So you can do buttonpress-fry-buttonrelease-buttonpress-fry-buttonrelease. I have yet to see whether fry-fry will work right. It really does seem to demand that you know the start and stop of a motion, which is kind of a pain in the butt.
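In code, that flow looks roughly like this (the names here are made up for illustration; they're not the actual LiveMove calls):

    #include <vector>

    struct Sample { float x, y, z; };
    enum class Gesture { None, Fry, Stab };

    // Captures accelerometer data between a start event and an end event,
    // then hands the whole range to the recognizer in a single call.
    class GestureSession {
    public:
        void StartCapture() { samples_.clear(); capturing_ = true; }  // e.g. on button press
        void FeedSample(const Sample& s) { if (capturing_) samples_.push_back(s); }
        Gesture EndCaptureAndRecognize() {                            // e.g. on button release
            capturing_ = false;
            return Recognize(samples_);
        }
    private:
        Gesture Recognize(const std::vector<Sample>& captured) {
            (void)captured;  // stub: a real recognizer compares the captured
                             // range against each trained motion and returns
                             // the best match, or None
            return Gesture::None;
        }
        std::vector<Sample> samples_;
        bool capturing_ = false;
    };

    // So the buttonpress-fry-buttonrelease pattern becomes:
    //   on button down:  session.StartCapture();
    //   every frame:     session.FeedSample(ReadAccelerometer());
    //   on button up:    Gesture g = session.EndCaptureAndRecognize();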
It does have a bunch of slackness variables you can tweak to get your motions to recognize correctly, and it'll also let you record a "gold standard" motion and then have a particular player do "tuning" of that motion before trying to recognize it. That's pretty smart stuff.
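Roughly, the tuning idea looks like this (again, illustrative names and logic on my part, not LiveMove's): a tweakable tolerance on the distance between a captured motion and the gold standard, with per-player tuning swapping in the player's own baseline.

    #include <cstddef>
    #include <vector>

    struct Sample { float x, y, z; };

    // Crude motion distance: resample both motions to a fixed length and
    // average the squared differences. (A real recognizer would be smarter.)
    float MotionDistance(const std::vector<Sample>& a, const std::vector<Sample>& b) {
        const std::size_t N = 32;
        if (a.empty() || b.empty()) return 1e9f;
        float total = 0.0f;
        for (std::size_t i = 0; i < N; ++i) {
            const Sample& sa = a[i * a.size() / N];
            const Sample& sb = b[i * b.size() / N];
            const float dx = sa.x - sb.x, dy = sa.y - sb.y, dz = sa.z - sb.z;
            total += dx * dx + dy * dy + dz * dz;
        }
        return total / N;
    }

    struct TrainedMotion {
        std::vector<Sample> reference;  // starts as the designer's gold standard
        float tolerance = 0.25f;        // the "slackness" knob
    };

    // Per-player tuning: record the player's own attempt and, if it's in the
    // ballpark, use it as the reference from then on.
    void TuneForPlayer(TrainedMotion& m, const std::vector<Sample>& playerAttempt) {
        if (MotionDistance(m.reference, playerAttempt) < m.tolerance * 2.0f)
            m.reference = playerAttempt;
    }

    bool Matches(const TrainedMotion& m, const std::vector<Sample>& captured) {
        return MotionDistance(m.reference, captured) < m.tolerance;
    }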
All in all it does seem quite useful; have to play with it more to figure out the exact constraints of its use though.
Posted by: madsax | October 21, 2006 at 01:57 AM