A few months ago I received an email from a Microsoft partner asking me to write about the future of interaction. His question: what is next in this field, considering the innovations that came with Microsoft Surface and the Apple iPhone?
First of all, let me remind you that this blog is just a space for my personal opinions, not Microsoft's. And being a personal space, it allows me to talk not only about Microsoft products but about ANY products and interfaces I admire.
Like everybody, I was amazed when I first saw the iPhone and Surface. I had seen several projects using this kind of interaction in the past, but all of them were restricted to specific environments, often requiring elaborate setups to operate. The beauty of Microsoft's and Apple's products is that they are, for the first time, spreading the technology to the world. Now it is a commodity. Today almost every cellphone has a touch interface, Windows 7 will embrace the technology (in fact, HP already has a multi-touch screen computer running Windows Vista), Apple brought it to the MacBook touchpad, and the list goes on. As you already know, the concept is pretty simple: touch whatever you want. Your fingers are in control.
This kind of interaction, however, is not the answer to every problem. Are you happy filling out long forms using an on-screen keyboard? Can you play your favorite but complex games using just one or two fingers? I am not, although I recognize the great job some game developers are doing with the multi-touch SDKs available.
What about voice recognition?
An older but very effective way to interact with machines is voice recognition. Have you seen how much progress we have made in this area? It is present everywhere, from supermarket kiosks to call centers. Use your voice to state your intentions. To be honest, this is a technology I expect to see developers and designers using more in the near future, especially when voice recognition engines become friendlier to languages other than English. Don't you think it would be nice to talk naturally, as we do in real life, and get your results back?
Progress in motion capture technology has changed the motion picture industry, and it is close to doing the same in the game industry. Can you imagine animating Gollum or a dinosaur using stop-motion animation? No way!
The concept is simple: 1) define some control points on the object to be animated, 2) capture the object with a camera, 3) use software to find the control points in the image, and 4) process the results.
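To make step 3 a little more concrete, here is a toy sketch of what "finding control points in the image" can mean. It assumes the control points show up as bright blobs in a grayscale frame; the function name, the threshold value, and the tiny hand-made frame are all illustrative, not part of any real motion capture SDK.

```python
def find_markers(frame, threshold=200):
    """Return (row, col) centroids of connected bright regions in a frame.

    `frame` is a small grayscale image: a list of rows of 0-255 values.
    """
    rows, cols = len(frame), len(frame[0])
    seen = set()
    markers = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill one bright blob, then average its coordinates.
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                cy = sum(y for y, _ in blob) / len(blob)
                cx = sum(x for _, x in blob) / len(blob)
                markers.append((cy, cx))
    return markers

# A 5x6 frame with two bright markers painted on it.
frame = [
    [0,   0,   0,   0,   0, 0],
    [0, 255, 255,   0,   0, 0],
    [0, 255, 255,   0,   0, 0],
    [0,   0,   0,   0, 250, 0],
    [0,   0,   0,   0,   0, 0],
]
print(find_markers(frame))  # [(1.5, 1.5), (3.0, 4.0)]
```

Real systems do far more (sub-pixel accuracy, tracking points between frames, handling occlusion), but the core idea is the same: turn pixels into a handful of coordinates the animation software can process.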
A great example is the Wii Remote, although it uses a different kind of motion capture: the remote combines accelerometers with an infrared camera that tracks LEDs on the sensor bar, so the console can work out where the remote is pointing and how it is moving, and replicate that in the games. No more 515 buttons on your joystick! Move your remote and you are done!
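To give a flavor of the accelerometer side of this, here is a minimal sketch (definitely not the actual Wii firmware, which also fuses in the infrared tracking) of a standard trick: when the remote is held roughly still, gravity dominates the accelerometer reading, so the tilt of the device can be estimated directly from the per-axis components of the measured acceleration.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate (pitch, roll) in degrees from accelerometer readings in g.

    Assumes the device is near-stationary, so the measured acceleration
    is mostly gravity. Axis names follow a common convention: x forward,
    y sideways, z up through the face of the device.
    """
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Remote lying flat: gravity entirely on the z axis -> no tilt.
print(tilt_from_accel(0.0, 0.0, 1.0))  # (0.0, 0.0)
# Remote rolled 90 degrees onto its side: gravity on the y axis.
print(tilt_from_accel(0.0, 1.0, 0.0))  # (0.0, 90.0)
```

This is why waggle-style gestures feel so responsive: a couple of trigonometry calls per sample are enough to turn raw sensor values into angles a game can use.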
Although I like the Wii concept, I am not a fan of its games (I think they lack graphic quality), nor of its model of interaction (OK, I am against the world, you may say). I never liked the idea of wearing special glasses, or any other kind of device, to be immersed in a virtual reality game. They are simply not natural, and for me the Wii Remote is very similar to this model, as you have to hold the remote (the Wii Balance Board is cool, though!).
One thing that caught my attention lately is Project Natal, from Microsoft (to be honest, I know as much as you do about this project). The idea (using the basic principle I mentioned above) is to capture player movements using a camera attached to the Xbox 360. No need to wear any kind of device: do your moves and the software does the rest. Curious? See the first public presentation at http://www.joystiq.com/2009/06/11/video-project-natal-invades-late-night-with-jimmy-fallon/
So, are these technologies the future?
The fact is, we use all five of our senses to interact with the real world, and no current technology alone yet provides the full immersion we would like to have. Touch is not the answer to every problem, and neither is the Wii Remote or even Project Natal, but we have to agree they are great steps toward the future of interaction.
With these public technologies, touch, vision, and audio are already covered. Now it is time to see what is coming for smell and taste. Wouldn't you like to taste some food and have all the ingredients detected by your tongue? Can you imagine being at home, tasting a good cheese or wine, and having its name or brand automatically identified? Or just saying to your computer: please buy me the cologne I am smelling now.
Futurology aside, the advances in interaction technologies are amazing. Expect to see in the near future deep interactions like the ones you find in installations at several museums, where you can not only see, hear, and touch things but also smell the environment. Imagine playing a game where you can smell the fear. It would be nice, no?
Stop imagining! Download the available SDKs and start changing the world!