
The Future of Interaction

Minority Report gave us a glimpse into the future of interaction. People were very excited about the potential of standing in front of your computer in a pair of fancy gloves, throwing your content around the screen whilst looking cool.

Ten years later, we don’t really see this type of interaction used much in everyday computing. Why? Perhaps it’s because we just don’t look as cool as Tom Cruise.

Having seen presentations from research labs at MIT that are working on these interface technologies, we certainly have the technical capacity to make Minority Report a possibility. What has been realised, though, is that this type of interaction isn’t necessarily useful or intuitive. And unless you’ve got shoulders like a pro wrestler, you’ll very quickly be wishing you had a mouse back in your hand.

With the adoption of the iPhone and iPad we have seen touch screens and gesture recognition become natural and widely used. But move this to the desktop and it doesn’t work. New interaction techniques must improve the user experience, and having to raise your arm to your screen to perform simple tasks doesn’t achieve that.


So for me this is the major issue: although some new technology can recognise gestures on a smaller scale, can it ever be accurate enough? By taking away the one point of interaction that defines your purpose (touch), how will a computer fully understand my intention? Am I pinching to expand an image, bring up a TV guide, or just picking my nose?
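
To make the contrast concrete, here’s a minimal sketch of why touch removes that ambiguity: two explicit contact points mark where interaction begins, and a pinch is simply the change in distance between them. It uses the standard browser TouchEvent API; the "viewer" element and the onPinch handler are hypothetical stand-ins.

```typescript
// Minimal pinch detection using the standard browser TouchEvent API.
// The element id ("viewer") and the onPinch callback are hypothetical.
const el = document.getElementById("viewer")!;
let startDistance: number | null = null;

// Distance between the first two touch points.
function touchDistance(touches: TouchList): number {
  const dx = touches[0].clientX - touches[1].clientX;
  const dy = touches[0].clientY - touches[1].clientY;
  return Math.hypot(dx, dy);
}

el.addEventListener("touchstart", (e: TouchEvent) => {
  // Two fingers down is an unambiguous signal that interaction has started.
  if (e.touches.length === 2) startDistance = touchDistance(e.touches);
});

el.addEventListener("touchmove", (e: TouchEvent) => {
  if (e.touches.length === 2 && startDistance !== null) {
    const scale = touchDistance(e.touches) / startDistance;
    onPinch(scale); // hypothetical handler: scale > 1 expands, < 1 shrinks
  }
});

el.addEventListener("touchend", () => {
  // Lifting a finger ends the gesture just as unambiguously.
  startDistance = null;
});

function onPinch(scale: number): void {
  console.log(`pinch scale: ${scale.toFixed(2)}`);
}
```

A camera-based gesture system has to infer all of this from a moving body, with no equivalent of that unambiguous “two fingers down” signal to mark where interaction starts and stops.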

Touch works for phones and tablets, a mouse or trackpad works for desktops, and gesture perhaps has a place in the here and now for looking cool, e.g. TV presenters with gesture-controlled weather screens or interactive teaching in classrooms.

At a Microsoft Kinect hacking demo they showed off the commercial Kinect’s close-range gesture recognition, which tracks at a distance of 18″ and is therefore very accurate. But I still believe the better, more accurate user experience is going to come through touch.

Gaming, the primary purpose of the Kinect, is very different: it is less about accuracy, less about simple and intuitive interface control, and more about fun. This, I think, is certainly where gesture recognition has a huge future.

The first of two things I believe will change how we interact is the ability to make any surface a touch interface. For example, conductive ink can be used to turn any surface into a touch screen, and it can react not only to touch but also to other objects. Wallpaper, paint and windows create many possibilities: rather than single-screen, single-user interfacing, we move towards multi-screen, multi-user interactivity.

There is much talk about the “web of things”, where all of your devices are expected to communicate with each other. Perhaps this connectivity will mean users have to interact less, as much of what you do or require of your devices becomes automated.

The second thing, which is perhaps more of a desire of mine, is bio-hacking: tapping into the body. Rather than accepting the latency or inaccuracy of gesture recognition by a device, we physically become the controller. Small or micro movements can control with great accuracy (e.g. touching your thumb and fingertip together to start an interaction), which gives much greater control. Perhaps as science begins to understand more about the patterns and activity of the brain, as has started to happen recently, we’ll see more interactivity based on your thoughts. Contact lenses with a head-up display could feed content directly into your field of vision; mix this with augmented reality and all the information you need is right in front of you.

Interactivity has become very “head down” with the increase in consumption from mobile devices. As these devices become smarter, as they get to know you better, as you share more information with them, and as the cloud knows more about you, your friends, your likes and dislikes, you will inevitably have to interface less with your devices, because they will serve you the content you want or need.

Siri is currently the best voice recognition we can feasibly fit in a smartphone at a feasible price point. The underlying capabilities are already more advanced and will continue to improve.

A smart home of connected devices, monitoring your movements, recognising your patterns, anticipating your needs and listening to your commands is not very far away.

I think we will continue to need a mix of these interactive possibilities. Sitting in a quiet conference, you are unlikely to speak loudly and clearly to your smartphone to unlock it; there is context and suitability to consider. Personally, waving my arms around in huge gestures like Tom Cruise during a presentation at work, I’d feel like a bit of a dick.

There is research in progress into physical devices that communicate information to you without the need to look. Imagine interacting with your phone, or your phone interacting with you, through more advanced feedback: weight shifting inside the phone to guide you to where you are going, the temperature of the phone changing depending on how much social conversation is happening, your phone beating like a heart or growing in size to represent data. All very possible and all within reach.
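
The nearest mainstream equivalent today is vibration. As a rough sketch of the idea, assuming a browser that supports the standard Vibration API (navigator.vibrate), you could encode a simple quantity, say unread messages, as a pulse pattern you feel rather than see; the unread-count source here is a hypothetical stand-in.

```typescript
// Encode a small count as a vibration pattern you can feel without looking.
// Uses the standard Vibration API (navigator.vibrate); not supported in
// every browser. The unread-message count is a hypothetical input.
function buzzCount(count: number): void {
  const pulses = Math.min(count, 5); // cap so the pattern stays readable
  const pattern: number[] = [];
  for (let i = 0; i < pulses; i++) {
    pattern.push(150, 100); // 150ms buzz, 100ms pause per item
  }
  navigator.vibrate(pattern);
}

buzzCount(3); // three short pulses: three unread messages
```

Weight, temperature and shape would carry far richer signals, but the principle is the same: the device pushes information through a channel that doesn’t require your eyes.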

From an interaction perspective, simplicity for the user is key to the future: making interaction more natural, quicker and smarter.

On a final note, the fact that you can patent a gesture means there can be no consistency between platforms, which inevitably makes gestures less likely to be widely adopted.