Welcome To Zero UI
In the screenless future, how will we interact with our technological devices?
What will the User Interface (UI) of the future look like?
We’re already getting some sneak peeks: Amazon has just announced the Echo, a slim cylinder standing nine inches tall on your kitchen table, ready to hear your voice commands. It responds to the name ‘Alexa’ and can be yours for $180: ask it to play some Led Zeppelin, tell you what the weather will be like tonight, or how many grams are in a cup of flour. The Echo can also set a timer, access your calendar and control the lights if you have the necessary kit.
As voice-controlled devices go, the Echo holds up pretty well, according to Walt Mossberg at ‘Re/Code’: “In our house, it has been pretty nice to have in the kitchen, if only for the on-demand, hands-free newscasts and music.” Alexa can hear you from across the room, and most of the time she gets things right, even though she’s a bit particular: you have to say “United Kingdom” rather than “England” when asking about English towns, for example. But chances are she’ll learn and improve on those kinds of details: Amazon is working on improving the Echo, having just issued a developer’s kit and launched a fund to underwrite new apps.
The Echo could be the kind of device we’ll all have in our houses in the future, acting as a control centre in the Internet of Things-enabled home. Andy Goodman, group director at design and innovation group Fjord, outlined his ideas for what he calls ‘Zero UI’ at the Solid Conference, held in San Francisco in June 2015. Zero UI is a user interface not constrained by screens, but instead designed with haptic, automated or ambient elements. The idea is to get away from the touchscreen and interact more naturally: by speaking to the device, having it respond to eye movements, or having it predict our needs.
“Designers will have to become experts in science, biology, and psychology to create these devices – stuff we don’t have to think about when our designs are constrained by screens,” Goodman told ‘FastCoDesign’. It means moving away from a linear workflow to a multi-dimensional process: voice-controlled devices will need to handle more complex requests than “How is the weather?” Instead, Zero UI devices will need to grasp how people actually think: “Do I need a jacket to go outside?”
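To make that difference concrete, here is a minimal sketch in Python of what it means for a device to answer the jacket question rather than the weather question: the assistant has to resolve an intent (will I be comfortable outside?) against context, not just read back a forecast. The function names, thresholds and phrasings are hypothetical illustrations, not how the Echo or any Fjord prototype actually works.

```python
# Sketch: "Do I need a jacket?" is an intent about comfort, not a request
# for raw weather data. The thresholds below are illustrative only.

def needs_jacket(temperature_c: float, wind_kph: float, raining: bool) -> bool:
    """Rough comfort heuristic combining temperature, wind and rain."""
    feels_cold = temperature_c < 12 or (temperature_c < 16 and wind_kph > 25)
    return feels_cold or raining

def answer(question: str, context: dict) -> str:
    # A real assistant would use speech recognition and intent classification;
    # here we only branch on a couple of hypothetical phrasings.
    q = question.lower()
    if "jacket" in q:
        return "Yes, take a jacket." if needs_jacket(**context) else "No, you'll be fine without one."
    if "weather" in q:
        return f"It's {context['temperature_c']} degrees with {context['wind_kph']} km/h wind."
    return "Sorry, I didn't catch that."

print(answer("Do I need a jacket to go outside?",
             {"temperature_c": 9.0, "wind_kph": 30.0, "raining": False}))
```

The point of the sketch is the extra reasoning step: the same context data can answer both questions, but only the intent-aware branch turns it into the answer a person was actually asking for.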
“As we move away from screens, a lot of our interfaces will have to become more automatic, anticipatory, and predictive,” said Goodman. The Nest thermostat is a good example of this: you set it up once, then it learns to anticipate what you want based on how you interact with it. Similarly, a young person and an older person might prefer different hand gestures for controlling the volume of their TV.
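In the same spirit, here is a small, hypothetical sketch of anticipatory behaviour: a thermostat that remembers the user’s manual adjustments by hour of day and predicts a setpoint from them. It is not Nest’s actual algorithm, just an illustration of an interface learning from the person instead of waiting to be told.

```python
from collections import defaultdict
from statistics import mean

class LearningThermostat:
    """Illustrative anticipatory thermostat: learns preferred setpoints per hour."""

    def __init__(self, default_c: float = 20.0):
        self.default_c = default_c
        self.history = defaultdict(list)  # hour of day -> observed setpoints

    def record_adjustment(self, hour: int, setpoint_c: float) -> None:
        """Remember what the user chose at this hour."""
        self.history[hour].append(setpoint_c)

    def predict(self, hour: int) -> float:
        """Anticipate a setpoint: average of past choices, else the default."""
        past = self.history[hour]
        return mean(past) if past else self.default_c

thermostat = LearningThermostat()
thermostat.record_adjustment(7, 21.5)   # user warms the house before breakfast
thermostat.record_adjustment(7, 22.0)
thermostat.record_adjustment(23, 17.0)  # and cools it at night
print(thermostat.predict(7))   # anticipates the morning preference (~21.75)
print(thermostat.predict(12))  # no data yet, so it falls back to the default
```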
Goodman admits the use of the term ‘Zero UI’ isn’t meant to be taken literally: “There are always going to be user interfaces in some form or another, but this is about getting away from thinking about everything in terms of screens.” It’s a fascinating idea: machines learning how to behave from humans, rather than the other way around. While computers have become a lot better – now often suggesting what may be wrong instead of just flashing stubborn error messages – it’s still down to us to adjust to them. With Zero UI, the tables could soon be turning. After all, the Echo has already learned to play rock-paper-scissors, and Alexa knows we will all be excited to make Star Trek jokes with her. Ask her to make you a cup of tea (Earl Grey, hot), and she’ll respond: “Unable to comply. Replicators are offline.”
Is true artificial intelligence closer than we think? Find out more in this blog post.