From the glowing rectangle in our pockets to the one on our desks, we live in a world dominated by screens. But the advent of contextually aware devices might change the way we interact with technology. Sooner rather than later, we may transition to a screen-less world, where we control machines through our voice, movements, and even thoughts.
Welcome to zero UI.
What Is Zero UI?
Right now, web design is mostly visual. And rightly so, considering that almost all of the devices we use in our day-to-day lives have screens. From our smartphones to our computers and our TVs, we control these devices through a visual interface.

But as technology advances and the Internet of Things (IoT) becomes commonplace, our devices will be able to hear our words and anticipate our needs before we even type them out.
It might all sound like the stuff of Sci-Fi movies, but zero UI isn’t such a novel idea. If you’ve ever used Siri or Amazon Echo, then you’re already a bit familiar with the zero UI experience.
Zero UI is, therefore, a new paradigm where we control devices through our voice, movements, glances, or thoughts. The goal is to move away from screens and begin interacting with the systems around us as naturally as if we were communicating with another person.
How Will Zero UI Change Design?
We know what you’re thinking: will zero UI mean the end of the visual interface?
Well, not necessarily. Instead, it will be a new paradigm where the interfaces we engage with so much today will fade into the background, allowing us to connect with our devices more naturally and straightforwardly.
However, designers will have to change the way they build their systems. If today they think in linear sequences (what is the user trying to do right now?), they will need to move to a multidimensional approach (what might the user be trying to do across multiple scenarios?).
Here’s an example. If you were to ask Siri a question (“Who won the Oscars last year?”) or make a statement (“Call my boss”), the system would respond to each request. But if you were to say “Call my boss, Oscar’s winners, and tell me the predictions for this year,” the system would probably freeze. Because it was designed linearly, it can’t handle this unstructured train of thought.
But in the future, designers will need to build systems that are capable of interpreting our needs and thoughts and adapting in real time.
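The gap between the two approaches can be sketched in code. This is a deliberately toy illustration, not how Siri or any real assistant is implemented: a linear handler maps one utterance to one intent, while a multi-intent handler first splits a compound request into parts. All intent names and phrasings here are made up for the example.

```python
import re

# Hypothetical intent table for the article's Siri example.
INTENTS = {
    "call my boss": "dialing boss",
    "who won the oscars last year": "listing winners",
    "tell me the predictions for this year": "listing predictions",
}

def linear_handle(utterance: str) -> str:
    """Linear design: one utterance maps to one intent, or fails."""
    return INTENTS.get(utterance.lower().strip("?. "), "sorry, I didn't get that")

def multi_intent_handle(utterance: str) -> list[str]:
    """Naively split a compound request on commas and 'and',
    then resolve each part independently."""
    parts = re.split(r",| and ", utterance.lower())
    return [linear_handle(p.strip()) for p in parts if p.strip()]
```

A compound request like “Call my boss, and tell me the predictions for this year” defeats `linear_handle` but is resolved piecewise by `multi_intent_handle`. A real multidimensional system would of course need far richer context modeling than comma splitting.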
How Can You Design Zero UI for Screenless Interactions?
That is the question on designers’ minds right now. What tools and skills will they need to deliver a flawless zero UI experience?
Experts believe that designers will have to expand their horizons beyond design alone and become proficient in psychology, biology, data analysis, and other subjects. Why? The better designers understand the “why” behind user behavior, the stronger the zero UI designs they can create for that audience.
It might seem like an impossible task, but various brands have already integrated zero UI into their strategies. Here are a few examples:
- Contextually Aware Devices: Devices that are contextually aware don’t need much input from a user to predict their needs. Domino’s Pizza, for example, has built a Zero-Click App that works on the premise that if you launch it, you want pizza, so they deliver it to you. If you open the app accidentally, you can cancel the order by closing the app within ten seconds.
- Haptic Communication: This type of technology provides you with motion or vibration-based feedback. For example, smartwatches or fitness trackers use haptic communication to notify you when you’ve received a message.
- Gesture-Based Interactions: Gestures are a bit more difficult to implement. Take the example of a motion-controlled TV. A senior who lived in the era of analog television might twist an imaginary dial to turn the volume up, while a teenager might give a thumbs-up to signal the device. A zero UI system needs access to a wide range of behavioral data to interpret different gestures. Google’s Project Soli, a wireless gesture-recognition technology, is the closest we’ve come to an intuitive interface.
- Voice Recognition: Voice search is becoming more and more popular, with experts suggesting that adoption will double in the next five years. Already we have systems like Siri, Google Voice, or Amazon Echo that can interpret our commands with great accuracy. In fact, the word-error rate for voice recognition systems is as low as 6.3%, which is close to human performance. The remaining challenge is making these systems understand slang and regional dialects.
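The zero-click pattern in the first bullet boils down to a simple state machine: launching the app implicitly places an order, and the order commits automatically unless it is cancelled within a grace window. Here is a minimal sketch of that idea, assuming a ten-second window as in the Domino’s example; the class and method names are hypothetical, not Domino’s actual API.

```python
import time

GRACE_SECONDS = 10  # cancel window described in the article's example

class ZeroClickOrder:
    """Hypothetical zero-click flow: creating the object == placing the order."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock            # injectable clock makes this testable
        self._started_at = clock()
        self._cancelled = False

    def cancel(self) -> bool:
        """Cancelling only succeeds inside the grace window."""
        if self._clock() - self._started_at < GRACE_SECONDS:
            self._cancelled = True
        return self._cancelled

    def is_committed(self) -> bool:
        """The order commits once the window passes without a cancel."""
        return (not self._cancelled
                and self._clock() - self._started_at >= GRACE_SECONDS)
```

The design point is that the default action happens with zero explicit input; the user only acts to *prevent* it, which is the inverse of a conventional confirm-to-proceed UI.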
The Final Word
Don’t take the term “zero UI” literally. There are always going to be users interacting with a device in one way or another. However, designers need to look beyond the screens and understand that the world is moving in a direction where the way we communicate with devices will change.