Humans and computers unfortunately don’t speak the same language, or at least not yet. We need interfaces to communicate with each other. Interfaces capture our input, translate it for the computer, let the computer perform the requested actions, and feed the results of those actions back to us in a human-readable format.
Adapting to the limits of technology
For the past four decades, we’ve been using the graphical user interface (or GUI) to interact with computers. But hovering a mouse or our fingers over a glass screen, clicking and navigating our way through lists and menus, fundamentally doesn’t come naturally to us.
By contrast, our natural way of gaining information, completing tasks, and fulfilling needs is to engage all of our senses in a multi-dimensional environment. We use our eyes to observe the world around us, our ears to capture sounds and listen to other people, our voice to express ourselves, and our hands and bodies to feel and handle objects.
Technology is a human-made tool. So as technology and its interfaces evolve, we should steer that evolution so that technology adapts to us, rather than us adapting to technology.
Moving beyond the screen
A series of recent technological advances are allowing us to interact with technology in more natural ways.
Technologies like augmented and virtual reality, voice commands and conversational interfaces, gestural controls and haptic feedback allow us to extend our interfaces beyond the screen and facilitate more natural and intuitive interactions.
Leveraging these, we can start building interfaces that are closer to our natural behaviours by engaging more of our senses.
Augmented reality allows us to add a digital interface layer on top of the real world, taking away the metaphorical abstraction of purely graphical interfaces. You perform actions and tasks by looking at the real world, with digital information added on top of it, rather than looking at purely digital representations.
Virtual assistants like Siri and Google Assistant have already made us familiar with using our voice to interact with technology. Devices like the Amazon Echo, Google Home and Apple HomePod, along with countless chatbots, radically do away with the visual interface, using voice both as input (people talking to the device) and output (the device talking to people).
3D Touch and other forms of haptic feedback can recreate the sense of physically touching something through subtle forces, vibrations or motions. It’s this technology that allowed Apple to remove the physical home button on the iPhone 7, while still giving you the illusion that you’re pressing the same physical button you were used to. So even though the interface you’re controlling is virtual, you physically feel natural feedback.
Virtual reality can immerse your whole mind, body and all of your senses in the virtual experience. Your entire presence and movement control your actions. By walking around the room, looking at things, or touching them with hand controllers, you control the natural interface simply by being in it.
Devices adapting to our needs
In an article he wrote in 2011, Bill Gates said the following:
“Until now, we have always had to adapt to the limits of technology and conform the way we work with computers to a set of arbitrary conventions and procedures.
With natural user interfaces (NUI), computing devices will adapt to our needs and preferences for the first time and humans will begin to use technology in whatever way is most comfortable and natural for us.”
With that, he captures the essential advantage of natural user interfaces over previous ones: they exploit the intrinsic skills we have acquired through a lifetime of living in the ‘real world’. By doing so, they reduce the learning curve and cognitive load, and minimise distraction. Ultimately, NUIs let us obtain digital information and complete tasks more quickly and seamlessly.
Getting started with NUIs
We’re already seeing great examples of digital products whose interfaces are designed in more natural ways.
One example: augmented reality is changing the way we assemble and repair things. With technologies like Google Glass, the HoloLens or even your iPhone, instructions can now be projected on top of the actual object we are assembling or repairing, allowing us to focus on the task at hand instead of constantly shifting our attention between the object and the instruction manual. Just take a look at the prototype we made for Telenet, in which we used AR to help people install their Digicorder.
While NUIs will probably one day be seen as one of the radical shifts in how we interact with technology, making your current product or interface more natural should be a gradual process rather than a radical one. Simply adopting one of the enabling technologies, like voice, won’t make your interface natural. The technology is a means to an end, not the goal itself.
When applied well, NUIs make our interactions with technology fundamentally easier, letting us spend less time navigating interfaces and more time actually getting things done.
And if we succeed, maybe one day humans and computers will speak the same language.