Over the past five years, the world has produced many innovative ideas about interaction and user interfaces. People around the globe are now familiar with AR, VR, voice commands, gestures, and so forth. With the rise of artificial intelligence (AI), applications and devices are becoming increasingly intelligent, capable of predicting human needs and behavior. In this article, I will sketch the potential user characteristics and interaction styles of the next decade.
The portrait of future users
To imagine future users, I look at my son, who is six years old now. What will he be like in ten years, when he is sixteen? Children aged six to eight today already use electronic devices such as smartphones and tablets, and they learn quite fast. A typical user of this generation (my boy, for example) uses multiple senses to interact with applications: voice, touch, visual tools, and movement in daily activities. For example, my kid is very familiar with Google Assistant and Amazon Alexa. Hence, I believe this generation will feel comfortable with voice commands, hand-gesture commands, and human-robot interaction (HRI) in the near future.
Moreover, the portrait of users in the next ten years will broaden from the young generation to people aged 60 to 70 (as today's users grow older). Users will keep multitasking; even though they cannot multitask like a machine, they still prefer to switch back and forth between tasks. As a result, demand for adaptive and integrated devices will increase. Users might want glasses that let them watch streaming video, listen to music, and answer phone calls using eye movement. People with disabilities may even be able to control robotic arms or legs with brain signals.
Predicting the interaction styles for user interfaces
With these future users in mind, application and device interfaces will also need to improve massively. Already, most users skip the computer mouse and use a trackpad or touch screen instead. The more users multitask, the less attention they can afford to spend on controlling a device. With that in mind, I believe future interaction styles will focus on a few critical trends, as below.
The voice interface
As modern users get busier, they prefer interaction styles that let them deliver commands freely. Users will turn the TV or the air conditioning on or off with just a voice command. Mobile and web applications are steadily adding voice search, and it will work smoothly in the coming years. The voice interface is not a technological innovation in itself, but the blooming AI industry has finally made it practical to apply.
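Under the hood, a voice command ultimately reduces to mapping transcribed text to a device action. Below is a toy sketch in Python of that last step, assuming an upstream speech recognizer has already turned audio into text; the device names and matching rules are invented for illustration, not any vendor's API.

```python
# Toy intent matcher for transcribed voice commands.
# Assumes a speech recognizer has already produced a text transcript;
# the devices and phrases below are illustrative only.

def parse_command(transcript: str) -> dict:
    """Map a transcript like 'turn off the TV' to a device action."""
    text = transcript.lower()
    devices = {"tv": "tv", "air conditioning": "ac", "music": "speaker"}
    if "turn on" in text or "play" in text:
        action = "on"
    elif "turn off" in text or "stop" in text:
        action = "off"
    else:
        action = None
    for phrase, device in devices.items():
        if phrase in text:
            return {"device": device, "action": action}
    return {"device": None, "action": None}

print(parse_command("Please turn off the TV"))
# {'device': 'tv', 'action': 'off'}
```

A real system would replace the keyword matching with a trained intent classifier, but the overall pipeline (audio, transcript, intent, action) stays the same.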
Robot and automation interaction
Robotics technology is growing in every area of our lives. From medical care to customer support, it has become a key business strategy because of its quick response to clients. For example, chatbots have been a big trend over the last three years, though they support only elementary conversation. In the next ten or twenty years, however, we will have more than enough data to train computers with deep learning and make these programs far more powerful and capable. We already have vacuum cleaners and rice cookers; I believe we will somehow be able to 'make a full week plan' for those devices in 2025.
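The 'elementary conversation' of today's chatbots is often little more than keyword matching against canned replies. A minimal, hypothetical sketch in Python (the rules and answers are invented for illustration):

```python
# Minimal rule-based chatbot: keyword matching, no learning involved.
# All rules and replies below are invented for illustration.

RULES = [
    (("hello", "greetings"), "Hello! How can I help you today?"),
    (("price", "cost"), "Our basic plan starts at $10 per month."),
    (("human", "agent"), "Connecting you to a support agent..."),
]
DEFAULT = "Sorry, I did not understand. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first canned answer whose keywords match the message."""
    text = message.lower()
    for keywords, answer in RULES:
        if any(keyword in text for keyword in keywords):
            return answer
    return DEFAULT

print(reply("How much does it cost?"))
# Our basic plan starts at $10 per month.
```

Deep-learning chatbots replace this fixed rule table with a model trained on conversation data, which is why the amount of available data matters so much for the next decade.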
Interaction through human senses
Facebook wanted to patent a way of silently turning on our cameras to track our facial expressions (3), and the technique might become popular in the future. Imagine that Netflix, a music app, a travel application, or even your juice maker could use your facial expression today to suggest what you should drink, where you should go, and what you should watch and listen to. This is how the world might surround and serve you, intent on satisfying you as much as possible.
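As a thought experiment (not any real product's behavior), emotion-driven suggestion could be as simple as a lookup from a detected facial expression to content, assuming an upstream model that classifies the expression; the emotion labels and suggestions here are invented:

```python
# Toy emotion-to-suggestion lookup. Assumes an upstream face-analysis
# model has already classified the user's expression; labels and
# suggestions are invented for illustration.

SUGGESTIONS = {
    "happy": {"watch": "a comedy special", "drink": "a fruit smoothie"},
    "tired": {"watch": "a calm nature documentary", "drink": "green tea"},
    "sad":   {"watch": "a feel-good movie", "drink": "hot chocolate"},
}

def suggest(emotion: str) -> dict:
    """Return content suggestions for a detected emotion, with a fallback."""
    return SUGGESTIONS.get(emotion, {"watch": "trending titles", "drink": "water"})

print(suggest("tired")["watch"])
# a calm nature documentary
```

The hard part, of course, is the face-analysis model itself; the suggestion step is trivial once an emotion label is available.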
To conclude, future interaction styles may focus on flexibility and adaptability based on the user's profile or context. Users will interact with the things we design in whatever way is most convenient for them. Hands will not be the only means of interaction; other senses and movements will also serve as input sources for applications.