The future of interaction for application and device interfaces

Over the past five years, the world has seen many innovative ideas regarding interaction and user interfaces. People around the globe are now familiar with concepts such as AR, VR, voice commands, and gestures. With the rise of AI (Artificial Intelligence), applications and devices are becoming increasingly intelligent, capable of predicting human needs and behavior. In this article, I will sketch the potential user characteristics and interaction styles of the next decade.

A portrait of future users

To picture future users, I start by looking at my child, who is six years old now. How will he interact with technology in ten years, when he is sixteen? Nowadays, children from six to eight years old start using electronic devices such as smartphones and tablets, and they learn quite fast. A typical user of this generation (my boy, for example) uses multiple senses to interact with an application: voice, touch, visual tools, and movement in daily activities. For example, my kid is very familiar with Google Assistant and Amazon Alexa. Hence, I believe this generation will feel comfortable with voice commands, hand-movement commands, and human-robot interaction (HRI) in the near future.

Moreover, the portrait of users in the next ten years will broaden from the young generation to people aged 60 to 70 (as today's users grow older). Users will keep multitasking; even though they can't multitask like a machine, they still prefer to switch back and forth between things. As a result, the demand for adaptive and integrated devices will increase. Users might want glasses that let them watch streaming video, listen to music, and answer phone calls using eye movement. People with disabilities may also be able to control robotic arms or legs with brain signals.

Predicting the interaction styles of future user interfaces

Given these future users, application and device interfaces will also need to improve massively. Already, most users skip the computer mouse and use a trackpad or touch screen instead. The more they multitask, the less attention they can afford to spend controlling a device. With that in mind, I believe future interaction styles will focus on a few critical trends, as below.

The voice interface

As modern users get busier, they prefer interaction styles that let them deliver commands freely. Users will turn the TV or the air conditioning on or off with just a voice command. Mobile and web applications are steadily adding voice search, and it will work smoothly in the coming years. The voice interface is not a new technology in itself, but it has become practical to apply thanks to the blooming AI industry.
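Under the hood, a voice interface typically runs speech-to-text first and then maps the transcript to a device action. A minimal sketch of that second step, with hypothetical device names and phrases:

```python
# A toy command router: after speech-to-text, match the transcript
# against known phrases and return a (device, action) pair.
# All device names and phrases are made-up examples.

INTENTS = {
    "turn on the tv": ("tv", "on"),
    "turn off the tv": ("tv", "off"),
    "turn on the air conditioning": ("ac", "on"),
    "turn off the air conditioning": ("ac", "off"),
}

def route_command(transcript):
    """Return (device, action) for a recognized phrase, else None."""
    return INTENTS.get(transcript.strip().lower())
```

A real assistant would use fuzzy or ML-based intent matching rather than exact phrases, but the shape of the pipeline is the same.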

The robot and automation interaction

Robotics technology is growing into every area of our lives. From medical care to customer support, it has become a key business strategy because of the quick responses it offers clients. For example, the chatbot has been a big trend over the last three years, supporting elementary conversations. However, in the next ten or twenty years, we will have more than enough data to train computers with deep learning, making programs more powerful and more brilliant. We already have vacuum cleaners and rice cookers, and I believe we will somehow be able to 'make a full week plan' for those devices by 2025.
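The "elementary conversation" chatbots mentioned above are often just keyword rules with a fallback to a human agent. A minimal sketch, with made-up rules and replies:

```python
# A minimal rule-based support chatbot: scan the message for known
# keywords and answer; otherwise hand off. Rules and replies are
# hypothetical examples, not any real product's behavior.

RULES = [
    ("refund", "I can help with refunds. Could you share your order number?"),
    ("hours", "We are open 9am-6pm, Monday to Friday."),
    ("hello", "Hi! How can I help you today?"),
]

FALLBACK = "Sorry, I didn't get that. Let me connect you to a human agent."

def reply(message):
    text = message.lower()
    for keyword, answer in RULES:
        if keyword in text:
            return answer
    return FALLBACK
```

Deep-learning chatbots replace the keyword table with a learned model, which is where the extra training data the paragraph mentions comes in.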

Interaction through human senses

Facebook wanted to patent a way of silently turning on our camera to track our facial expressions (3), and this kind of sensing might become popular in the future. Imagine that Netflix, a music app, a travel application, or even your juice maker could use your facial expression today to suggest what you should drink, where you should go, and what you should watch or listen to.

This is how the world might surround you and serve you, with the intent of satisfying you the most.

To conclude, future interaction styles will likely focus on flexibility and adaptability based on the user's profile or context. Users will interact with the things we design in whatever way is most convenient for them. Hands will not be the only way to interact; other senses and movements will also serve as input sources for applications.

References

(1) https://uxdesign.cc/the-future-ui-trend-of-2025-14d9fdf6745

(2) https://careerfoundry.com/en/blog/ux-design/whats-the-future-of-ux-design/#how-will-ux-change

(3) https://www.inc.com/minda-zetlin/facebook-patents-spying-smartphone-camera-microphone-privacy.html

A few Agile notes I learned today

#agile #scaleagile

So, today I passed the LinkedIn assessment for Agile methodologies, thumbs up. There are a few things I learned along the way that I want to note down here.

In the Scaled Agile Framework, if you can only measure one thing, what should it be?

Note: If you only want to measure one thing in the Scaled Agile Framework (SAFe®) or other frameworks, it would be the "cost of delay" (that is the official answer, according to the SAFe® test). This is where I read about it.
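Cost of delay drives SAFe's WSJF (Weighted Shortest Job First) prioritization: cost of delay is estimated as user/business value plus time criticality plus risk reduction (relative points), and WSJF divides it by job size. A sketch with made-up feature names and numbers:

```python
# WSJF sketch: cost of delay = value + time criticality + risk reduction,
# WSJF = cost of delay / job size. Highest WSJF goes first.
# Feature names and point estimates below are hypothetical.

def wsjf(value, time_criticality, risk_reduction, job_size):
    cost_of_delay = value + time_criticality + risk_reduction
    return cost_of_delay / job_size

features = {
    "checkout-redesign": wsjf(8, 5, 3, 8),    # 16 / 8 = 2.0
    "search-autocomplete": wsjf(5, 3, 1, 2),  # 9 / 2 = 4.5
    "audit-logging": wsjf(3, 8, 8, 5),        # 19 / 5 = 3.8
}

# Sequence work by descending WSJF score.
ranked = sorted(features, key=features.get, reverse=True)
```

Note how the small job wins even though its cost of delay is the lowest: shorter jobs with decent cost of delay deliver value sooner.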

Three key differences between a task board and a kanban board

This is the article you should read

What is the Enabler's role in the Scaled Agile Framework?

An Enabler supports the activities needed to extend the Architectural Runway to provide future business functionality. These include exploration, architecture, infrastructure, and compliance. Enablers are captured in the various backlogs and occur throughout the Framework. This is further reading.

What are the most common ways to split a story?

[Image: spidr-technique-user-stories.png — the SPIDR technique for splitting user stories]

Further reading for story splitting is here:

What happens when a story is not accepted by the Product Owner?

Abandon it. In some cases, the story may be abandoned. This is a healthy response from a mature product owner, and it typically happens when the data does not provide the information the product owner was hoping to get. They see that what the team can realistically create, based on what's in the system, is not what they or the stakeholders had envisioned.

Further reading is here.

That’s it so far.

Hope it helps whenever you read this.

Cheers!