
Mixed reality is the UI of the future


The past few decades of computing and the internet have been confined to flat 2D screens on laptops, tablets, and smartphones. With the rise of mixed reality, however, society will soon move beyond the screen: the user interface will no longer be a flat piece of glass but the physical 3D spaces in which we live and move.
As this happens, user interface design will have to change dramatically to accommodate a mixed reality internet that overlays our real world. I’ve interviewed four AR and VR designers who share their thoughts and predictions on how this evolution in UI design will unfold.
Since its inception, human-computer interaction has been defined by the capabilities and constraints of hardware. This includes keyboards, mice, smartphones, and tablets, and all the gestures associated with these devices. Although a 3D spatial interface will result in an entirely new means of interacting with technology, does this mean we will completely abandon the best practices of 2D interfaces?
Val Head, Design Evangelist, UX Innovation at Adobe, believes that mixed reality will draw from the best practices of both 2D and 3D interface design. She explains, “The mixed reality user interfaces of the future will draw upon influences from both old and new techniques. For example, in VR applications like Tilt Brush, users hold a 3D palette in their hand that can be rotated with the swipe of a joystick. Each menu on the palette is essentially 2D and, in this sense, is familiar from how we currently interact with menus. However, it’s possible to detach these 2D panels from the palette and place them in space, so that they float in front of the user and remain easily accessible. In this way, 2D elements of human-computer interaction can be ported into a 3D environment and still be successful and intuitive to users.”
Head applies this concept to interface animation in mixed reality environments: “Although animation in a 3D spatial canvas will be different from animation on 2D screens, it serves the same intention: to give users feedback and help them keep track of where to go. But when it comes to moving objects, we will have to be more accurate about physicality, weight, space, and timing. Motion should feel more realistic because interactions will be occurring in real space and time.”
Mixed reality will also have a substantial social impact by bringing fictional content to life in the real world. This was best demonstrated by the success of Pokémon Go, in which millions of people walked miles per day around their cities to capture digital monsters. An entirely new type of location-specific storytelling and entertainment will thus become possible in our daily lives.
Mohen Leo is the Director of Content and Platform Strategy at ILMxLAB. He expands on the idea that “mixed reality can be something that people casually consume throughout the day, in the same way that we consume content on our phones. However, these experiences will feel more like immersive narratives that come in and out of your life as you sit on the subway or go to the supermarket, with characters popping into your world and interacting with you and the environment.”
Leo addresses a challenge that creators will face in making mixed reality experiences truly immersive: “Audio interactions will be particularly crucial to mixed reality experiences. Since we can no longer tap or swipe a screen, voice recognition will become an increasingly important means of communication and user input. For example, if you are conversing with a mixed reality Darth Vader and the AI can’t understand what you are saying, it will immediately pull you out of total immersion. After all, you can’t politely ask Darth Vader to repeat his point. Unlike a traditional movie, in which viewers suspend their disbelief and pretend that the narrative on the screen is real, in immersive experiences the margin of error goes to zero.”
As humans, we’ve evolved to process 3D spaces and build mental models of them. Mixed reality plays to these natural tendencies and enables a more intuitive means of interacting with technology, both in our personal lives and in the enterprise.
Alfredo Ruiz is the Design Lead at IBM, where he focuses on the user experience for AR data visualization and exploration. He expands on how the enterprise will play a key role in making mixed reality mainstream for consumers: “Though data can be depicted on flat screens, being able to manipulate data in augmented 3D space makes it easier and more efficient to detect patterns. When a data scientist using our technology is literally in the middle of their data, they can walk around it and explore it from different angles. When you are dealing with enormous masses of information, this can make a huge difference in creating insights.”
Ruiz acknowledges the obstacles: “One challenge we face is that there aren’t currently many enterprise applications for AR. This is due to the immature state of the technology as well as a lack of understanding of how AR can seamlessly blend into enterprise workflows. However, once the technology catches up, the user experience will ultimately become natural and intuitive and spread into the mass market.”
The popular Netflix show “Black Mirror” reflects a prominent concern of our time: privacy, security, and ethics in the age of ubiquitous technology. Many of these ethical questions remain unresolved in 2018. How, then, will we live in a society where people walk around with cameras attached to their faces, constantly scanning the environment?
Rose Ann Haft is the CEO of Lumenora, which makes hardware for virtual and augmented reality. She explains: “The impact on humans’ lives and their desire for privacy was not considered when Google Glass was released, and the backlash came for good reason. These headsets are deeply powerful in that they can capture all of our behavior, both the good and the bad, in addition to supplementing our capabilities. This potential capturing of our behavior has numerous negative implications for HIPAA laws and probable cause, and it’s understandable that people would instinctively push back against those wearing these devices.”
