A human-robot intelligent user interface is designed to improve communication between humans and computers; researchers, designers, and developers work on such interfaces to enhance the flexibility, usability, and power of human-robot interaction for all users. Human-robot interaction consists of an input facility, an environment display, intuitive command and reaction, and the architecture of the interface program. For example, an intelligent user interface system for human-robot interaction can be built from ultrasonic sensors, a position sensitive detector (PSD), and DC motors.
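As a rough illustration, the sketch below ties such components together in a single sense-decide-act loop. The sensor and motor classes are hypothetical stand-ins with stubbed readings, not a real device API.

```python
# A hypothetical sense-decide-act loop for a simple human-robot interface.
# The sensor and motor classes are illustrative stubs, not a real driver API.

class UltrasonicSensor:
    def read_distance_m(self) -> float:
        return 1.2  # stubbed range to the nearest obstacle, in meters

class PSDSensor:
    def read_offset_m(self) -> float:
        return -0.1  # stubbed lateral offset of a tracked target, in meters

class DCMotorPair:
    def drive(self, left: float, right: float) -> None:
        print(f"motor command: left={left:+.2f}, right={right:+.2f}")

def control_step(us: UltrasonicSensor, psd: PSDSensor, motors: DCMotorPair) -> None:
    """One pass of the loop: sense the environment, decide, drive the motors."""
    if us.read_distance_m() < 0.3:
        motors.drive(0.0, 0.0)  # obstacle too close: stop as a safety reaction
        return
    steer = max(-1.0, min(1.0, 2.0 * psd.read_offset_m()))
    motors.drive(0.5 - steer, 0.5 + steer)  # steer toward the tracked target

control_step(UltrasonicSensor(), PSDSensor(), DCMotorPair())
```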
Part of this interaction, and of the research into intelligent user interfaces for robotics, aims to equip robots with the intelligence required to actively support humans in completing tasks. Much of this research focuses on developing intuitive and "natural" interfaces and interactions that allow for tool handover in a work environment, such as using robotics in an operating theater, on a factory floor, or in extreme working environments, such as a nuclear power plant.
Human-robot intelligent user interface is a field of research and development built on research into intelligent user interfaces, which fuses the fields of human-computer interaction and artificial intelligence to create interfaces that users perceive as "intelligent." Human-computer interaction is concerned with solutions for the usage of interfaces, techniques, and artifacts; AI research focuses on techniques for automation that aid users in executing various tasks. For an interface to seem "intelligent," that perception usually rests on the type of interaction it offers, on the automation of the interaction or its output, or on the interface and interaction as a whole.
As in many related intelligent user interface use cases, human-robot interaction faces challenges such as safety, transparency, trust, gesture and speech recognition, and user experience in helping humans collaborate with robots across industries and use cases. Research into human-robot intelligent user interfaces seeks to solve or ease these pain points and make human-robot interaction easier.
Whether the robotic design or interface is for delivery robots, self-driving cars, or autopilot systems, there needs to be an interface through which the human can control the machine and receive feedback, meaning users need to learn their interface. The more complex the interface, the harder it is for someone to use the robotic system without proper training.
One example of an interface that has been evolving for a long time is aviation, with aviation systems being among the oldest human-machine interfaces. Since the 1950s, aviation systems have been slowly and increasingly automated. Two examples are the systems developed by Airbus and Boeing. Airbus developed a hard autopilot system that defines and controls entire operations and gives the pilot no ability to take control of the aircraft, whereas Boeing built a flexible system in which the pilot can take control at any time. The flexible system has allowed flight crews to save their planes from emergencies in unexpected situations.
This suggests there needs to be some flexibility in robotics interfaces to allow for human control in a given scenario. Similarly, any interface needs to balance four key dimensions in its design: user, application, interface, and technology. This can include designing an interface that is expressive and able to interact with someone using modalities such as speech, gestures, or symbolic expressions.
The types of interfaces that can be designed also depend on the type of interaction expected or required by a robotic system and should be designed with the goals of allowing intuitive control of the robotic system and creating a balance between automation and user control. These interfaces can use touch, voice, or gesture recognition. For example, a robot supervising operations in a hospital needs to be able to react to a nurse, and react quickly, to perform emergency care.
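As a rough illustration of balancing automation with user control, the sketch below routes touch, voice, and gesture input to a single command handler and lets direct user commands preempt automated actions. The modality names, priorities, and commands are invented for illustration.

```python
# Hypothetical multimodal dispatcher: routes touch, voice, and gesture input
# to one command handler, with user commands preempting autonomous behavior.
from dataclasses import dataclass, field
from typing import Callable
import heapq

@dataclass(order=True)
class Command:
    priority: int                      # lower value is handled first
    action: str = field(compare=False)

class Dispatcher:
    def __init__(self) -> None:
        self._queue: list[Command] = []

    def submit(self, modality: str, action: str) -> None:
        # Direct user input, from any modality, outranks autonomous plans.
        priority = 0 if modality in {"touch", "voice", "gesture"} else 1
        heapq.heappush(self._queue, Command(priority, action))

    def run(self, execute: Callable[[str], None]) -> None:
        while self._queue:
            execute(heapq.heappop(self._queue).action)

d = Dispatcher()
d.submit("autonomy", "resume patrol route")
d.submit("voice", "stop")                  # e.g., a nurse's spoken emergency command
d.run(lambda a: print("executing:", a))    # "stop" runs before "resume patrol route"
```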
Robots, whether mobile or stationary, acting in environments close to humans, have the primary purpose of supporting humans in various settings, such as at work, at home, or for leisure. The type of support expected, as noted above, will define the interface and the expected type of interaction.
The following are two common and contrasting interfaces often used for an assistive manipulation robot system:
- graphical user interface (GUI), in which the user operates entirely through a touchscreen that serves as a representation of the robot's environment
- tangible user interface (TUI), which makes use of devices in the real world—such as laser pointers, projectors, and camera systems—to enable augmented reality
Both systems are designed to allow an operator to use a robotic system in an open environment.
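One way to read this contrast is that both interfaces expose the same operator actions through different front ends. The following minimal sketch of that shared abstraction is hypothetical, not a description of any specific assistive manipulation system.

```python
# Hypothetical common operator abstraction with GUI and TUI front ends.
from abc import ABC, abstractmethod

class OperatorInterface(ABC):
    @abstractmethod
    def select_object(self) -> str:
        """Return the identifier of the object the operator selected."""

class TouchscreenGUI(OperatorInterface):
    def select_object(self) -> str:
        return "cup_3"  # e.g., the icon the operator tapped on the touchscreen

class LaserPointerTUI(OperatorInterface):
    def select_object(self) -> str:
        return "cup_3"  # e.g., the object the camera sees the laser dot on

def pick(interface: OperatorInterface) -> None:
    # The manipulation routine is the same regardless of front end.
    print("robot picking:", interface.select_object())

pick(TouchscreenGUI())
pick(LaserPointerTUI())
```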
As robots are increasingly integrated with artificial intelligence or become part of an internet of things (IoT) environment, they can play a greater role in individuals' lives through automation. However, how users interact with these devices can vary. For example, a system can have a human-robot interface with a tablet, also known as a "robot with a tablet" system, in which a human is given increased control through a tablet-based interface; this does not prevent the robotic system from having additional interfaces, such as speech or expressive systems. A contrasting type, the "robot only" interface, does not provide the user with a separate device; it can offer the same functionality as a tablet-equipped robotic system and can be more cost-effective, although its interface, which will usually be vocal, has to be more robust to interact properly with users.
One development in robot-user interfaces is a method for predicting user intentions as part of the interface, using geometric objects that partition an input space to give a robotic system a means of discriminating individual objects and clustering objects for hierarchical tasks. This could be developed in combination with robots designed to mimic human conversational gaze behavior, helping robots collaborate with humans and helping humans feel that the robotic system understands the task described through a conversational user interface.
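A minimal sketch of the partitioning idea follows, under the assumption that the geometric objects act as centroids inducing a nearest-neighbor (Voronoi) partition of a 2D input space; the object names, coordinates, and grouping threshold are invented for illustration.

```python
# Hypothetical sketch: object centroids partition a 2D input space (a Voronoi
# partition), so a pointing input is read as intent toward the nearest object,
# and nearby objects are grouped into clusters for hierarchical tasks.
import math

objects = {"cup": (0.2, 0.4), "plate": (0.3, 0.5), "drill": (0.9, 0.1)}

def predict_intent(point: tuple[float, float]) -> str:
    """Map a user input point to the object whose region contains it."""
    return min(objects, key=lambda name: math.dist(point, objects[name]))

def cluster(threshold: float = 0.2) -> list[set[str]]:
    """Greedy single-link grouping: objects within threshold of a cluster join it."""
    clusters: list[set[str]] = []
    for name, pos in objects.items():
        home = next((c for c in clusters
                     if any(math.dist(pos, objects[m]) < threshold for m in c)), None)
        if home is not None:
            home.add(name)
        else:
            clusters.append({name})
    return clusters

print(predict_intent((0.22, 0.42)))  # -> "cup" (nearest centroid wins)
print(cluster())                     # -> [{"cup", "plate"}, {"drill"}]
```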
A social robot, together with an understanding of how humans and robots can interact to accomplish a specific task, can create a more sophisticated robotic platform that could become integral to human societies. This is particularly because a social robot needs to learn the preferences and capabilities of the people it interacts with so it can adapt its behaviors for more efficient and friendly interaction.
Advances in human-computer interaction technologies have worked to improve human-robot interaction, allowing users to interact with robots through natural communication or speech, and voice-controllable intelligent user interfaces make these systems easier for humans to use. Studies of these systems have found that users prefer voice control to manual control: subjects with high spatial reasoning tend to prefer voice control, while those with lower spatial reasoning prefer manual control, and the overall effect of spatial reasoning was shown to be less pronounced with voice control than with manual control.
Because of the complexity of human-robot interactions, part of developing a user interface can include the development of spatial limitations, or intelligent environments. This could be as simple as geofencing, which dictates a robotic system's functionality inside and outside a designated area, and can increase in complexity to systems capable of orienting robots to roads, airspace, or spaces in cities and buildings. This can be part of the interface for order-picking robots capable of navigating a warehouse based on directions, increasing workers' capability to collaborate with robots. It could also extend to delivery robots and lanes reserved for self-driving cars, and be used for experiments related to human-robot and human-machine interaction in developing smart cities.
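A minimal sketch of the geofencing idea follows, using a standard ray-casting point-in-polygon test with an invented fence polygon and positions; it is not code from any particular system.

```python
# Hypothetical geofence: a polygon marks the area where the robot may operate;
# a standard ray-casting test decides whether a position is inside it.

Point = tuple[float, float]

def inside(fence: list[Point], p: Point) -> bool:
    """Ray-casting point-in-polygon test: count edge crossings to p's left."""
    x, y = p
    hit = False
    for (x1, y1), (x2, y2) in zip(fence, fence[1:] + fence[:1]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

warehouse = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]

def allowed_speed(position: Point) -> float:
    # Full speed inside the fenced area, halt outside it.
    return 1.0 if inside(warehouse, position) else 0.0

print(allowed_speed((4.0, 3.0)))   # 1.0 -> inside the fence
print(allowed_speed((12.0, 3.0)))  # 0.0 -> outside, robot must stop
```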