Natural User Interfaces (NUIs) still lack a formal definition. The term can appear as a synonym for intuitive, easy, or gesture-based interaction. So rather than trying to define one, I will list some characteristics that are usually associated with them:
If you are new to the term, here is a cool video that shows some examples and attempts to define some concepts:
In video games, the use of more natural interfaces helped people intimidated by technology get the courage to jump in and play. Even if most hard-core gamers still prefer classic game controllers, these new interaction techniques really helped expand the market for video game consoles. Bowling with the Wiimote feels more natural and simple than using a combination of buttons and analog sticks. The Nintendo Wii, PlayStation Move, and Microsoft Kinect are successful technologies that opened the path to the development of more natural interactions in video games.
It is hard to talk about an absolutely natural interface, but the previous example shows that one interface can feel more natural than another. The extent of this feeling depends not only on the technology but also on the activity being performed and the actual user of the interface. This further complicates the issue, since NUIs rely on previous background knowledge about how things are supposed to work. These assumptions are ultimately tied to each user's culture and previous experience.
With a few exceptions, most user interfaces are a cool but not particularly efficient way to perform a task. If we look closely, we can see that some of the actions performed are far from natural, even in the coolest, most complex hand/head/spirit tracking setup. In fact, a natural interaction does not require a complex multiple-degrees-of-freedom setup. Its quality emerges from the perfect coupling between the hardware, the software, the task, and the user.
More on that later. Or not.
I would like to talk a little about my new area of study. Coming from computer graphics research, it is going to be very fun and challenging to learn the diverse techniques and subtleties of the field.
Human-Computer Interaction (HCI) deals with the interface between computers and people. It seeks to understand how humans behave when performing actions and interpreting computer output. At the same time, it tries to develop new technologies and paradigms that can make this communication better.
The difficulties arise because we do not understand well how our brain and perception work. What are their limits and capabilities? Computers operate in a way that is quite different from us, and even the most powerful computers pale in comparison to the small prodigies of the human mind. However, computers are very useful for many specific tasks, and that is the motivation behind most research in computing.
Researchers in HCI generally employ a multidisciplinary approach when trying to devise better interaction models. One can get support from disciplines such as cognitive science, ergonomics, graphic design, engineering, ethnography, etc. Designs are also based on different metaphors and ideas. Windowed interfaces, for example, are available on most computers. They are based on discrete visualization areas that you can move around, open, and close, each one associated with a specific document or piece of software. Windows like these only exist on the display and do not really represent anything physical. They are just abstractions that work really well. Depending on the display, you can use a mouse, keyboard, or touch screen to interact.
Tangible User Interfaces, on the other hand, try to associate information and actions with physical entities. In this way information can have a presence in the real world: you can grab it, move it, and combine it with your hands. This mapping need not be static and may change depending on the context. There are a couple of really interesting experiments with this idea.
Interfaces that are instinctive and employ mappings well fitted to the task or situation are called Natural User Interfaces. This approach generally makes use of more advanced techniques, such as gesture recognition and head tracking, to allow direct manipulation of virtual elements. This is a way to enable the use of our real-world knowledge and expectations when interacting with the computer.
After some time away, I am back to writing. I am now a Computer Science PhD student at the Virginia Tech Graduate School. I will be doing research at the Center for Human-Computer Interaction under the supervision of Dr. Doug Bowman.
I would like to take this opportunity to express my gratitude to everyone who supported me along the way!
The city is very nice and the people very friendly. I will inevitably be posting more about HCI, but expect a couple of different things from time to time.
Virginia Tech Graduate School, Blacksburg.