A GUI depends mainly on hand-eye coordination, which limits the user’s ability to interact with multiple devices simultaneously.
An NUI is gesture-based and can be driven by a variety of inputs such as eye movement, hand gestures, and facial expressions – virtually any measurable variable.
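As a rough illustration of that configurability, the sketch below routes several natural-input channels to a single set of actions. It uses no real NUI framework; every channel, value, and action name is an invented assumption.

```python
# Minimal sketch (not a real NUI framework): routing several natural-input
# channels -- gaze, hand gestures, facial expression -- to the same actions.
# All channel names, values, and actions are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass(frozen=True)
class InputEvent:
    channel: str   # e.g. "gaze", "gesture", "expression"
    value: str     # e.g. "look_left", "swipe_up", "smile"

# One action table serves every channel; adding a new modality is just
# another (channel, value) -> action binding, not new training for the user.
bindings: Dict[Tuple[str, str], Callable[[], None]] = {
    ("gaze", "look_left"):   lambda: print("previous item"),
    ("gesture", "swipe_up"): lambda: print("scroll up"),
    ("expression", "smile"): lambda: print("confirm"),
}

def dispatch(event: InputEvent) -> None:
    action = bindings.get((event.channel, event.value))
    if action:
        action()

dispatch(InputEvent("gesture", "swipe_up"))   # -> scroll up
```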
A GUI requires the user to pay attention and react to what is on the screen. With the growing number of “smart” devices, the attention time available per interface is decreasing by the day.
Next-generation interface technologies allow the user to look away from the screen; instead, the machine watches the user to understand what to do.
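A toy sketch of this inversion, under invented names (no real eye-tracking API is used): the machine observes whether the user is looking at the screen and reacts, rather than demanding the user’s constant attention.

```python
# Sketch only: the "sensor" readings and dict layout are assumptions,
# standing in for whatever gaze-tracking signal a real device provides.
def user_is_looking(sample: dict) -> bool:
    return sample.get("gaze_on_screen", False)

def update_playback(sample: dict, playing: bool) -> bool:
    looking = user_is_looking(sample)
    if playing and not looking:
        print("pause: user looked away")
        return False
    if not playing and looking:
        print("resume: user looked back")
        return True
    return playing

state = True
readings = [{"gaze_on_screen": True}, {"gaze_on_screen": False}, {"gaze_on_screen": True}]
for reading in readings:
    state = update_playback(reading, state)
```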
There is always a learning curve for interacting with a GUI, and its steepness varies with factors such as the user’s past experience with similar interfaces, how intuitive the design is, demographic factors, and so on.
Next-generation interface technologies are designed to understand natural gestures, so the onus of learning falls on the machine, not the user.
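The sketch below makes that shift concrete with a toy nearest-centroid classifier: the machine “learns” a user’s gestures from a few examples rather than asking the user to memorise fixed commands. The feature vectors and gesture labels are made up for illustration.

```python
# Illustrative sketch only: a minimal gesture classifier trained from
# example points. Real systems use far richer features and models.
from statistics import mean
from math import dist

samples = {
    "swipe_right": [(0.9, 0.1), (0.8, 0.2), (0.95, 0.05)],
    "swipe_up":    [(0.1, 0.9), (0.2, 0.8), (0.05, 0.95)],
}

# The machine does the learning: one centroid per gesture label.
centroids = {
    label: tuple(mean(axis) for axis in zip(*points))
    for label, points in samples.items()
}

def classify(observation):
    return min(centroids, key=lambda label: dist(observation, centroids[label]))

print(classify((0.85, 0.15)))   # -> swipe_right
```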
GUIs need to be designed with a specific set of end users in mind. Making a traditional GUI more intuitive and user-friendly requires it to be more data-dependent and graphics-rich, which in turn compromises system performance. Moreover, a GUI that works well for one group of users (say, beginners) may not work well for another (say, advanced users).
Next-generation interface technologies are designed around humans’ natural gestures and respond to common, user-independent variables. For example, a VR headset today does not require its user to undergo training; it is built to respond automatically to natural movements such as head and eye movement.
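A minimal sketch of that idea, not a real headset SDK: each frame, the rendered view simply follows the reported head pose, so there is nothing for the user to learn or configure. The pose values below are hard-coded stand-ins for an IMU/tracking query.

```python
# Sketch under stated assumptions: read_head_pose() is a placeholder for a
# real tracking API, and its returned values are invented.
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # degrees, left/right
    pitch: float  # degrees, up/down

def read_head_pose(frame: int) -> HeadPose:
    return HeadPose(yaw=frame * 2.0, pitch=frame * 0.5)

def render(pose: HeadPose) -> None:
    print(f"render view at yaw={pose.yaw:.1f}, pitch={pose.pitch:.1f}")

# The whole "interaction model": look around, and the scene follows.
for frame in range(3):
    render(read_head_pose(frame))
```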
GUI vs. NUI at a glance:

| GUI | Next-generation interface (NUI) |
| --- | --- |
| Hand-eye coordination | Gesture-based |
| User input | Machine input |
| Learning curve | Machine learning |
| Data dependent | Responds to the user |