It is said “NUI is a user interface goal, not the interface itself; NUI is an intuitive interface” (from Wikipedia). “Natural” is such an ambiguous adjective that this definition doesn’t tell us much.
It is said “Voice and gesture are NUI, because they are used in real life”. The input modality itself is not necessarily natural: it may carry an unnatural UI, for example a special gesture language, or voice commands for selecting items from a menu. But the reasoning behind the claim is a good hint.
It is said “NUI is not WIMP (Window, Icon, Menu, Pointing device); it is the post-GUI paradigm”. Contrasting NUI with the current mainstream UI is hindsight, and it doesn’t point in a specific direction.
It is said “Touch is NUI, because users directly manipulate objects. The content is the interface (Wigdor)”. This definition hints at a method. But a touch UI can still be unnatural.
It is said “NUI is invisible”. The definition looks mysterious, but it broadens the implications of the “direct manipulation” definition.
Steve Mann’s Reality-Based Interface (RBI), or AR/VR (for example, clickable world objects), is NUI because it extends the reality that humans are accustomed to. An RBI can still be unnatural, but it suggests we should think about interaction objects beyond the GUI.
It is said “NUI doesn’t require training. NUI is intuitive. The machine adapts to the human in NUI, while the human adapts to the machine in a traditional UI”. Those are attributes or results of NUI, rather than a definition.
Get back to the origin. An ideal NUI would let a person interact with a machine as if he/she were interacting with humans or the world in everyday life:
• Method: no artifacts but the eyes, hands, and mouth
• Objects: not WIMP but real-world objects = humans (communication with remote people), perceptive (Intel terminology) or robotized devices
(Filed as JPA 2017-204737)