[[File:Augmented Reality Studierstube.jpg|thumb|Users wearing headsets and interacting in 3D]]
'''3D human-computer interaction''' (3D HCI) refers to the methods and technologies that allow users to interact with computers in a three-dimensional space.
  
3D interaction is human-computer interaction in which the user's tasks are performed directly in a 3D spatial context.<ref name="g747">{{cite web | last=Bowman | first=Doug A. | title=3D User Interfaces | website=The Interaction Design Foundation | date=2023-08-12 | url=https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/3d-user-interfaces | access-date=2024-12-21}}</ref>
 
Full-duplex 3D HCI requires a [[3D input device]] and a [[3D display]]. The 3D display can be autostereoscopic, lightfield, or holographic.
 
It can be summed up as a situation where a person can move something around in 3D and the computer knows where it is, and the computer can move something around in 3D and the person knows where it is.
 
A main area of interest is [[3D direct interaction]].
 
It can involve [[solid view display]]s.
__NOTOC__
==Control peripherals==
* Motion Controllers: Devices like VR controllers that detect movement in three dimensions using IMUs and/or positional tracking, whether optical or otherwise.
 
* Gesture Recognition: Cameras and sensors (e.g., [[Microsoft Kinect]], [[Leap Motion]]) that capture body movements and hand gestures.
 
 
* Haptic Feedback: Systems that provide tactile feedback to the user, enhancing the sense of touch in a virtual environment.
 
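Sensor fusion is what makes motion controllers usable: the IMU is responsive but its integrated estimate drifts, while positional tracking is drift-free but noisier. As an illustrative sketch only (not tied to any particular controller or library), a complementary filter blends the two sources for one tilt axis:

```python
def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
    """Blend a gyro rate (responsive, but drifts when integrated) with a
    drift-free reference angle (e.g. from optical tracking). alpha sets
    how much the estimate trusts the gyro over the reference."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * ref_angle

# Toy run: the gyro reports no rotation while the reference sensor
# insists the controller is tilted 10 degrees; the estimate converges
# toward the reference instead of drifting.
angle = 0.0
for _ in range(300):
    angle = complementary_filter(angle, gyro_rate=0.0, ref_angle=10.0, dt=0.01)
```

Real systems typically fuse full 3D orientation with a Kalman or Madgwick-style filter; the single-axis blend above only illustrates the idea.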
  
==Visual peripherals==
* [[solid view display]]s, including [[biscopic display]]s and [[holographic display]]s
* [[VR headset]]s
  
 
==Interaction techniques==
 
 
* Manipulation of 3D Objects: Techniques for selecting, rotating, scaling, and otherwise interacting with virtual objects in a three-dimensional space.
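
The manipulations above reduce to composing homogeneous 4×4 transforms. A minimal, toolkit-agnostic sketch (function names here are illustrative, not from any particular 3D API):

```python
import numpy as np

def translate(tx, ty, tz):
    # Homogeneous translation: last column holds the offset.
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotate_z(theta):
    # Rotation about the z-axis by theta radians.
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

def scale(s):
    # Uniform scale about the origin.
    return np.diag([s, s, s, 1.0])

# Compose right-to-left: scale, then rotate 90 degrees, then translate.
M = translate(1, 1, 1) @ rotate_z(np.pi / 2) @ scale(2.0)
p = M @ np.array([1.0, 0.0, 0.0, 1.0])  # a point on the x-axis
```

The point (1, 0, 0) is scaled to (2, 0, 0), rotated onto the y-axis, then shifted by (1, 1, 1), ending at (1, 3, 1); selection techniques such as ray casting work in the same coordinate framework.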
 
==History==
3D human-computer interaction succeeds traditional 2D interaction (using a mouse, keyboard, or touch screen).
==Uses==
The following are uses of a 3D computer interface consisting of [[3D control peripheral]]s and [[3D display]]s.
* [[Molecular visualization]]
* [[Building information modeling]] (BIM)
* [[CAD]]
* [[Multiphysics simulation]]
* Military planning and communication
* 3D mathematical visualization software
  
 
==References==
 
 
<references />
 
[[Category:Augmented reality]]
[[Category:Virtual reality]]
[[Category:Human-computer interaction]]
