3D human-computer interaction
Revision as of 20:07, 29 August 2024
3D Human-Computer Interaction (3D HCI) refers to the methods and technologies that allow users to interact with computers in three-dimensional space. Unlike traditional 2D interaction (using a mouse, keyboard, or touch screen), 3D HCI leverages depth, volume, and spatial context to enhance user experiences.
The main area of interest is 3D direct interaction.
It can involve solid view displays.
Controlling peripherals
- Motion Controllers: Devices like VR controllers that detect movement in three dimensions using inertial measurement units (IMUs) and/or positional tracking, whether optical or otherwise.
- Gesture Recognition: Cameras and sensors (e.g., Microsoft Kinect, Leap Motion) that capture body movements and hand gestures.
- Haptic Feedback: Systems that provide tactile feedback to the user, enhancing the sense of touch in a virtual environment.
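As a rough sketch of how IMU-based tracking works, a complementary filter fuses the gyroscope's integrated angular rate (smooth but drift-prone) with the accelerometer's gravity-based tilt estimate (noisy but drift-free). The function below is a minimal illustration, not any particular device's implementation; all parameter names and the sample values are assumptions.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one pitch estimate.

    pitch: current pitch estimate in radians
    gyro_rate: angular velocity about the pitch axis (rad/s)
    accel_y, accel_z: accelerometer readings (any consistent unit)
    dt: time step in seconds
    alpha: weight given to the smooth but drift-prone gyro path
    """
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular velocity
    accel_pitch = math.atan2(accel_y, accel_z)   # tilt implied by gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# One simulated 10 ms step: device held level, gyro reading 0.1 rad/s
pitch = complementary_filter(0.0, 0.1, 0.0, 1.0, 0.01)
```

Real controllers run filters like this at hundreds of hertz and typically combine the result with optical positional tracking to correct long-term drift.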
Visual peripherals
- Solid view displays, including biscopic displays and holographic displays
- VR headsets
- Augmented Reality (AR): Overlaying digital information on the real world, typically through devices like AR glasses or smartphones.
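To overlay digital content on the real world, AR systems project 3D points in camera space onto 2D screen coordinates. A minimal pinhole-camera projection illustrates the idea; the focal length and principal point below are placeholder values, not those of any real display.

```python
def project_point(x, y, z, focal_length=800.0, cx=640.0, cy=360.0):
    """Project a camera-space point (metres) to pixel coordinates.

    Assumes the camera looks down +z, so z must be positive (in front
    of the camera). focal_length is expressed in pixels; (cx, cy) is
    the principal point, here the centre of a 1280x720 image.
    """
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = cx + focal_length * x / z   # horizontal pixel coordinate
    v = cy + focal_length * y / z   # vertical pixel coordinate
    return u, v

# A point 2 m ahead and 0.5 m to the right lands right of image centre
u, v = project_point(0.5, 0.0, 2.0)
```

Production AR frameworks add lens-distortion correction and continuously re-estimate the camera pose, but the projection step reduces to this mapping.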
Interaction techniques
- Manipulation of 3D Objects: Techniques for selecting, rotating, scaling, and otherwise interacting with virtual objects in a three-dimensional space.
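Rotation and scaling of virtual objects come down to applying 3D transforms to their vertices. A small sketch using Rodrigues' rotation formula shows the core operation; the helper names are illustrative, and a real engine would apply the equivalent matrix transform to whole meshes.

```python
import math

def rotate_about_axis(p, axis, angle):
    """Rotate point p about a unit-length axis by angle radians
    (Rodrigues' rotation formula)."""
    ax, ay, az = axis
    px, py, pz = p
    c, s = math.cos(angle), math.sin(angle)
    # cross product: axis x p
    crx = ay * pz - az * py
    cry = az * px - ax * pz
    crz = ax * py - ay * px
    dot = ax * px + ay * py + az * pz
    return (
        px * c + crx * s + ax * dot * (1 - c),
        py * c + cry * s + ay * dot * (1 - c),
        pz * c + crz * s + az * dot * (1 - c),
    )

def scale(p, factor):
    """Uniformly scale a point about the origin."""
    return tuple(coord * factor for coord in p)

# Rotate (1, 0, 0) by 90 degrees about the z axis, then double its size:
# the point moves to (0, 1, 0) and scales to (0, 2, 0)
p = rotate_about_axis((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
p = scale(p, 2.0)
```

In a 3D HCI setting, the angle and scale factor would be driven by tracked controller motion or recognized gestures rather than hard-coded values.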