CAVE

From XVRWiki
[Image: A CAVE, showing the stereoscopic projection on the walls.]

A cave automatic virtual environment (CAVE) is a virtual reality environment in which stereoscopic images are projected onto the walls of a room, covering at least three of the walls.

Stereoscopy can be achieved either with polarized projection and passive polarized glasses or with active shutter glasses.

General characteristics

The walls of a CAVE are typically rear-projection screens. The floor can be a downward-projection screen, a rear-projection screen lit from below, or a flat-panel display.

The user wears 3D glasses inside the CAVE to see the imagery it generates. People using the CAVE can see objects apparently floating in the air and can walk around them, getting a proper view of how they would look in reality. Tracking was initially done with electromagnetic sensors but has since moved to infrared cameras.

The frame of early CAVEs had to be built from non-magnetic materials such as wood to minimize interference with the electromagnetic tracking system; the change to infrared tracking has removed that limitation. A CAVE user's movements are tracked by sensors typically attached to the 3D glasses, and the video continually adjusts to maintain the viewer's perspective. Computers control both this aspect of the CAVE and the audio. Multiple speakers placed at multiple angles in the CAVE provide 3D sound to complement the 3D video.
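The 3D audio described above can be approximated very simply. The following sketch (not from the article; speaker positions, names, and the inverse-distance model are illustrative assumptions) assigns each speaker a gain based on its distance from a tracked sound source:

```python
import math

def speaker_gains(source, speakers):
    """Illustrative inverse-distance gains, normalized to sum to 1.

    `source` is the tracked 3D position of a virtual sound source;
    `speakers` is a list of 3D speaker positions (assumed layout).
    """
    dists = [max(1e-6, math.dist(source, s)) for s in speakers]
    inv = [1.0 / d for d in dists]
    total = sum(inv)
    return [g / total for g in inv]

# Example: a source twice as far from the second speaker gets
# twice the gain on the first one.
gains = speaker_gains((0.0, 0.0, 0.0),
                      [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```

Real CAVE installations use more sophisticated spatialization, but the principle of weighting speakers by the source's tracked position is the same.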

Technology

A lifelike visual display is created by projectors positioned outside the CAVE and controlled by physical movements from a user inside the CAVE. A motion capture system records the real time position of the user. Stereoscopic LCD shutter glasses convey a 3D image.

Based on the motion capture data, the computers rapidly generate a pair of images, one for each of the user's eyes. The glasses are synchronized with the projectors so that each eye sees only the correct image. Because the projectors are positioned outside the cube, mirrors are often used to shorten the throw distance from the projectors to the screens. One or more computers drive the projectors; clusters of commodity desktop PCs can drive a CAVE at lower cost than specialized graphics hardware.
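The per-eye geometry can be sketched minimally as follows, assuming the motion capture system supplies a head position and a unit right-vector for the head's orientation (the interpupillary distance and all names here are illustrative, not from the article):

```python
IPD = 0.064  # assumed average interpupillary distance, in metres

def eye_positions(head_pos, right_vec, ipd=IPD):
    """Offset the tracked head position by half the IPD along the
    head's right vector to obtain left- and right-eye positions.
    Each eye position then seeds its own off-axis projection."""
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_vec))
    right = tuple(h + half * r for h, r in zip(head_pos, right_vec))
    return left, right

# Each frame: render the scene once per eye position, while the
# shutter glasses alternate in sync with the projector output.
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

In a real system each eye position feeds an off-axis projection matrix per screen, so the imagery stays geometrically correct as the viewer moves.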

Software and libraries designed specifically for CAVE applications are available, and there are several techniques for rendering the scene. Three scene graphs have been popular: OpenSG, OpenSceneGraph, and OpenGL Performer. OpenSG and OpenSceneGraph are open source; OpenGL Performer is free, but its source code is not included.

Calibration

To create an image that is neither distorted nor out of place, the displays and sensors must be calibrated. The calibration process depends on the motion capture technology in use. Optical and inertial-acoustic systems only require configuring the origin and axes of the tracking system. Calibrating electromagnetic sensors (like those used in the first CAVE) is more involved. In that case, a person puts on the special glasses needed to see the images in 3D.

The projectors then fill the CAVE with many one-inch boxes set one foot apart. The person takes an instrument called an ultrasonic measurement device, which has a cursor at its centre, and positions the device so that the cursor is visually aligned with each projected box.

This process can go on until almost 400 different blocks are measured. Each time the cursor is placed inside a block, a computer program records the location of that block and sends the location to another computer.
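The measurement grid described above can be sketched as follows. The spacing comes from the text (one foot apart); the seven-points-per-axis extent is an assumption chosen so the total lands near the "almost 400" blocks mentioned:

```python
FOOT = 0.3048  # one foot, in metres

def grid_points(n_per_axis=7):
    """Generate box-centre positions on a regular 3D grid spaced one
    foot apart. The extent (7 per axis) is assumed, not specified."""
    return [(x * FOOT, y * FOOT, z * FOOT)
            for x in range(n_per_axis)
            for y in range(n_per_axis)
            for z in range(n_per_axis)]

# 7 x 7 x 7 = 343 measurement points, close to the "almost 400"
# blocks the calibration procedure works through.
points = grid_points()
```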

If the points are calibrated accurately, there should be no distortion in the images projected in the CAVE. This also allows the CAVE to correctly identify the user's location and to track their movements precisely, so that the projectors can display images based on where the person is inside the CAVE.[1]
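The bookkeeping this implies can be sketched simply: each measured cursor position is paired with the known projected box position, and the residuals reveal where the tracking is distorted (the function and variable names here are illustrative):

```python
def calibration_residuals(projected, measured):
    """Per-point error vectors between the known projected box
    positions and the positions reported by the tracked device."""
    return [tuple(m - p for m, p in zip(mp, pp))
            for pp, mp in zip(projected, measured)]

def max_error(residuals):
    """Largest Euclidean error across all measured points; a small
    value indicates the calibration introduced little distortion."""
    return max(sum(c * c for c in r) ** 0.5 for r in residuals)
```

A real electromagnetic calibration would go further and fit a correction field from these residuals, but checking them is the first step.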

Uses

Prototypes of parts can be created and tested, interfaces can be developed, and factory layouts can be simulated, all before spending any money on physical parts. This gives engineers a better idea of how a part will behave within the product as a whole. CAVEs are also increasingly used for collaborative planning in the construction sector.[2] Researchers can use CAVE systems to conduct studies in a more accessible and effective way; for example, a CAVE was used to investigate training subjects to land an F-16 aircraft.[3]

The Electronic Visualization Laboratory (EVL) team at the University of Illinois at Chicago released CAVE2 in October 2012.[4] It is based on LCD panels rather than projection.

References

  1. "The CAVE (CAVE Automatic Virtual Environment)". http://inkido.indiana.edu/a100/handouts/cave_out.html. Retrieved 2006-06-27.
  2. Nostrad (2014-06-13). "Collaborative Planning with Sweco Cave: State-of-the-art in Design and Design Management". Slideshare.net. http://www.slideshare.net/Swecofinland/sweco-cave-20140613jjau. Retrieved 2014-08-04.
  3. Repperger, D. W.; Gilkey, R. H.; Green, R.; Lafleur, T.; Haas, M. W. (2003). "Effects of Haptic Feedback and Turbulence on Landing Performance Using an Immersive Cave Automatic Virtual Environment (CAVE)". Perceptual and Motor Skills 97 (3): 820–832. doi:10.2466/pms.2003.97.3.820. PMID 14738347.
  4. EVL (2009-05-01). "CAVE2: Next-Generation Virtual-Reality and Visualization Hybrid Environment for Immersive Simulation and Information Analysis". http://www.evl.uic.edu/cave2. Retrieved 2014-08-07.

External links

  • Carolina Cruz-Neira, Daniel J. Sandin and Thomas A. DeFanti. "Surround-Screen Projection-based Virtual Reality: The Design and Implementation of the CAVE", SIGGRAPH'93: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, pp. 135–142, doi:10.1145/166117.166134