The CAVE Environment

This section describes the CAVE Virtual Reality environment here at NCSA. CAVE is a recursive acronym that stands for CAVE Automatic Virtual Environment. The CAVE is a projection-based VR display that was developed at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago; it premiered at the SIGGRAPH '92 conference. In [5], the authors define a VR system as one that provides real-time, viewer-centered, head-tracked perspective with a large angle of view, interactive control, and binocular (stereo) display. Earlier VR systems, such as Head-Mounted Displays (HMDs) and BOOMs (TM), achieved these features by using small display screens that move with the viewer, close to the viewer's eyes. The CAVE was developed to overcome some of the limitations of HMDs and BOOMs, especially for scientific applications. It uses large, fixed screens more distant from the viewer, which minimizes the encumbrances carried or worn by users and allows multiple people to share the VR experience [6].

In addition to allowing multiple users to share the virtual environment, the CAVE offers other benefits. For instance, the CAVE is immersive, but it does not completely isolate users from the real world. According to [4], isolation from the real world can be highly intrusive and disorienting: the viewer remains aware of the real world and may fear events such as running into a wall. In [5], the authors observe from their experience that seeing one's own body also decreases the chances of nausea. It has also been shown in [5] that tracking errors in the CAVE are less distracting than in other systems, because the projection plane does not move with the viewer's position and angle as it does in an HMD or BOOM device.

The CAVE is made up of the following hardware components [5][6]. It is a 10x10x9-foot theater, with images rear-projected onto three of the walls, which serve as screens. A fourth projector projects onto the floor from above. In the current NCSA configuration, the four displays are driven by a single Silicon Graphics Onyx2 computer with four SGI InfiniteReality graphics pipelines; each graphics pipe drives one of the four displays. Each display has full workstation resolution (1024x768) and shows stereoscopic images at 96 Hz.
The 3D effect of the CAVE comes from stereo projection, which is achieved by alternately projecting an image for the left eye and an image for the right eye. Viewers wear StereoGraphics CrystalEyes LCD shutter glasses to view the stereoscopic images. The glasses are synchronized with the computer via infrared emitters mounted around the CAVE, so that whenever the computer is rendering the left-eye perspective, the right shutter on the glasses is closed, and vice versa. This tricks the brain and gives the illusion that the left eye is seeing only the left perspective and the right eye only the right perspective.
In order for the perspective to be accurate, the user's head position and orientation have to be tracked. Although many people can be in the CAVE at once, only one user (the driver) is tracked; the other people in the CAVE should therefore stand close to the driver to get a nearly correct perspective. The tracking is achieved with an Ascension Flock of Birds electromagnetic, six-degree-of-freedom tracking system. The tracker senses a tethered electromagnetic sensor mounted on the driver's glasses. A second electromagnetic sensor is attached to a device called the wand, which can be thought of as a 3D equivalent of a mouse. In addition to being tracked, the wand has three buttons and a pressure-sensitive joystick. The wand is the primary input device in the CAVE, though certain applications may integrate their own special hardware. Because the wand is tracked, it facilitates various interaction techniques that are not found on the desktop [11][10][1]. Such natural interaction techniques are one of the motivations for using virtual environments in the first place. For instance, an object in the virtual world can be picked by pointing at it with the wand: typically, a virtual beam is emitted from the wand, allowing the user to intersect that beam with the desired object and select it with a wand button.
Typically, the virtual world in a CAVE application is much larger than the physical size of the CAVE. To explore this world, some form of navigation has to be incorporated into the CAVE application. This navigation can come in many forms, including flying or walking, and may be controlled in various ways [11][1].

Software support for the CAVE comes in the form of the CAVE library. Applications are built on top of the CAVE library, which controls the display, tracking, and input devices. The CAVE library hides many of the device-specific details: it generates the correct perspective, synchronizes the screens, and so on. The library provides callback routines that execute the application code. In the application-specific code, the programmer can make use of CAVE library calls to poll the tracker states of the wand or the glasses, poll joystick and button states, etc. The CAVE library also provides a CAVE simulator that lets the developer test applications on the desktop. The simulator runs the same executables as the CAVE, using different configuration files. The configuration file specifies whether or not to use simulator mode, as well as the layout of the walls in the CAVE. This makes the CAVE library flexible and allows it to support projection-based setups other than the CAVE, such as the ImmersaDesk (IDesk) or the Infinity Wall (IWall); these systems are covered in detail in [6]. The standard CAVE library makes use of OpenGL (a standard graphics API) and requires CAVE applications to be written using OpenGL. In addition to the standard CAVE library, there is a small interface library called pfCAVE that allows users to use Performer in the CAVE. Performer is a software toolkit for 3D rendering that is based on OpenGL; it is discussed in detail in a later section.
Paul John Rajlich, Mon May 4 16:53:57 CDT 1998