Defense applications of the CAVE™

(CAVE Automatic Virtual Environment)

Scott K. Isabelle, Robert H. Gilkeya,b, Robert V. Kenyonc, George Valentinod,
John M. Flacha, Curtis H. Spennye, and Timothy R. Andersonb

 

a Department of Psychology, Wright State University, Dayton, OH 45435
b Armstrong Laboratory, AL/CFBA, Wright-Patterson Air Force Base, OH
c Department of Electrical Engineering, University of Illinois at Chicago
d Systran Corporation, Dayton, OH 45432
e Department of Aeronautics and Astronautics, Air Force Institute of Technology,
Wright-Patterson Air Force Base, OH. 

 

ABSTRACT 

The CAVE™ (CAVE Automatic Virtual Environment) is a multi-person, room-sized, high-resolution, 3D video and auditory environment that can be used to present highly immersive virtual environment experiences. This paper describes the CAVE technology and the capability of the CAVE system as originally developed at the Electronic Visualization Laboratory of the University of Illinois at Chicago and as more recently implemented by Wright State University (WSU) in the Armstrong Laboratory at Wright-Patterson Air Force Base (WPAFB). One planned use of the WSU/WPAFB CAVE is research addressing the appropriate design of display and control interfaces for controlling Uninhabited Aerial Vehicles. The WSU/WPAFB CAVE has a number of features that make it well-suited to this work: 1) high-resolution visual displays providing a 360° surround plus a projected floor, 2) virtual spatialized audio, 3) the ability to integrate real and virtual objects, and 4) rapid and flexible reconfiguration. However, even though the CAVE is likely to have broad utility for military applications, it has certain limitations that may make it less well-suited to applications requiring "natural" haptic feedback, vestibular stimulation, or interaction with close, detailed objects. 

Keywords: virtual environments, CAVE, uninhabited aerial vehicles

 

1.0 INTRODUCTION 

Although many of the core technologies that form the basis of virtual environment generation have been available for many years, the last decade has seen an explosion of interest and a revolution in thinking. New companies, laboratories, journals, magazines, and television programs have formed, all with virtual environments as a focus. Innovative technologies have been developed and old technologies have been dramatically improved, leading to higher fidelity, greater realism, and more immersive experiences. Perhaps more significant has been the substantial increase in affordability: this technology is now available much more broadly for research and applications both inside and outside the military.

The National Research Council1 considered the somewhat broader topic of synthetic environments (SEs), which includes virtual environments, teleoperator systems, and augmented displays, and identified a number of application domains that are relevant to the military: 1) medicine; 2) information visualization; 3) training; 4) design and manufacturing; 5) telecommunications; 6) hazardous operations; and 7) perceptual and psychological research. The military is currently supporting research in all of these areas. Wright State University (WSU) has had a long history of DoD-sponsored research programs and close collaboration with Air Force laboratories. Most recently, WSU has established a virtual environment research facility at Wright-Patterson Air Force Base (WPAFB), which includes a CAVE™ (CAVE Automatic Virtual Environment).2 The present paper will describe the CAVE technology, the specific implementation at WSU/WPAFB, and planned DoD-related use of the WSU/WPAFB CAVE.

 

2.0 THE CAVE AUTOMATIC VIRTUAL ENVIRONMENT (CAVE) 

2.1 The CAVE as implemented at the University of Illinois at Chicago 

The CAVE™ system was developed at the Electronic Visualization Laboratory (EVL) of the University of Illinois at Chicago, originally as a tool for data visualization.3 It is a multi-person, room-sized, high-resolution, 3D video and audio environment. It has subsequently been used for collaborative design projects, architectural visualization, and innovative forms of artistic and aesthetic expression. The CAVE, and other forms of projected virtual environments such as the ImmersaDesk™ and InfinityWall™, served as primary display systems for many of the projects in the Global Information Infrastructure (GII) Testbed that was part of the Information Wide Area Year (I-WAY) at Supercomputing '95, which was intended as a mechanism to accelerate progress in areas relevant to National Challenge and Grand Challenge problems.4 Other CAVE sites include the DoE's Argonne National Laboratory and the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. 

 

Figure 1. An artist's rendering of the CAVE at EVL. The viewpoint is from outside the CAVE looking in toward the back side of the front wall. The mirror in the foreground reflects light from the projector at the near left to form an image on the front wall. The projector and mirror at the far left project to one side wall, and the projector and mirror at the far right project to the other side wall. The rear wall is open, allowing entry to the CAVE. The projector and mirror above the CAVE project the image onto the floor.

 

The current configuration of the CAVE at EVL is a theater, about 3.1 x 3.1 x 2.7 meters, made up of three rear-projected screens for walls and a top-projected screen for the floor (Figure 1). The CAVE uses "window" projection, in which the projection plane and the center of projection relative to that plane are specified for each eye, creating an off-axis perspective projection.5 Silicon Graphics, Inc. (SGI) Onyx computers with RealityEngine2 (RE2) or InfiniteReality (IR) graphics rendering engines create the imagery that is projected onto the walls and floor. Electrohome Marquee 8000 or 8500 projectors throw stereo, full-color workstation fields (1024 x 768 stereo) at 96 Hz onto the screens, giving 3000 x 2000 linear pixel resolution to the surrounding composite image. To give the illusion of 3D, two alternating images are displayed on each wall at a rate of 96 Hz (a 48-Hz refresh rate per eye). The viewer wears stereo LCD shutter glasses (CrystalEyes, StereoGraphics) that present a different image to each eye by synchronizing the alternating shutter openings to the screen update rate. The CAVE provides a panoramic view that varies from 90° to greater than 180°, depending on the distance of the viewer from the projection screens. The direct viewing field of view is about 100° and is a function of the frame design of the stereo glasses. 
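To make the "window" projection concrete, the following minimal sketch computes the asymmetric frustum extents for one wall from the tracked eye position and the wall's corner coordinates; the same computation is repeated for each wall and for each eye, with the two eye positions offset by half the interocular distance. The sketch is written in Python with illustrative names, assumes a rectangular screen, and is not the interface of the actual CAVE library.

    import numpy as np

    def off_axis_frustum(eye, lower_left, lower_right, upper_left, near):
        """Asymmetric (off-axis) frustum extents for one CAVE wall.

        eye -- tracked eye position in world coordinates (3-vector)
        lower_left, lower_right, upper_left -- screen corners, world coords
        near -- near clipping distance
        Returns (left, right, bottom, top) extents on the near plane,
        as used by an OpenGL-style glFrustum call.
        """
        # Orthonormal screen basis: right, up, and normal (toward viewer).
        vr = (lower_right - lower_left) / np.linalg.norm(lower_right - lower_left)
        vu = (upper_left - lower_left) / np.linalg.norm(upper_left - lower_left)
        vn = np.cross(vr, vu)

        # Vectors from the eye to three of the screen corners.
        va = lower_left - eye
        vb = lower_right - eye
        vc = upper_left - eye

        d = -np.dot(va, vn)                  # eye-to-screen-plane distance
        left = np.dot(vr, va) * near / d
        right = np.dot(vr, vb) * near / d
        bottom = np.dot(vu, va) * near / d
        top = np.dot(vu, vc) * near / d
        return left, right, bottom, top

As the eye moves, the computed frustum becomes progressively more skewed while the image remains geometrically correct on the fixed projection plane; this is the essential difference from the single, symmetric eye point of a conventional simulator.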

The ImmersaDesk (I-Desk) was also developed at EVL. The I-Desk is a drafting-table-format virtual prototyping device. Using stereo glasses and magnetic head and hand tracking, this projection-based system offers a semi-immersive virtual environment. Rather than surrounding the user with graphics and blocking out the real world, the I-Desk features a 1.2 x 1.5-meter rear-projected screen set at a 45° angle. The size and position of the screen give a moderately wide-angle view and the ability to look down as well as forward. The resolution is 1024 x 768 at 96 Hz. The I-Desk can be driven by either an SGI Onyx with RE2 or IR graphics or an SGI Indigo2 with Maximum or High IMPACT graphics. 

The InfinityWall (I-Wall) is a third data visualization technology developed at EVL. The I-Wall is a large-screen, high-resolution stereo projection display well-suited for large audiences. Low-cost passive polarized glasses (like the cardboard glasses used for viewing 3D movies) can be used instead of the active stereo glasses required by the CAVE and I-Desk systems. The I-Wall achieves its immersion through wide-screen projection but, unlike the CAVE and I-Desk, provides no way to look down, a limitation inherent in any normal audience seating arrangement. As noted above, the GII Testbed at Supercomputing '95 was the first use of the I-Wall as a virtual environment display device.4 

In both the CAVE and the I-Desk, the user's head and hand are tracked with tethered electromagnetic sensors operating at a 144-Hz sampling frequency in a dual-sensor configuration (Flock of Birds, Ascension Technology). The tracker has a valid operating range of 2.3 meters and contributes a minimum total system delay of about 50-75 ms. The correct image perspective and stereo projections are computed from the values returned by the position sensor attached to the stereo shutter glasses. The second position sensor allows the user to interact with the virtual environment. Finally, computer-controlled audio provides a sonification capability. 

2.2 The CAVE as implemented by Wright State University 

In 1995, six institutions in Ohio (the Air Force Institute of Technology, Miami University, Kent State University, University of Cincinnati, University of Dayton, and Wright State University) formed the Ohio Consortium for Virtual Environment Research (OCVER), a multidisciplinary group comprising psychologists, engineers, computer scientists, and physicians, and submitted a proposal to the Ohio Board of Regents to build the Virtual Environment Research, Interactive Technology, And Simulation (VERITAS) facility. An initial award was made to Wright State University, the OCVER lead institution, to develop the VERITAS facility in order to support both basic research on the sensory, motor, and cognitive underpinnings of human performance in synthetic environments; and the subsequent transfer of this technology to cockpits, endosurgery, design and manufacturing, prosthetic/orthotic devices, and other applications of active interest to OCVER members. For a variety of reasons, including longstanding and productive collaborations with the Armstrong Laboratory, the site chosen to house the facility was the Biodynamics and Biocommunications Division, Crew Systems Directorate, of the Armstrong Laboratory (AL/CFB) at Wright-Patterson Air Force Base. When developing the facility, we emphasized the use of commercial off-the-shelf components for both hardware and software. Prosolvia Research and Technology, Inc. (Troy, MI) acted as the integrator. Initial operation of the VERITAS facility began in the first quarter of 1997. 

A unique aspect of the VERITAS CAVE is the addition of a fourth wall at the rear, implemented as a sliding door/rear-projection screen that enables the display of 360° surround visual images. This CAVE is 3.1 x 3.1 x 3.1 meters, with four rear-projected walls and a down-projected floor (Pyramid Video). Imagery is created by an SGI Onyx with InfiniteReality graphics. As in the EVL CAVE, the images are displayed by CRT projectors (Marquee 8500, Electrohome), a magnetic tracking system monitors the position and orientation of the user's head and hand (Flock of Birds, Ascension Technology), and stereo images are created with LCD shutter glasses (CrystalEyes, StereoGraphics). In addition, the user's finger-pinch gestures are sensed to provide one means of interacting with the virtual environment (PinchGloves, Fakespace). The unique fourth wall provides complete immersion in the azimuthal dimension (the only non-imaged area is directly overhead), allowing the operator to monitor and interact with the environment to the rear as well as to the sides and the front. 

The VERITAS CAVE will also emphasize multisensory displays. For spatialized audio, we have integrated the system used in our binaural and spatial auditory research (PowerSDAC, Tucker-Davis Technologies). This system uses 28 digital signal processors to apply head-related transfer functions (i.e., spatial filters) of up to 127 points to as many as 28 sounds or echoes. As implemented, these 3-dimensional virtual sounds are delivered through headphones. The off-the-body projection system of the CAVE limits the ability to integrate haptic stimulation (i.e., haptic stimulators are likely to be visible to the user). Nevertheless, we plan to investigate a number of force-feedback manual control sticks and hand-controllers for manipulating objects and controlling the virtual environment. 
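To illustrate the underlying signal processing, the sketch below spatializes a single mono source by convolving it with a measured pair of head-related impulse responses. This is a simplified, offline analogue of what the PowerSDAC's DSPs do in real time for up to 28 sources; the function and variable names are ours, not the product's interface.

    import numpy as np

    def spatialize(source, hrir_left, hrir_right):
        """Render a mono source at one virtual direction for headphones.

        source -- mono signal (1-D array)
        hrir_left, hrir_right -- head-related impulse responses measured
            for the desired direction (e.g., up to 127 taps each)
        Returns a two-column (left, right) array for headphone playback.
        """
        left = np.convolve(source, hrir_left)
        right = np.convolve(source, hrir_right)
        return np.stack([left, right], axis=1)

Because the left- and right-ear impulse responses differ in delay, level, and spectral shape, the convolved pair carries the interaural cues that make the source appear to originate from the measured direction.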

To support the anticipated broad variety of synthetic environment research activities, with special focus on our initial, defense-related applications, we have integrated commercial image generation software with the CAVE projection systems (Vega, Paradigm Simulation). This software provides a number of high-level development tools, while still allowing us to access, and optimize for, the underlying capability of the hardware platform (via SGI Performer and OpenGL). Furthermore, the chosen software platform supports options for importing CAD and terrain data (Clarus CAD Real-time Link, Prosolvia Clarus AB, Gothenburg, Sweden), simulating flight dynamics (FLSIM, Virtual Prototypes, Montreal, Canada), simulating ground vehicle dynamics (Clarus Drive), simulating manufacturing processes (Clarus Manufacturing), and interfacing with display and control devices (Clarus InteractiveVR). 

2.3 Viewer-centered perspective 

For some of our initial work, the CAVE will function like a flight simulator. In this role, the CAVE has certain advantages and disadvantages compared to traditional flight simulators. In many vehicle simulation environments, the operator's perspective view is fixed to the heading of the vehicle rather than to the operator's direction of gaze. For example, in most flight simulators the eye point used for the visual perspective, while located close to the expected location of the pilot's head, is fixed to the axis of the vehicle and not to the pilot's head direction. As a result, there is only one correct viewing direction for the rendering of the visual scene. Movements of the head and eyes away from the direction-of-projection can result in a somewhat distorted perspective view for the operator. 

In the CAVE and other head-tracked virtual environments, the perspective view is generated using a direction-of-projection determined by the measured position and orientation of the operator's head. Without this feature, the farther the user is from the true center of projection, the more distorted the images of near objects appear. The need to track the user's head for this and other reasons can, however, impose a substantial performance penalty: the generation of the images is at the mercy of the head-tracking instrument's performance. These systems can add long, and in some cases unacceptable, delays between user motion and the resulting motion on the screen. This is especially true for magnetic systems, which have gained popularity because they give the operator freedom to roam about the environment. In addition to the lag, these systems exhibit nonlinearities near the edges of the tracker range, caused by metallic objects and by electromagnetic fields created by other devices in and about the CAVE. These nonlinear errors can so distort the image that objects can appear to fly away from the observer as they are approached. To counteract these effects and make the environment useful for training physical-world tasks, the tracker must be calibrated within the working space, which adds complexity to these systems and more computations per image. The nonlinearities can be corrected to within 1.5% by linearizing values with a correction table containing measured positions in the CAVE and then applying linear interpolation to points that lie between the measured values.6
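The sketch below illustrates this correction-table method under the stated assumptions: true positions are measured at a regular grid of calibration points within the CAVE, and a raw reading between grid points is corrected by trilinear interpolation over the eight surrounding measurements. The names and data layout are illustrative only.

    import numpy as np

    def correct_tracker(raw, table, grid_min, spacing):
        """Correct a magnetic-tracker reading with a measured lookup table.

        raw -- (x, y, z) position reported by the tracker
        table -- array of shape (nx, ny, nz, 3) holding the true position
            measured at each calibration grid point
        grid_min -- world coordinates of grid point (0, 0, 0)
        spacing -- calibration grid spacing, in meters
        """
        # Locate the grid cell containing the reading and the fractional
        # offset of the reading within that cell.
        f = (np.asarray(raw) - grid_min) / spacing
        i = np.clip(np.floor(f).astype(int), 0, np.array(table.shape[:3]) - 2)
        t = f - i

        # Trilinear interpolation over the eight surrounding grid points.
        out = np.zeros(3)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((t[0] if dx else 1 - t[0]) *
                         (t[1] if dy else 1 - t[1]) *
                         (t[2] if dz else 1 - t[2]))
                    out += w * table[i[0] + dx, i[1] + dy, i[2] + dz]
        return out

The table itself is built once, by placing the sensor at each grid point and recording both the reported and the independently measured position; at run time the correction costs only a table lookup and a few multiplications per frame.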

2.4 Coexisting physical and virtual objects 

One of the advantages that the CAVE affords its users is the ability to see physical and virtual objects simultaneously. The user can directly view his/her own body, limbs, and hands, those of another person, or other real objects in the CAVE. Consequently, we do not need to allocate computational resources to modeling or rendering replicas of objects that can instead be brought into the CAVE as real objects. We plan to exploit this capability so that users can see and manipulate real objects, allowing a more detailed view and better dexterity in controlling them.7 On the other hand, this advantage also introduces anomalies into the visual world. For example, physical objects can occlude virtual objects, but the reverse is not true. In addition, conflict between the accommodative and convergence stimuli furnished by adjacent physical and virtual objects within the work space can lead to eye strain and visibility problems within the environment. 

3.0 DEFENSE APPLICATIONS OF THE CAVE 

3.1 The CAVE and uninhabited aerial vehicles 

Although we expect to use the CAVE to examine a wide variety of basic and applied issues, much of our initial research effort will focus on applications relevant to the Air Force. Specifically, we view the CAVE as an extremely flexible prototyping environment in which perceptual, motor, and cognitive issues related to the design and evaluation of effective displays and controls can be investigated. In particular, we will consider Uninhabited Aerial Vehicles (UAVs) and how effective interface designs may help to overcome the information limitations that are inherent when a vehicle is piloted remotely. The initial applications of UAVs have been in the area of reconnaissance, where flight control is relatively simple and considerable automation is possible. Uninhabited Tactical Aircraft (UTAs) will be much more demanding and will require a higher degree of human-in-the-loop control. Moreover, UTAs may have a wider performance envelope than manned tactical aircraft, even though the information channel is likely to be narrower, less reliable, and more sluggish. Effective display designs must take these limitations into account and provide meaningful information that allows the operator to maintain situation awareness in the remote environment. Similarly, the control interface must strike the appropriate balance between manual control and automation in order to optimize performance within the limits of mission and system constraints. 

One way that we will use the CAVE is as a flexible "flight simulator," integrating a cockpit mockup in the center of the CAVE and using the projection system to display an "out-the-window" view of the environment surrounding the virtual UAV. However, an interesting aspect of designing for UAV control is that the constraints of traditional aircraft cockpits (e.g., space limitations, pilot expectations, etc.) need not apply. For this reason, the CAVE is particularly well-suited for our research, not only because it can serve as a flight simulator, but because it can simulate a wide variety of control environments, which can be rapidly altered to address specific research questions. We plan to evaluate several general display/control formats, including pilot-centered (with a virtual or mockup cockpit), command and control center (an "infinity wall" with map and information displays), and object-resolved control with a God's-eye view. Moreover, we anticipate that different formats may have advantages for specific sub-tasks, so that a mixture of display/control formats may be used for different stages of a mission or for different flight team members: for example, a pilot-centered format for ordnance delivery, a command and control center format for navigation, and a God's-eye format for managing multiple aircraft in formation flying. Within each format the same basic questions can be asked: How can displays be designed so that operators effectively integrate information to maintain situation awareness? How can controls be designed so that operators' intentions are directly communicated to the UAV? 

As mentioned, in the pilot-centered view, a cockpit mockup would be positioned in the center of the CAVE and the walls of the CAVE would be used to display the surrounding environment. Although our current plan will not emphasize the fidelity of the cockpit mockup relative to any particular aircraft, it will include pedals, throttle, stick, and multi-function displays to create a view much like that in a traditional fixed-base flight simulator. 

In the command and control center format, one or more workstations would be positioned in the CAVE and the walls of the CAVE would be used to present, in a manner similar to an InfinityWall, maps, videos, status displays on one or more UAVs, and other mission relevant data. Operators would be free to move around and interact with each other or with the information presented on the wall. They would also be able to move information from the walls to their own high-resolution workstation for detailed viewing. Operators would control the information displays and the UAVs using keyboard, 3D mouse, or remote pointer. 

Another format we plan to evaluate uses object-resolved controls8 and a God's-eye view of the UAV and the surrounding environment. Such a system assumes considerable intelligence resident in the UAV. The operator would directly manipulate a model of the UAV (a proxy, implemented via a force-feedback hand-controller) within a 3-dimensional model of the airspace. High-level (low-bandwidth) commands could be sent to the UAV (e.g., desired location and orientation, rather than direct commands to the control surfaces). Information about the status of the aircraft relative to its performance limits or relative to a predetermined flight path could be delivered back to the operator through haptic feedback via the proxy (e.g., so that the operator could "feel" the performance envelope), through visually displayed instruments that directly represent the constraints of the flight control task, and through other sensory displays (e.g., an auditory display to help monitor the location of the most urgent threat or target). If such an interface system were used with a real UAV, the position of the UAV could be updated from GPS tracking data, without updating the entire visual display unless onboard sensor information dictated otherwise. The impact of delays, bandwidth limitations, and interference should be less severe with this approach. This representation would allow the operator to monitor the surrounding airspace broadly (perhaps simultaneously controlling multiple UAVs), relying heavily on the intelligence in the aircraft while still retaining the ability to respond to unexpected threats and targets of opportunity. 
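As a schematic illustration of the distinction between high-level and direct control, the fragment below sketches what a low-bandwidth, proxy-derived command might contain; all names and fields are hypothetical and are not part of any fielded UAV data link.

    from dataclasses import dataclass

    @dataclass
    class ProxyCommand:
        """A high-level command derived from the operator's proxy motion.

        The ground station sends a target state; the UAV's onboard
        autopilot determines the control-surface deflections needed to
        reach it.  All field names here are hypothetical.
        """
        time_s: float        # mission time at which the command was issued
        north_m: float       # desired position, local level frame
        east_m: float
        altitude_m: float
        heading_deg: float   # desired orientation
        speed_mps: float     # desired airspeed

    # A few waypoint-style commands per second suffice, whereas a direct
    # control-surface loop would need tens of updates per second over a
    # channel that may be slow, narrow, and unreliable.
    cmd = ProxyCommand(time_s=1042.5, north_m=1200.0, east_m=-350.0,
                       altitude_m=2500.0, heading_deg=85.0, speed_mps=90.0)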

The CAVE will also allow a mixture of these formats in which the best elements could be combined to form a new representation. Moreover, the CAVE will allow automatic or operator controlled transition from one representation to another so that different formats could be used for different tasks or during different parts of the mission. 

In our research, we will evaluate the effectiveness of display and control representations in the context of a full mission scenario, determining how mission requirements, as well as the bandwidth, reliability, and sluggishness of the information channel, influence the effectiveness of various interface formats. We believe that as task demands increase and the veracity of the information to and from the UAV decreases, there will be a greater need for higher-level functional representations to tell the UAV what needs to be done and to tell the operator what the UAV is doing. For example, instrument displays in traditional cockpits are designed to ensure that all state variables (e.g., airspeed and attitude) are available to the pilot. In principle, all of the functional constraints (e.g., the stall boundary) can be specified from these variables, and in fact skilled pilots can typically integrate over the state variables and act according to the higher-order functional constraints. With UAVs, however, this integration will be much more difficult. Therefore, the geometry of graphical and control interfaces should directly reflect the higher-order functional constraints. For example, the WrightCAD display being developed at WSU represents deviations from the optimal glide slope and the distance to the stall boundary directly in a configural flight display, which uses an optical flow metaphor to integrate all the primary flight variables (e.g., pitch, roll, airspeed, altitude, heading, etc.).9 
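As a simple concrete instance of such a functional constraint, the sketch below computes a margin-to-stall directly from state variables, using the standard relation that stall speed in a coordinated level turn grows with the square root of the load factor; the numbers and names are illustrative only.

    import math

    def stall_margin(airspeed_mps, bank_deg, v_stall_level_mps):
        """Margin to the stall boundary in a coordinated level turn.

        In a coordinated level turn the load factor is n = 1/cos(bank),
        and stall speed scales as sqrt(n); a configural display can show
        this margin directly instead of leaving the pilot to integrate
        airspeed and attitude mentally.
        """
        n = 1.0 / math.cos(math.radians(bank_deg))    # load factor
        v_stall = v_stall_level_mps * math.sqrt(n)    # stall speed at this bank
        return airspeed_mps - v_stall                 # positive = safe margin

    # A 60-degree bank doubles the load factor, raising stall speed by
    # about 41%: at 70 m/s with a 45 m/s level-flight stall speed, the
    # margin shrinks from 25 m/s to about 6 m/s.
    print(stall_margin(airspeed_mps=70.0, bank_deg=60.0, v_stall_level_mps=45.0))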

3.2 Other defense-related applications of the CAVE 

The CAVE has a number of features that make it particularly well-suited for our applications. The ability to rapidly and flexibly reconfigure the CAVE allows a variety of display and control representations to be evaluated. The 360° surround imagery, projected floor, 100° field of view, and spatialized audio allow users to monitor threats and targets throughout the air and ground combat space, supporting good situation awareness. The ability to integrate real objects, such as a mockup cockpit or workstation, allows maximum resolution and usability without introducing cumbersome visual and haptic rendering problems. 

On the other hand, there are a number of limitations to the CAVE approach that may make it less well-suited for some applications. The absence of a projected ceiling makes the CAVE better suited to air-to-ground simulations than to primarily air-to-air simulations. The physical constraints of the CAVE projection system limit the ability to render virtual vestibular information (i.e., any large motion base within the CAVE would be likely to interfere with the projected virtual world, and moving the whole CAVE would be unwieldy) and virtual haptic information (i.e., a force-feedback or tactile display system is likely to be visible to the user and may interfere with the intended experience). Close-up, highly detailed work may be difficult to render with sufficient fidelity unless additional display systems (e.g., workstations) are added. Moreover, the introduction of a real object (e.g., the user's hand) near a close-up virtual object may disrupt the experience because of the simultaneous need to focus on the wall (to see the virtual object) and close-up (to see the real object). 

3.2.1 Medicine 

Although the military has considerable interest in virtual medicine and telemedicine, the CAVE may not be the best display system for many medical applications. Defense-specific research activities relating to SE-based surgical training are coordinated by the tri-service Medical Advanced Technologies Office (www.matmo.org). The Medical Readiness Strategic Plan 2001, released in 1994, identified two application arenas for advanced visual simulation, virtual environment, and telecommunications technologies: 1) integrating tele-imaging and remote virtual environment displays to support real-time consultation with medical specialists; and 2) providing realistic training in battlefield techniques and procedures in the context of medical readiness. Some characteristics of the CAVE seem well-suited to telecommunications applications (e.g., the wide field of view and the ability to support multiple users). Similarly, the CAVE could be used to simulate an entire operating room with reasonable fidelity; however, as previously noted, the CAVE may be less well-suited to rendering the highly detailed close-up area where the surgeon's hands are working, and thus to simulating the surgery itself (which may be better handled by other technologies, such as head-mounted displays). Moreover, although a variety of technologies exist for delivering haptic feedback through virtual surgical instruments, it may be difficult to integrate these devices into the CAVE environment without interfering with the immersive experience. 

3.2.2 Data visualization 

The military has wide-ranging interest in data visualization. In the course of the DoD's High Performance Computing Modernization Program (HPCMP), the scientific and engineering research effort of the DoD laboratories and centers has been partitioned into ten discipline-specific Computational Technology Areas (CTAs): computational fluid dynamics; computational structural mechanics; computational electromagnetics and acoustics; computational electronics and nanoelectronics; computational chemistry and materials science; integrated modeling and test environments; climate/weather/ocean modeling and simulation; signal/image processing; environmental quality modeling and simulation; and forces modeling and simulation/C4I (www.hpcmo.hpc.mil). Many or all of these efforts depend heavily on advanced visualization capabilities. The visualization of complex data sets was one of the original motivations for the CAVE, and many of the earliest applications were in this area. The surround imagery and the user's ability to move around in the display give the user the opportunity to "get inside" the data set in a way not possible with most visualization approaches. The NCSA has demonstrated the advantages of the CAVE display for the visualization of data from molecular modeling and the simulation of weather patterns.

For some visualization applications, particularly those designed to serve a group of more than about 5 to 10 observers, a display configuration such as the InfinityWall may be better suited than the CAVE. For example, the Interactive DataWall developed at Rome Laboratory (similar in concept to the InfinityWall) is a 1 x 4-meter flat display with a total of 1200 x 4800 pixels, which is envisioned for defense applications in command and control, mission planning and rehearsal, battle management, and data fusion (see www.rl.af.mil). If full immersion is not required, and if the audience is small, then a table-top or drafting table approach such as the ImmersaDesk may be better suited. At least three of the DoD Major Shared Resource Center HPC sites are using the ImmersaDesk for visualization: Army Corps of Engineers Waterways Experiment Station, Army Research Laboratory Aberdeen Proving Ground, and Wright-Patterson Air Force Base (see also www.hpcmo.hpc.mil). The ARL is developing a virtual environment-based substitute for the large mockup boards, known as "sand tables," used as strategy planning aids by military commanders (see also www.arl.mil/EA/tnews1.html). In our UAV work, we plan to use the CAVE to present a God's-eye view of other aircraft, threats, and targets in the 3D environment surrounding the UAV. We expect that a similar 3D display could be quite useful as an aid to airspace management in a command and control center. 

3.2.3 Simulation and training 

Many of the characteristics of the CAVE that make it well-suited to our applications as a "flight simulator" may also be advantageous in other simulation and training applications. For example, the surround imagery, wide field of view, ability to move, and incorporation of spatialized audio may all be important factors for simulation and training of dismounted troops (those who participate in combat without the use of vehicles). Some current training systems incorporate CAVE-like technology. The Team Tactical Engagement Simulator, developed through the Naval Air Warfare Center, Training Systems Division (NAWCTSD), consists of a single rear-projected screen, with an operating (free movement) area of about 2.5 x 2.5 m. Magnetic tracking of the trainee's head and weapon is used both to compute correct perspective (no stereo) and to provide position information to other similar display systems, in which the trainee is represented to other participants by a computer-generated image (an avatar).10 A multiple-room version, the Weapons Team Engagement Trainer, is being developed at NAWCTSD for military and civilian SWAT applications in hostage rescue, ambush response, etc. Another CAVE-like system in development is the Dismounted Infantry Virtual Environment (DIVE), planned for use in the Dismounted Battlespace Battle Lab at the Infantry School at Fort Benning. The DIVE system consists of a triangular rear-projection screen chamber, with a circular usable floor area. Optical (video-based) techniques for tracking the trainee's head, limb, and weapon position are to be used (our impression is that they are transitioning to magnetic tracking11). 

A major technological issue limiting this type of simulation is the development of a "treadmill" system to adequately simulate the desired repertoire of movements. The Army's Simulation, Training, and Instrumentation Command (STRICOM) has supported development efforts aimed at creating a system that will support simulated movements by dismounted troops within virtual environments, for example, the OmniTrek™ (CGSD Corp.), the LocoSim™ (SYSTRAN Corp.), and the Omni-Directional Treadmill (Virtual Space Devices). Dismounted troops must engage in a wide variety of movements to exploit opportunities and conceal themselves from threats. Ideally, such a system would allow a trainee, while exerting the same effort as in the real world, to run, walk, crawl, crab, climb, step over objects, etc., but with actual translational motion limited to the confines of the simulator (i.e., 3.1 x 3.1 x 3.1 m in the case of the CAVE). Thus the requirements for interfacing dismounted troops are extremely broad and challenging and all of the current approaches have limitations; in particular, most devices are large and would require compromises if they were interfaced to the CAVE. 

Even though appropriate simulation of locomotion is likely to be a persistent problem in virtual environments, we expect CAVE-like systems to have broad utility for training. Other military applications in which we expect CAVE-like systems to be useful for training include equipment maintenance, midair refueling, firefighting, and others. Many of these applications will benefit from the fact that multiple users can work together in the same CAVE, seeing and interacting with each other as they normally would, with only minimal intrusions by cumbersome body-worn devices. That is, the training environment will resemble the target environment in its social/teaming aspects as well as its sensory/cognitive features. 

3.2.4 Prototyping and development 

Even for those applications where the constraints of the CAVE may eventually make it impractical, it is likely that initial explorations can usefully take advantage of the rapid prototyping capability of the CAVE (e.g., designing and evaluating cockpits, weapon systems, etc.). Data from CAD programs can be readily imported and rendered in the CAVE, allowing visualization of, and interaction with, virtual versions of the devices or systems being developed. 

Augmented displays are another development area in which the CAVE may be able to make unique contributions. For example, consider an augmented display for surgical use, in which a see-through HMD would be used to superimpose diagnostic data (e.g., CAT scans) on the real surgical field. In such a setting, a significant problem is image registration (accurately aligning the virtual images with the corresponding real objects). In the CAVE, the images presented through the HMD and the images presented on the CAVE walls would both be computer-generated, allowing other display design issues to be addressed and evaluated without the need to solve the registration problem up front. 

4.0 CONCLUSION 

The CAVE technology provides a wide range of capabilities for virtual environment, simulation, and data visualization work. It is particularly well-suited to our research, which focuses on the simulation of UAV control in primarily air-to-ground missions. Although the CAVE has some constraints that may limit its applicability to certain other problems, we expect the CAVE and CAVE-like technologies to have broad utility in the military.

 

ACKNOWLEDGEMENTS 

This work was supported by a Grant from the Air Force Office of Scientific Research (F49620-95-1-0106). Additional support was provided by the Ohio Board of Regents Investment Fund Program and Research Challenge Program.

 

REFERENCES 

1. N. I. Durlach and A. S. Mavor (Eds.), Virtual Reality: Scientific and Technological Challenges, National Academy Press, Washington, D.C., pp. 35-36, 1995.

2. CAVE, ImmersaDesk, and InfinityWall are all trademarks of the Trustees of the University of Illinois.

3. C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, "Surround-screen projection-based virtual reality: The design and implementation of the CAVE," Comp. Graph., 27, pp. 135-142, 1993.

4. Supercomputing '95, ACM/IEEE, San Diego, CA, 1995.

5. P. J. Bos, "Performance limits of stereoscopic viewing systems using active and passive glasses," Proc. of the IEEE Annual Virtual Reality International Symposium (VRAIS), pp. 371-376, 1993.

6. M. Ghazisaedy, D. Adamczyk, D. J. Sandin, R. V. Kenyon, and T. A. DeFanti, "Ultrasonic calibration of a magnetic tracker in a virtual reality space," Proc. of the IEEE Annual Virtual Reality International Symposium (VRAIS), pp. 179-188, 1995.

7. R. V. Kenyon and M. Afenya, "Training in Virtual and Real Environments," Ann. Biomed. Eng. (accepted).

8. C. H. Spenny and D. L. Schneider, "Object-Resolved Teleoperators," IEEE Intl. Conf. on Robotics and Automation, (to appear), April, 1997.

9. J. M. Flach, "Ready, fire, aim: Toward a theory of meaning processing systems," In D. Gopher & A. Koriat (Eds.). Attention & Performance XVII (under review).

10. C. R. Karr, D. Reece, and R. Franceschini, "Synthetic Soldiers," IEEE Spectrum, 34, pp. 39-45, March 1997.

11. Based on conversations at a meeting attended by Valentino, March 1995 at the Infantry School, Fort Benning.