Virtual Reality

This chapter explains Virtual Reality. The first section gives a short introduction in which some of the common terms of Virtual Reality are explained. It is followed by a short overview of the history of Virtual Reality, which may help to explain why we thought it necessary to do research on presence and people in Virtual Reality. The next section describes the Virtual Environments group at the Graphics, Visualization & Usability Center, GVU, at the Georgia Institute of Technology: who works in the group and what equipment is available. The final section gives a short description of the Simple Virtual Environment Toolkit, SVE, used and developed at GVU [verlinden 93].

First of all, the term Virtual Reality has to be explained. By Virtual Reality, I mean a computer-generated world with which the user can interact. The interaction can range from simply looking around to interactively modifying the world.

Virtual Reality is commonly associated with people wearing large helmets and gloves, but this does not have to be the case. A simple projection on a large video screen is also a form of Virtual Reality, often called Projected Reality. A nice example of Projected Reality is the CAVE [cruz-neira 93].

Yet another form of Virtual Reality is Augmented Reality. In Augmented Reality, the subject is still able to see the actual world around him, but wears a helmet onto which additional information can be projected.

Terms used in Virtual Reality

Virtual Reality is still relatively unknown to most people. They will have heard about Virtual Reality but do not know exactly what it is or what equipment it involves. This section gives an overview of the equipment used in Virtual Reality: first a broad overview of all the equipment, then a short description of how the different pieces work, and at the end an explanation of other terms used in Virtual Reality.


Figure 2.1: Person wearing all VR equipment

Figure 2.1 shows a common Virtual Reality setup. The subject wears a Head Mounted Display, HMD, which is used to project a computer-generated world in front of him. On one hand he wears a glove, which is used to get information about the hand and its fingers. Above the subject hangs a transmitter, and receivers are placed on both the HMD and the glove. Together the transmitter and receivers make up the trackers, which are used to determine where the subject is in the real world.

Figure 2.2: Head Mounted Display

An HMD, see Figure 2.2, is a helmet with two small television screens inside. Optics make it possible for the user to watch the picture on these miniature screens from close up, and are often used to enlarge the image so that it fills the user's entire view.

Trackers are used to find out where the subject is in the real world, see Figure 2.1. This information about the real whereabouts is then used in the virtual world. A receiver is often attached to the HMD; this way, whenever the subject moves his head in the real world, the virtual world changes accordingly. A second receiver can be attached to the CyberGlove(TM). This makes it possible to find out where the hand is in the real world and, again, use this information in the virtual world.
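
To make this concrete, the fragment below sketches how tracker data could be applied to the viewpoint each frame. All function and type names here are invented for illustration; they do not come from any particular tracker library:

    /* A minimal head-tracking sketch; all names are invented. */
    typedef struct {
        float x, y, z;           /* position of the receiver          */
        float yaw, pitch, roll;  /* orientation of the receiver       */
    } TrackerRecord;

    extern TrackerRecord poll_tracker(int receiver_id);    /* hypothetical */
    extern void set_camera(float x, float y, float z,
                           float yaw, float pitch, float roll); /* hypothetical */

    void update_viewpoint(void)
    {
        /* Read the receiver mounted on the HMD... */
        TrackerRecord head = poll_tracker(0);
        /* ...and move the virtual camera to match the real head. */
        set_camera(head.x, head.y, head.z,
                   head.yaw, head.pitch, head.roll);
    }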

Figure 2.3: Virtual Technologies Inc. CyberGlove(TM)

The CyberGlove(TM), see Figure 2.3, is used to measure the bend of the individual fingers. Inside the glove are small metal sensors that register how far they are bent. Using this information, it is possible to know the angle of each individual joint of the hand, which in turn makes it possible to recognize gestures made with the hand. One of the most common gestures is the grab gesture, a fist. Another common gesture is the fly gesture, a finger pointing in the direction in which the user wants to fly.
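
As an illustration, the following sketch shows one simple way such gestures could be recognized from the joint angles. The threshold value and the assumption that the glove reports one bend angle per finger, in degrees, are mine for this example, not taken from the CyberGlove(TM) documentation:

    /* A minimal gesture recognizer. Assumes one bend value per finger,
       in degrees, where 0 is straight; finger order is assumed to be
       thumb, index, middle, ring, pinky. Threshold is illustrative. */
    #define NUM_FINGERS    5
    #define BENT_THRESHOLD 60.0f

    typedef enum { GESTURE_NONE, GESTURE_GRAB, GESTURE_FLY } Gesture;

    Gesture recognize(const float bend[NUM_FINGERS])
    {
        int i, bent = 0;
        for (i = 0; i < NUM_FINGERS; i++)
            if (bend[i] > BENT_THRESHOLD)
                bent++;
        if (bent == NUM_FINGERS)
            return GESTURE_GRAB;   /* a fist: every finger bent        */
        if (bent == NUM_FINGERS - 1 && bend[1] <= BENT_THRESHOLD)
            return GESTURE_FLY;    /* only the index finger extended   */
        return GESTURE_NONE;
    }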

The computer-generated world consists of colored polygons. These polygons can also be colored using an image: the image is put on the polygon and stretched until it fits. This technique is called texture mapping.
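
In practice this is usually done by attaching texture coordinates to the polygon's vertices, between which the renderer stretches the image. A schematic example (the structure is invented for illustration, not part of any particular library):

    /* Schematic only: each vertex carries (u,v) coordinates into the
       image, with (0,0) at one corner of the image and (1,1) at the
       opposite corner. The renderer interpolates between them. */
    typedef struct { float x, y, z; float u, v; } TexturedVertex;

    TexturedVertex quad[4] = {
        { 0.0f, 0.0f, 0.0f,   0.0f, 0.0f },  /* lower left  of the image */
        { 1.0f, 0.0f, 0.0f,   1.0f, 0.0f },  /* lower right              */
        { 1.0f, 1.0f, 0.0f,   1.0f, 1.0f },  /* upper right              */
        { 0.0f, 1.0f, 0.0f,   0.0f, 1.0f }   /* upper left               */
    };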

When there are light sources in the computer-generated world, for instance the sun, a technique can be used to change the colors of the polygons according to the position of the light source. This technique, shading, helps to make the world look more real.
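
The simplest common form of shading is diffuse (Lambertian) shading, in which a polygon's brightness depends on the angle between its surface normal and the direction of the light source. A minimal sketch:

    #include <math.h>

    /* Diffuse (Lambertian) shading: brightness is proportional to the
       cosine of the angle between the surface normal n and the
       direction l toward the light; both vectors must be normalized. */
    float diffuse_intensity(const float n[3], const float l[3])
    {
        float d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];  /* cos(angle)   */
        return d > 0.0f ? d : 0.0f;   /* faces turned away stay dark  */
    }

    /* The polygon's color is then scaled by this intensity, e.g.
       shaded_red = base_red * diffuse_intensity(normal, light_dir);  */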

Auditory and visual feedback are not the only kinds of feedback used in Virtual Reality. It is also possible to push against the user whenever he hits an object. This is called force feedback. Using this technique, the user can feel the objects in the virtual world.

When the same image is presented to both eyes, the display is called monoscopic; when each eye receives its own image, it is called stereoscopic. Using the difference between the two images, the user is able to get depth perception.
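
Stereoscopic rendering is typically achieved by drawing the scene twice per frame, from two viewpoints separated horizontally by roughly the distance between the human eyes. A minimal sketch, with invented function names:

    /* A minimal stereoscopic render loop; the API names are invented.
       EYE_SEPARATION approximates a typical human interocular distance. */
    #define EYE_SEPARATION 0.065f   /* meters, an assumed typical value */

    extern void set_camera_offset(float dx);  /* hypothetical */
    extern void render_world(int which_eye);  /* hypothetical */

    void render_stereo_frame(void)
    {
        set_camera_offset(-EYE_SEPARATION / 2.0f);  /* left eye  */
        render_world(0);
        set_camera_offset(+EYE_SEPARATION / 2.0f);  /* right eye */
        render_world(1);
    }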

Aliasing is caused by the computer attempting to draw two images at the same place on the screen. This can happen when two objects are modeled in the same place, but also when two objects are close together and the user is far away. The computer cannot determine which object is in front of the other, and the result is nondeterministic. This results in color flashes: one moment the first object is in front and a green color is visible, the next moment the other object is in front and a red color is visible. (This depth-buffer artifact is also known as z-fighting.)

History of Virtual Reality

When someone enters Virtual Reality, he leaves the computer behind. No longer is the computer screen a window through which the world is watched; now the user is completely inside the computer world. The user can directly interact with the elements of this world, can move easily through it and change it. To describe this phenomenon, the term Virtual Reality is used [pimentel 93].

The history of Virtual Reality [pimentel 93] is older than most people think. As early as 1966, Ivan Sutherland built an HMD which was connected to a computer. All it showed was a simple wireframe cube at which the user could look through the HMD. This HMD was known as the Sword of Damocles because it hung from the ceiling by bars, which were used to track the movement of the head and to support the enormous weight of the HMD. The HMD used small CRTs to display monoscopic pictures.

In 1970, Sutherland further developed the HMD hardware at the University of Utah. The HMD was no longer monoscopic but displayed stereoscopic images instead, and gyroscopes on the HMD made it feel more stable and less heavy. Besides the HMD, many improvements were made to the computer systems.

Around the same time, Myron Krueger developed VIDEOPLACE, a form of Projected Reality. In VIDEOPLACE, Krueger placed a big screen in front of the user, on which a shadow of the user was displayed. The user could then fingerpaint in the sky. It was also possible to display multiple people on the same screen (perhaps the first form of Computer Supported Collaborative Work, CSCW [ishii 92]), and to introduce the outline of a little animal, CRITTER, into this environment. The CRITTER was used to allow the user to interact with the computer and his environment.

Around this time, Boeing was experimenting with Augmented Reality. The idea was to help a mechanic working on aircraft engines with a sort of X-ray vision and references: he could see inside the engine and the computer would point out certain parts. This technique is still used to help mechanics repair complicated machinery.

The military quickly saw the advantages of Virtual Reality and developed it further. In 1982, Thomas Furness III used small 1-inch CRTs to build an HMD with a very high resolution of 2,000 scanlines (almost four times the resolution of normal TV and twice that of most X window terminals). Using the helmet, the pilot saw a symbolic representation of the world. The military kept their Virtual Reality technology secret for a long time.

In the beginning of the 1980s, the ideas of both Furness and Sutherland were put together at NASA Ames by McGreevy. He used Liquid Crystal Displays, LCDs, to build an HMD, and a tracker from Polhemus to track the movement of the head. This was the first HMD built with cheap technology (it cost less than $2,000). Until then, Virtual Reality had been costly; McGreevy showed that it was possible to build a Virtual Reality setup from cheap equipment. This was the breakthrough for Virtual Reality, since many more scientists could now afford it.

After this, Virtual Reality took off. More and more people saw its possibilities and started to do research in it. In 1983, Zimmerman teamed up with Lanier to form VPL, one of the first companies to build equipment for Virtual Reality. One of the first things they built was the DataGlove(TM).

Subsequently, more and more small companies started to build equipment for Virtual Reality. Today it is possible to buy everything from a single HMD, dataglove or tracker up to complete systems consisting of a computer, HMD, dataglove and tracker.

As with most new techniques, it was very profitable in the beginning to sell just the hardware. After a couple of years, people started to build libraries which could be used to create applications. One of the best known and most widely used libraries today is the WorldToolKit from Sense8.

Up until then, most of the effort had been put into creating hardware for VR. Some applications had been built, but they were mainly used to test the hardware. The availability of VR toolkits made it possible for other researchers to use VR for their specific tasks, [brijs 92] and [ribarsky 94]. Applications were now built to use VR for a specific task, not to test the VR hardware. Following are some examples of applications built in VR.

In their paper, Bajura et al. describe the use of Virtual Reality to look at ultrasound imagery. The ultrasound imagery is projected in an HMD so the doctor is able to look at the inside of the patient; the idea is that this helps the doctor get a better overview [bajura 92].

At Chapel Hill, research is done using a Scanning Tunneling Microscope and force feedback. The force feedback lets the user feel the images made with the Scanning Tunneling Microscope. The user is also able to shoot at the surface with a laser and see the change immediately [taylor 93].

At different locations, Augmented Reality has been tested to help with the repair of complex equipment. While looking at the actual object, the computer gives clues about the different parts and the inside of the object [feiner 93].

None of these research projects has looked at the human being in Virtual Reality; most of the experiments were used either to test a certain piece of hardware or to test the use of Virtual Reality in general. By setting up an experiment with psychologists, we hope to gain more insight into the functioning of people in Virtual Reality.

Virtual Environments group

At the GVU center, there is a special group working on Virtual Environments. This multi-disciplinary group is led by Dr. Larry F. Hodges and includes people specializing in audio and visual aspects. Because of the many different projects on which this group is working, there is a lot of input from other faculties. At the time this report was written, there was contact with people from architecture, civil engineering and robotics, and with the medical school of Emory University. Other groups are also interested in working with the Virtual Environments group, but these contacts have not yet resulted in any projects.

Most of the people in the Virtual Environments group use SVE to create the virtual worlds for their projects. SVE is a toolkit built by the group to make it easier to build Virtual Environments. This library is explained in more detail in the next section.

Development of Virtual Environments is done on SGIs. SGI builds powerful graphics computers which are widely used in the computer graphics community. At the GVU center, there are SGI Indigos and two SGI Reality Engines, all of which are used to generate virtual worlds. The two Reality Engines are especially popular because of their graphics power.

The HMD used is the Flight Helmet from Virtual Research. It uses two LCD screens with a resolution of 320 by 200 color triads. This resolution is so low, about a quarter of normal TV resolution, that people wearing the helmet are considered legally blind. At the time of writing, a second helmet with a much better resolution was in use; it was, however, not available in the beginning and was not used in this research.

The tracker system used with the Flight Helmet is a "flock of birds". The "flock of birds" transmits a magnetic pulse which is picked up by the receivers, for instance on the HMD or on the CyberGlove(TM). The "flock of birds" is connected to an SGI Indigo, which polls it for information. When an application runs on a different machine, for instance the Reality Engine, the information from the "flock of birds" is sent over the network.
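
In such a setup the Indigo effectively acts as a tracker server: it polls the "flock of birds" and forwards each record to the rendering machine. The following is a rough sketch of what that inner loop could look like; all names are invented, and the real GVU setup may well differ:

    /* Rough sketch of a tracker-server loop; all names are invented.
       read_bird_record() would read one position/orientation record
       from the tracker's serial line; send_to_renderer() would forward
       it over the network to the machine rendering the world. */
    typedef struct { float pos[3]; float orient[3]; } BirdRecord;

    extern int  read_bird_record(BirdRecord *out);        /* hypothetical */
    extern void send_to_renderer(const BirdRecord *rec);  /* hypothetical */

    void serve_tracker_data(void)
    {
        BirdRecord rec;
        for (;;) {
            if (read_bird_record(&rec))    /* poll the flock of birds  */
                send_to_renderer(&rec);    /* forward over the network */
        }
    }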

The dataglove used is a CyberGlove(TM) from Virtual Technologies, Inc. Each joint has a metal sensor which measures the bend of that particular joint; a total of 18 joints can be measured, see Figure 2.4.

Figure 2.4: All the measurement points of the glove

Simple Virtual Environment library

SVE, which is used at GVU, is a library that makes it easy for the user to develop a Virtual Environment. It hides the complicated tasks of rendering the objects and querying the different input devices for information. Like X11, it is completely event driven: the user specifies which events he is interested in and which routines will handle those events, a callback mechanism. The library then enters a loop in which it continuously asks the input devices for information and renders the world accordingly. Whenever an event occurs in which the program is interested, the library calls the routine associated with that specific event (see Figure 2.5).

Figure 2.5: Event Model

More information about the SVE library can be found in the user's manual [verlinden 93].
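
To give an impression of the programming style this implies, the fragment below sketches the general shape of such an event-driven program. The names are invented for illustration and are not the real SVE API; the actual calls are documented in the manual:

    /* Sketch of an event-driven VR program in the style described
       above; ve_register_callback and ve_main_loop are invented names,
       NOT the real SVE API. */
    typedef enum { EV_HAND_GESTURE, EV_OBJECT_TOUCHED } EventType;

    extern void ve_register_callback(EventType t, void (*handler)(void *));
    extern void ve_main_loop(void);  /* polls devices, renders, dispatches */

    static void on_grab(void *event_data)
    {
        /* called by the library whenever a grab gesture is detected */
    }

    int main(void)
    {
        ve_register_callback(EV_HAND_GESTURE, on_grab);
        ve_main_loop();  /* never returns: poll, render, dispatch */
        return 0;
    }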

