Virtual Reality

By Olin Lathrop, Updated April 1999.

What is Virtual Reality?

Virtual Reality is a bad name for a good idea. Throughout the relatively short history of computer graphics, engineers have striven to develop more realistic, responsive, and immersive ways for humans to interact with computers. What is often called virtual reality (or simply VR) is the latest step in that progression.

Some people use the name Virtual Worlds in an attempt to be more accurate. Unfortunately, it has not yet caught on in a major way. You are still much more likely to hear the term Virtual Reality, so that's what I'll use in this introduction to avoid confusing you with non-standard terms.

While there is no one definition of VR everyone will agree on, there is at least one common thread. VR goes beyond the flat monitor that you simply look at, and tries to immerse you in a three dimensional visual world. The things you see appear to be in the room with you, instead of stuck on a flat area like a monitor screen. As you might imagine, there are a number of techniques for achieving this, each with its own tradeoffs between degree of immersion, senses involved beyond sight, computational requirements, physical constraints, cost, and others.

Some VR Systems

We'll start simple and build step by step to fancier systems. This will help you understand how we got to where we are, and better appreciate the wide range of solutions with their various tradeoffs.

Stereo Viewing
An important aspect of VR is that the things you look at are presented in three dimensions. Humans see the world around them in 3D using many different techniques. Probably the most powerful technique for nearby objects (within a few tens of meters) is called stereoscopic vision.

In stereoscopic 3D perception, we use the fact that our two eyes are set some distance apart from each other. When we look at a nearby object, each eye sees it from a slightly different angle. Knowing this difference in angle, the distance between the eyes, and a bit of trigonometry, we can compute the distance to the object. Fortunately this is done for us subconsciously by an extremely sophisticated image processor in our brain. We simply perceive the end result as objects at specific locations in the space around us.
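If you want to see the trigonometry, here is a rough sketch (in Python) of the symmetric case: the eyes are separated by some baseline, and the two lines of sight converge on the object at some vergence angle. This is only the geometry, of course, not how the brain actually computes it; the numbers are illustrative.

    import math

    def distance_from_vergence(baseline_m, vergence_deg):
        # Symmetric case: each eye rotates inward by half the vergence angle,
        # forming a right triangle: tan(vergence/2) = (baseline/2) / distance.
        half_angle = math.radians(vergence_deg) / 2.0
        return (baseline_m / 2.0) / math.tan(half_angle)

    # Eyes about 6.5 cm apart, lines of sight converging at 2 degrees:
    print(distance_from_vergence(0.065, 2.0))   # about 1.86 meters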

[Image: stereo pair; the right-eye view is on the left, the left-eye view on the right]
Figure 1 - A Stereo Pair of Images
Cross your eyes so that the two images overlap as one image in the center. This may be easier if you move back from the page a bit. After a while, your eyes will lock and re-focus on the center image. When that happens, you will see the 3D layout of the spheres pop out at you. Don't feel bad if you can't see it. I've found that roughly a third of people can see it within a minute, another third can see it with practice, and the remaining third seem to have a hard time.

Figure 1 shows stereo viewing in action. When you cross your eyes as directed, your right eye sees the left image, and your left eye sees the right image. For those of you that can see it, the effect is quite pronounced, despite the relatively minor differences between the two images.

So, one way to make a 3D computer graphics display is to render two images from slightly different eye points, then present them separately to each eye. Once again there are several ways of achieving this.
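In practice that just means two cameras. Take the viewer's head position and offset it half the eye separation to each side along the head's "right" direction, then render the scene once from each point. A minimal sketch; the render call at the bottom is a stand-in for whatever renderer you actually have, not a real API:

    import numpy as np

    IPD = 0.065   # interpupillary distance in meters, a typical adult value

    def eye_positions(head_pos, right_vec, ipd=IPD):
        # Offset the head position half the eye separation along the
        # head's local "right" direction to get the two camera positions.
        right_vec = right_vec / np.linalg.norm(right_vec)
        left_eye  = head_pos - right_vec * (ipd / 2.0)
        right_eye = head_pos + right_vec * (ipd / 2.0)
        return left_eye, right_eye

    left_eye, right_eye = eye_positions(np.array([0.0, 1.6, 0.0]),
                                        np.array([1.0, 0.0, 0.0]))
    # left_image  = render(scene, camera_at=left_eye)    # hypothetical renderer
    # right_image = render(scene, camera_at=right_eye)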

Shutter Glasses:
Probably the simplest way of displaying stereo computer images is by using the existing monitor. Suppose the display alternates rapidly between the left and right eye images. We could then make sure each eye sees only the image intended for it by opening a shutter in front of that eye while its image is displayed. The shutters would have to be synchronized to the display.

This is exactly what shutter glasses are. They typically use electronic shutters made with liquid crystals. Another variation places the shutter over the monitor screen. Instead of blocking or unblocking the light, this shutter changes the light polarization between the left and right eye images. You then wear passive polarizing glasses, where the polarizer for each eye only lets through the image that was polarized for that eye.
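The bookkeeping behind frame-sequential stereo is simple: on every vertical refresh of the monitor, draw the other eye's image and tell the glasses which shutter to open. Here's a sketch of the loop; the three functions at the top are placeholders for whatever the real display driver and glasses emitter provide:

    def render_eye(which): pass     # placeholder: draw the scene from one eye point
    def wait_for_vsync(): pass      # placeholder: block until the vertical refresh
    def open_shutter(which): pass   # placeholder: emitter opens one eye's shutter

    eye = "left"
    for _ in range(120):            # two seconds at a 60 Hz refresh rate
        render_eye(eye)
        wait_for_vsync()            # swap exactly on the refresh boundary
        open_shutter(eye)           # the other eye's shutter closes
        eye = "right" if eye == "left" else "left"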

While this form of stereo viewing can provide good 3D perception and is suitable for many tasks, I don't personally consider it VR yet. The 3D scene is still "stuck" in the monitor. You are still looking at a picture in a box, instead of being in the scene with the objects you are viewing.

Figure 2 - Shutter Glasses and Controller
CrystalEyes product by StereoGraphics. This image was swiped from their web page.

Head Mounted Display:
Another way to present a separate image to each eye is to use a separate monitor for each eye. This can be done by mounting small monitors in some sort of head gear. With the right optics, the monitors can appear large and at a comfortable viewing distance. This setup is usually referred to as a head mounted display, or HMD for short.

I don't think of this by itself as VR yet either, although we're getting closer. You can have a reasonable field of view with great latitude in the placement of 3D objects, but the objects appear to move with your head. That's certainly not what happens when you turn your head in Real Reality.

Figure 3 - Head Mounted Display in Use
The head mounted display is a Virtual Research VR4000. This image was swiped from their web page.

Head Tracking
What if the computer could sense the position and orientation of your head in real time? Assuming we have a sufficiently fast computer and a head mounted display (that's where the position/orientation sensor is hidden), we could re-render the image for each eye in real time also, taking into account the exact eye position. Objects could then be defined in a fixed space. As you moved your head around, the eye images would update to present the illusion of a 3D object at a fixed location in the room, just like real objects. Of course, the objects themselves can be moving too. But the important thing is that they are moving within some fixed frame that you can also move around in.
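The core loop, then, is: read the tracker, rebuild each eye's camera from the reported head pose, and redraw. Because the objects are defined in the fixed room frame and only the camera moves, they appear to stay put. A sketch of the idea; read_head_pose is a placeholder for the real tracker's API, and the print at the bottom stands in for actually rendering:

    import numpy as np

    def read_head_pose():
        # Placeholder for the tracker's API: returns head position and
        # a 3x3 rotation matrix. Here, a fixed pose for illustration.
        return np.array([0.0, 1.6, 0.0]), np.eye(3)

    def view_matrix(eye_pos, rot):
        # World-to-eye transform: apply the inverse head rotation, then
        # translate so the eye position maps to the origin.
        m = np.eye(4)
        m[:3, :3] = rot.T                 # inverse of an orthonormal rotation
        m[:3, 3]  = -rot.T @ eye_pos
        return m

    pos, rot = read_head_pose()           # in a real system: once per frame
    right = rot[:, 0]                     # the head's local "right" axis
    for eye, sign in (("left", -1.0), ("right", +1.0)):
        eye_pos = pos + sign * 0.0325 * right    # half of a 6.5 cm eye separation
        print(eye, view_matrix(eye_pos, rot))    # real system: render with this matrix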

Now we've reached the minimum capabilities I'm willing to call VR.

Figure 4 - Position and Orientation Sensors
UltraTrak product by Polhemus. The position and orientation of each of the sensors on the table are reported to the computer in real time. The data is relative to the large ball on top of the unit, which contains three orthogonal magnetic coils. Individual sensors can be embedded in a head mounted display, fixed to various body parts, or mounted on other objects that move around. This image was swiped from the Polhemus web page.

Hand Tracking
But wait, there's more. We can use more motion sensors to track the position and orientation of other objects, like your fingers. Just as a mouse or joystick can be used to control a program, so can your finger actions. This might take the form of pushing virtual menu buttons, or grabbing an object and moving it around with your hand. The possible interactions are virtually boundless, limited mostly by the software designer's imagination.

Hand and finger tracking is typically achieved by wearing a special glove that has a position sensor on it and can sense the angles of your finger joints. This can be taken a step further by wearing a whole suit with embedded joint angle sensors.
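What the computer sees from such a glove is just a hand position from the motion sensor plus one flexion angle per joint. A toy sketch of how software might use that data to decide the wearer has grabbed something; the thresholds and field names here are made up for illustration:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GloveSample:
        palm_pos: np.ndarray       # palm position from the motion sensor, meters
        finger_curl: list          # one flexion angle per finger, in degrees

    def is_grabbing(sample, obj_pos, curl_threshold=60.0, reach=0.08):
        # Crude "grab" test: every finger curled past a threshold,
        # and the palm within reach (8 cm here) of the object.
        curled = all(a > curl_threshold for a in sample.finger_curl)
        near = np.linalg.norm(sample.palm_pos - obj_pos) < reach
        return curled and near

    s = GloveSample(np.array([0.10, 1.20, 0.30]), [72, 80, 75, 68, 40])
    print(is_grabbing(s, np.array([0.12, 1.18, 0.33])))   # False: one finger not curled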

Figure 5 - Glove that senses joint angles
This is the CyberGlove by Virtual Technologies. It reports various joint angles back to the computer. This image was swiped from a Virtual Technologies web page.

Force Feedback
So far we can be in the same space with the objects we're viewing and interact with them through hand motions, but we can't feel them. That's where force feedback comes in. This is also referred to as haptic feedback. Suppose the special glove (or body suit) you were wearing could not only sense joint angles, but also had actuators that could push back at you. With some clever software and fast computers, the actuators could present the illusion of hard objects at particular locations. You can now not only see an object in 3D, walk around it, and control it, but also bump into it.
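The usual trick for making a virtual surface feel solid is a penalty force: whenever the tracked fingertip penetrates the surface, drive the actuator with a restoring force proportional to the penetration depth, like a stiff spring. A sketch of that model, with an illustrative stiffness value:

    def haptic_force(penetration_m, stiffness_n_per_m=2000.0):
        # Penalty ("virtual spring") model: no force outside the object,
        # a push-back force proportional to penetration depth inside it.
        # This loop must run much faster than the graphics update
        # (haptics rates are typically around 1000 Hz) or the surface
        # feels mushy instead of hard.
        if penetration_m <= 0.0:
            return 0.0                  # not touching: actuator idle
        return stiffness_n_per_m * penetration_m

    print(haptic_force(0.005))          # 5 mm into the surface -> 10 newtons back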

Note that force feedback is currently limited to "pushing back" to simulate the existence of an object. It does not provide other parts of what we call tactile feel, like texture, temperature, etc.


Figure 6 - Haptic Feedback Glove
No, this isn't some medieval torture device. It's the CyberGrasp product by Virtual Technologies. The cables and pulleys on the outside of the glove can be used to "push back" at the operator under computer control. This can be used, among other things, to allow the wearer to feel the presence of objects in the virtual world.

The CAVE
Instead of using a head mounted display, imagine a room where the output of computer displays is projected onto the walls. The projected images are in stereo, rapidly alternating between the two eye images. You stand somewhere near the middle and wear shutter glasses for the 3D effect.

The CAVE concept was first developed in 1991 at the Electronic Visualization Laboratory of the University of Illinois at Chicago. Several other CAVE systems have since been set up at other sites.
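One detail worth noting: unlike a head mounted display, the walls of a CAVE don't move with your head, so each wall needs an off-axis (asymmetric) perspective projection computed from the tracked head position. Here's a simplified sketch for a single wall perpendicular to the Z axis; a real CAVE does this for every wall, in each wall's own frame:

    def wall_frustum(head, wall_x, wall_y, wall_z, near):
        # Asymmetric frustum edges (left, right, bottom, top at the near
        # plane) for a wall parallel to the XY plane at depth wall_z.
        # wall_x and wall_y are (low, high) extents; head is (x, y, z)
        # from the tracker, in the same coordinate frame as the wall.
        hx, hy, hz = head
        scale = near / (hz - wall_z)     # project wall edges onto the near plane
        return ((wall_x[0] - hx) * scale, (wall_x[1] - hx) * scale,
                (wall_y[0] - hy) * scale, (wall_y[1] - hy) * scale)

    # A 3 m square wall 1.5 m in front of a head at the room's center:
    print(wall_frustum((0.0, 1.5, 1.5), (-1.5, 1.5), (0.0, 3.0), 0.0, 0.1))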

Applications
I'm just going to briefly mention a few areas where VR technology is used.

Entertainment
This is definitely the biggest market, and the main force driving down prices on VR hardware. You can be in a computer game with computer generated players and/or other real players.

Figure 7 - Virtual Reality Gaming
An Ascension Technology SpacePad motion tracker is being used in conjunction with a head mounted display and hand buttons to implement a gaming station. This image was swiped from the Ascension Technology web page.

Augmented Reality
Imagine a VR head mounted display as we've discussed, except that the display doesn't block out your view of the real world; the computer image is superimposed on it. Imagine walking around a building and "seeing" the wiring, plumbing, and structure inside the walls. Or seeing the tumor inside a patient's head as you hack away at it.

Training
VR is already being used to teach people how to use expensive equipment, or when the cost of a mistake in Real Reality is very high. For example, VR is becoming more common in aircraft simulators used as part of an overall program to train pilots. The benefits are also substantial for military applications, for obvious reasons.

Remote Robotics
This is another "real" application that is gaining much attention. Suppose you had a robot with arms and hands modeled after those of humans. It could have two video cameras where we have eyes. You could wear a head mounted display and see what the robot sees in real time. If your head, arm, and hand motions are sensed and replicated in the robot, then for many applications you could be where the robot is without really being there.

This could be useful and worth all the trouble in situations where you can't physically go, or you wouldn't be able to survive. Examples might be in hostile environments like high radiation areas in nuclear power plants, deep undersea, or in outer space. The robot also doesn't need to model a human exactly. It could be made larger and stronger to perform more physical labor than you could, or it might be smaller to fit into a pipe you couldn't. Remote robotics could also be a way to project a special expertise to a remote site quicker than bringing a person there. Some law enforcement and military applications also come to mind.

Distributed collaboration
VR is being employed to allow geographically distributed people to do more together than simply hear and see each other as allowed by telephone or videoconferencing.

For example, the military is using VR to create virtual battles. Each soldier participates from his own point of view in an overall simulation that may involve thousands of individuals. All participants need not be in physical proximity, only connected to a network.

This technology is still in its infancy, but evolving rapidly. I expect commercial applications of distributed collaboration to slowly gain momentum over the next several years.

Visualization
Scientists at NASA/Ames and other places have been experimenting with VR as a visualization research tool. Imagine being able to walk around a new aircraft design as it sits in a simulated Mach 5 wind tunnel. VR can be used to "see" things humans normally can't, like air flow, temperature, pressure, strain, etc.

Figure 8 - Simulated picture of VR used in visualization
This picture tries to show us what the virtual world looks like to the two engineers. Some of this is wishful thinking, but it still does a good job of illustrating the concepts. Note that no finger position sensors or force feedback devices are apparent, even though the bottom engineer is selecting virtual menu buttons, and the other engineer seems to be resting his hand on the model. Also, anyone in the room not wearing a display would not be able to see the model at all. This image was swiped from the Virtual Research home page.

Problems
The current state of VR is far from everything we could imagine or want. A few of the open issues are:

Cost
This stuff is just too expensive for everyone to have one, and it's likely to stay that way for quite a while.

What's it Good For?
I listed some application areas above, but note that none of them solve common everyday problems. While VR certainly has its application niches - and the number is steadily growing - it's hard to imagine how it can help the average secretary type a letter on a word processor.

Display Resolution
Head mounted displays need to be small and light, or you get a sore neck real fast. Unfortunately, the display resolution is therefore limited. Most displays are only about 640 pixels across, which is a tiny fraction of what a normal human can see over the same angle of view.
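To put a number on that: a person with normal vision resolves detail down to about one arcminute (1/60 of a degree) at the center of view. A quick back-of-the-envelope check against a 640 pixel display spread over an assumed 60 degree field of view:

    ACUITY_DEG = 1.0 / 60.0         # ~1 arcminute: normal foveal acuity

    fov_deg = 60.0                  # an assumed HMD horizontal field of view
    needed  = fov_deg / ACUITY_DEG  # pixels to match the eye across that FOV
    have    = 640.0

    print(needed)                   # 3600 pixels
    print(have / needed)            # ~0.18: less than a fifth of what's needed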

Update Speed
Most VR displays are updated at 30 Hz (30 times per second). This requires a large amount of computation just to maintain what looks and feels like "nothing is happening". The amount of computation required also depends on the scene complexity. VR is therefore limited to relatively "simple" scenes that can be rendered in 1/30 second or faster. This currently precludes any of the rendering methods that provide shadows, reflections, transparency, and other realistic lighting effects. As a result, VR scenes look very "computer-ish".

The standard 30 Hz update rate comes from existing systems intended for displaying moving images on a fixed monitor or other screen. With VR, you also have to take into account how fast the display needs to be updated as the user's head turns to preserve the illusion of a steady object. Apparently, that requires considerably more than 30 Hz for the full effect.
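A little arithmetic suggests why. Suppose you turn your head at a moderate 100 degrees per second (an assumed figure); at 30 updates per second the image lags by over 3 degrees between frames, which is enormous next to the one-arcminute detail the eye can resolve:

    turn_rate_deg_s = 100.0         # an assumed moderate head turn
    update_hz = 30.0

    error_deg = turn_rate_deg_s / update_hz
    print(error_deg)                # ~3.3 degrees of lag between updates
    print(error_deg * 60.0)         # ~200 arcminutes, vs ~1 arcminute acuity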
