Virtual and augmented reality

This page is under construction; please contact Prof. MacIntyre if you wish to select this as one of your sub-areas.

FACULTY

SCOPE

This field covers natural interaction with virtual environments and models, or with digital annotations overlaid on real environments.

SUGGESTED READINGS

  1. Woodrow Barfield and Tom A. Furness. Virtual Environments and Advanced Interface Design. Oxford University Press, 1995.
  2. Kenneth Meyer, Hugh L. Applewhite, and Frank A. Biocca. A survey of position trackers. PRESENCE: Teleoperators and Virtual Environments, 1(2):173-200, 1992.
  3. Benjamin A. Watson and Larry F. Hodges. Using texture maps to correct for optical distortion in head-mounted displays. In Virtual Reality Annual International Symposium (VRAIS), pages 172-178, March 1995.
  4. Daniel P. Mapes and J. Michael Moshell. A two-handed interface for object manipulation in virtual environments. PRESENCE: Teleoperators and Virtual Environments, 4(4):403-416, 1995.
  5. John C. Goble, Ken Hinckley, Randy Pausch, John W. Snell, and Neal F. Kassell. Two-handed spatial interface tools for neurosurgical planning. IEEE Computer, 28(7):20-26, July 1995.
  6. C. R. Wren, A. Azarbayejani, T. Darrell and A. Pentland. Pfinder: Real-Time Tracking of the Human Body. Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, 1996.
  7. SIGGRAPH papers on frameless rendering (short, 2 pages; see reading 12 below) and on hybrid tracking from '98.
  8. Professor Hodges will place a notebook of VR readings in the GVU Library and in the FCLab, containing the papers and book chapters read in the VR course.
  9. Ronald T. Azuma. A survey of augmented reality. PRESENCE: Teleoperators and Virtual Environments, 6(4):355-386, 1997.
  10. Elizabeth D. Mynatt, Maribeth Back, Roy Want, Michael Baer, and Jason B. Ellis. Designing Audio Aura. In Proceedings of ACM CHI '98, pages 566-573, 1998.
  11. Steven Feiner, Blair MacIntyre, and Doree Seligmann. Knowledge-based augmented reality. Communications of the ACM, 36(7), July 1993.
  12. Gary Bishop, Henry Fuchs, Leonard McMillan, and Ellen J. Scher Zagier. Frameless rendering: double buffering considered harmful. In Proceedings of SIGGRAPH 94, pages 175-176, 1994.

SUGGESTED COURSES

SAMPLE QUESTIONS

  1. What is the difference between VR and Augmented Reality?
  2. What are specific ways to quantitatively measure the degree of presence?
  3. What are some of the effects that may result from latency and lag (from the user's perspective)?
  4. From the HCI viewpoint, describe the formal process of designing usability studies for a VR system.
  5. As a virtual reality expert, you have been asked to develop a virtual environment that will be used to determine the effects of site lighting, scaffolding height, and scaffolding width on the number of serious falling accidents by construction workers. Analyze and describe the virtual environment that you will build from each of the following four perspectives.
  6. Commercial head-mounted displays have been developed based on both CRT and liquid crystal display technology. Compare the two technologies with respect to their relative advantages and disadvantages as the display technology for an HMD. Your answer should address screen resolution (define what you mean by resolution), pixel pitch, weight, power consumption, image brightness, cost, and any other factors you think might be important.
  7. Four hardware approaches to building virtual environments are head-mounted displays, the CAVE, the BOOM, and the Immersive Workbench. Describe the basic components of each approach and compare them based on whatever criteria you think are appropriate.
  8. Describe and draw a sketch of a simple magnifier head-mounted display design. Explain its relation to the thin lens equation and collimated displays. (A worked form of the thin lens relation appears after this list.)
  9. Describe and compare LEEP optics with Fresnel lens optics as used in HMDs.
  10. Discuss the basic operating principles and characteristics of electromagnetic trackers, mechanical linkage trackers, and ultrasonic trackers.
  11. With respect to immersive 3D interfaces, answer the following questions.
  12. Present an argument for or against the following statement: "Interaction in virtual environments should be as close as possible to interaction in the physical world."
  13. Do you think that VEs will eventually have a "standard" interface, analogous to the desktop GUIs on today's PCs? Why or why not?
  14. List some advantages and disadvantages for each of the following methods of interaction evaluation: usability study, single-variable experimentation, testbed evaluation.
  15. Define Presence. Discuss techniques that one could use to measure Presence in a Virtual Environment. List some open questions about the definition and use of Presence.
  16. With respect to time-multiplexed stereoscopic display: give a short description of how time-multiplexed stereoscopic display systems work. Explain what is meant by the accommodation/convergence conflict in a stereoscopic display. What are some of the factors that affect "ghosting" in a time-multiplexed stereoscopic display system?
  17. Give a complete description of a hierarchical tree structure that could be used to represent the relative locations and orientations of all the components of a virtual environment, including transmitter, receivers, user, and geometric objects in the world. From your example, describe how you would compute the transformations that represent an object's position if the object were to begin on a table in the scene, be picked up by the user's hand, then be moved to and deposited on a second table in the scene. (A minimal code sketch of such a hierarchy appears after this list.)
  18. What are the issues involved in 3D interaction in virtual environments? Are the requirements task- or device-dependent? Give specific examples.
  19. How can a photographic image be stitched or blended into a synthetic environment?
  20. Spectral fitting is one way to use frequency information to blend two types of images in the same scene (for example, to blend a video frame or photo image with a synthetic scene). Describe how this is done.
  21. Describe two approaches (video-mixing and optical-mixing) to blending graphics with the user's view of the world, and the pros and cons of each.
  22. Describe the possible sources of lag in an AR system that can prevent graphics from being temporally registered with the world, and suggest ways of overcoming each. Include a description of each source, the typical delays associated with that source, and how your suggestion for overcoming this delay would work. Cover: sensor latency (the lag between acquiring a signal and outputting its analysis), application latency, and rendering latency (both rendering time and frame duration). (A toy latency budget appears after this list.)
  23. Explain what a Kalman filter is (intuitively), how it works, and why it is useful. (A minimal one-dimensional sketch appears after this list.)
  24. What are the effects (both pros and cons) of having the real world be visible in an Augmented Reality system?
  25. What are the two main technological approaches to superimposing graphics on a user's view of the world? What are the pros and cons of each?
  26. What are some of the effects that may result from latency and lag (from the user's perspective)? Discuss the differences between Augmented and Virtual Reality with respect to tolerance of lag.
  27. Possible addition to question 7: how would head-mounted displays, the CAVE, the BOOM, and the Immersive Workbench each be used for Augmented Reality? (Basic answer: see-through/video-mixed HMDs and BOOM, and video-mixed CAVE and IW.)
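
For question 8, a worked form of the thin lens relation and its collimated-display limit. This is textbook Gaussian optics, not a description of any particular HMD:

    \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
    % d_o: distance from the display screen to the lens, d_i: image distance,
    % f: focal length of the magnifier.
    % With the screen inside the focal length (d_o < f), d_i is negative, so the
    % user sees a virtual image at |d_i| = \frac{f\,d_o}{f - d_o} behind the lens.
    % As d_o \to f, |d_i| \to \infty: rays leave the lens collimated, which is
    % the collimated-display case named in the question.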
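
For question 17, a minimal Python sketch of such a transformation hierarchy. The node names, translation values, and grab/release steps are illustrative assumptions only; a real system would also hang tracker transmitter and receiver nodes off the root:

    import numpy as np

    def translation(x, y, z):
        # Build a 4x4 homogeneous translation matrix.
        m = np.eye(4)
        m[:3, 3] = [x, y, z]
        return m

    class Node:
        # One node of the hierarchy; 'local' is the pose relative to 'parent'.
        def __init__(self, name, local=None, parent=None):
            self.name = name
            self.local = np.eye(4) if local is None else local
            self.parent = parent

        def world(self):
            # Compose local transforms from the root down to this node.
            if self.parent is None:
                return self.local
            return self.parent.world() @ self.local

        def reparent(self, new_parent):
            # Keep the node's world pose fixed while changing its parent:
            # new_local = inverse(new_parent.world) * old world pose.
            self.local = np.linalg.inv(new_parent.world()) @ self.world()
            self.parent = new_parent

    root   = Node("world")
    table1 = Node("table1", translation(1.0, 0.0, 0.0), root)
    table2 = Node("table2", translation(5.0, 0.0, 0.0), root)
    hand   = Node("hand",   translation(2.0, 1.0, 0.0), root)
    cup    = Node("cup",    translation(0.0, 0.8, 0.0), table1)  # starts on table1

    cup.reparent(hand)                       # pick up: cup now moves with the hand
    hand.local = translation(4.0, 1.5, 0.0)  # the hand carries the cup across the scene
    cup.reparent(table2)                     # deposit: cup is again fixed to a table
    print(cup.world()[:3, 3])                # world position: x=3.0, y=1.3, z=0.0

The key move is in reparent: grabbing and releasing never change the object's world pose, only which frame its local transform is expressed in.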
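
For question 22, a toy motion-to-photon latency budget in Python. The stage names and millisecond figures are illustrative assumptions, not measurements of any real system:

    # Sum a hypothetical end-to-end latency budget for an AR pipeline.
    budget_ms = {
        "sensor latency (acquire + filter)": 10.0,
        "transmission to host":               4.0,
        "application / simulation step":     15.0,
        "rendering (one frame at 60 Hz)":    16.7,
        "display scan-out (avg half frame)":  8.3,
    }
    for stage, ms in budget_ms.items():
        print(f"{stage:36s} {ms:5.1f} ms")
    print(f"{'total motion-to-photon lag':36s} {sum(budget_ms.values()):5.1f} ms")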
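
For question 23, a minimal one-dimensional Kalman filter sketch in Python: a constant-position (random walk) model smoothing noisy tracker readings. The noise variances are illustrative assumptions:

    import random

    def kalman_1d(measurements, q=0.01, r=0.25):
        # q: process noise variance (how much the true state may wander per step)
        # r: measurement noise variance (how noisy each reading is)
        x, p = measurements[0], 1.0          # state estimate and its variance
        estimates = [x]
        for z in measurements[1:]:
            p += q                           # predict: uncertainty grows
            k = p / (p + r)                  # gain: how much to trust the reading
            x += k * (z - x)                 # update: blend prediction and reading
            p *= 1.0 - k                     # update: uncertainty shrinks
            estimates.append(x)
        return estimates

    random.seed(0)
    readings = [1.0 + random.gauss(0.0, 0.5) for _ in range(50)]  # noisy sensor
    print(kalman_1d(readings)[-1])           # converges toward the true value 1.0

The intuition: the gain k automatically balances the prediction against each new measurement according to their relative uncertainties, which is why Kalman filters are widely used for tracker smoothing and prediction.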