PhD CS – Graphics & Visualization Body of Knowledge



  • Bill Ribarsky
  • John Stasko
  • Chris Shaw
  • Greg Turk


Visualization deals with the automatic construction and display of visual interpretations of data.

Suggested Readings

  1. W Schroeder, K Martin, and W Lorensen. The Visualization Toolkit, 2nd Edition. Prentice Hall PTR, Upper Saddle River, NJ (1998).
  2. SK Card, J Mackinlay, and B Shneiderman. Readings in Information Visualization. Morgan Kaufmann, San Francisco (1998).
  3. C. Upson, T Faulhaber, D Kamins, D Laidlaw, D Schlegel, J Vroom, R Gurwitz, and A van Dam. The Application Visualization System: A Computational Environment for Scientific Visualization. IEEE Computer Graphics & Applications, pp. 30-42 (July, 1989).
  4. T Delmarcelle and L Hesselink. Visualizing Second-Order Tensor Fields with Hyperstreamlines. IEEE Computer Graphics & Applications, pp. 25-33 (July, 1993).
  5. M. Cox and D. Ellsworth. Application-controlled Demand Paging for Out-of-core Visualization. Proceedings of the IEEE Visualization Conference, pp. 235-244 (1997).
  6. S. Bryson and S. Johan. Time Management, Simultaneity and Time-critical Computation in Interactive Unsteady Visualization Environments. Proc. Visualization 1996, pp. 255-261 (1996).
  7. R.M. Kirby, H. Marmanis, and D.H. Laidlaw. Visualizing Multivalued Data from 2D Incompressible Flows Using Concepts from Painting. Proceedings IEEE Visualization '99, pp. 333-340.
  8. C.A.H. Baker, M.S.T. Carpendale, P. Prusinkiewicz, and M.G. Surette. GeneVis: Visualization Tools for Genetic Regulatory Network Dynamics. Proceedings IEEE Visualization '02, pp. 243-250.
  9. Nickolas Faust, William Ribarsky, T.Y. Jiang, and Tony Wasilewski. Real-Time Global Data Model for the Digital Earth. Proceedings of the International Conference on Discrete Global Grids (2000).
  10. H. Hoppe. Smooth View-Dependent Level-of-Detail Control and its Application to Terrain Rendering. Proceedings Visualization '98, 18-23 Oct 1998, pp. 35-42, 516.
  11. L. Hong, S. Muraki, A. Kaufman, D. Bartz, and T. He. Virtual Voyage: Interactive Navigation in the Human Colon. Proceedings of Computer Graphics and Interactive Techniques '97, pp. 27-34.
  12. S. Rusinkiewicz and M. Levoy. QSplat: A Multiresolution Point Rendering System for Large Meshes. Proceedings of Computer Graphics and Interactive Techniques 2000, pp. 343-352.
  13. S. Eick. Visual Discovery and Analysis. IEEE Transactions on Visualization and Computer Graphics, Volume 6, Issue 1, Jan-Mar 2000, pp. 44-58.
  14. W. de Leeuw and J. van Wijk. Enhanced Spot Noise for Vector Field Visualization. Proceedings IEEE Visualization '95, pp. 233-239 (1995).
  15. M. Zwicker, H. Pfister, J. van Baar, and M. Gross. EWA Volume Splatting. IEEE Transactions on Visualization and Computer Graphics, Volume 8, Issue 3, Jul-Sep 2002, pp. 223-238.


Optional Readings

  1. Edward R. Tufte. The Visual Display of Quantitative Information. Graphics Press, 1983.
  2. Peter R. Keller and Mary M. Keller. Visual Cues: Practical Data Visualization. IEEE Computer Society Press, 1993.


Suggested Courses

  • CS 6480 Computer Visualization Techniques
  • CS 7450 Information Visualization
  • CS 6780 Medical Image Processing


Sample Questions

  1. Describe in detail the Marching Cubes algorithm, and the topological errors that need to be accounted for relative to the original (1987) formulation.
  2. Describe in detail Levoy's volume rendering algorithm.
  3. Describe in detail the different approaches to volume visualization represented by surface rendering, volume rendering, and texture memory.
  4. Describe in detail how a synthetic object (i.e., a graphically generated structure) can be embedded in a discrete volume (i.e., an acquired 3D dataset).
  5. How can an octree data representation be used to guide and/or expedite the volume rendering process?
  6. Describe in detail a principal-component or eigenvalue approach to representing tensor fields.
  7. Describe how a large dataset that can include widely varying scales (from very large resolution data to very small resolution samples) can be efficiently visualized by varying the level of detail in the interactive visualization process. Describe in detail how the transition from one scale to another can be smoothed.
  8. Describe in detail a technique for finding isosurfaces in a volume of data that would be scalable with respect to size of the dataset.
  9. Suppose you were confronted with a time-dependent dataset that you wanted to depict with a particular technique (isosurfaces, volumetric, clusters, etc.). How would you modify the technique to make it efficiently handle time dependence?
  10. Consider a multidimensional, time-dependent collection of spatial data. Describe two separate methods to visualize this multivariate, temporally evolving dataset.
  11. Dataflow visualization architectures are quite widely used. These are based on the notion of directed acyclic graphs. Discuss the advantages and disadvantages of these architectures. In particular, what are their limitations for handling large data, and are they intrinsic?
  12. Discuss what factors affect performance in a visually-dominant virtual environment. Are there certain tasks affected more by some factors than others? Describe how you would set up an experiment to measure the effects of these factors and how you would analyze the results.
  13. Compare and contrast scientific visualization and information visualization. Are different approaches or techniques needed in the two fields, or do they use variations of the same methods?
  14. Visualization involves more than methods of representation for data. It is also concerned with the process of visual analysis. Describe some general approaches and specific examples for visual analysis.
  15. For data organization it is necessary to know the topology, geometry, and attributes of the data. Describe these and give examples of each. In particular, describe in detail the topological classifications of 3D data and show how these can be used in visual representations.
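The octree and scalability questions above (5 and 8) turn on one idea: store min/max scalar bounds per region so that regions whose range excludes the isovalue can be skipped without touching their cells. The following is a minimal sketch in Python/NumPy (an illustration written for this guide, not code from the readings) of a one-level min/max block grid, the flattened form of a min/max octree:

```python
import numpy as np

def cell_minmax(vol):
    """Per-cell min/max over the 8 corner samples of every voxel cell
    in a 3D scalar volume (cells = dual grid, one smaller per axis)."""
    n0, n1, n2 = vol.shape
    corners = [vol[i:n0 - 1 + i, j:n1 - 1 + j, k:n2 - 1 + k]
               for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    stacked = np.stack(corners)          # shape (8, n0-1, n1-1, n2-1)
    return stacked.min(axis=0), stacked.max(axis=0)

def active_cells(vol, iso, block=8):
    """Indices of cells that may contain the isosurface at value `iso`.
    A coarse block is skipped outright when `iso` falls outside the
    [min, max] range of all its cells -- the culling step an octree
    would apply recursively at every level."""
    cmin, cmax = cell_minmax(vol)
    hits = []
    n = cmin.shape
    for bi in range(0, n[0], block):
        for bj in range(0, n[1], block):
            for bk in range(0, n[2], block):
                sm = cmin[bi:bi + block, bj:bj + block, bk:bk + block]
                sM = cmax[bi:bi + block, bj:bj + block, bk:bk + block]
                if sm.min() <= iso <= sM.max():      # block may straddle iso
                    m = (sm <= iso) & (iso <= sM)    # per-cell test inside
                    hits.append(np.argwhere(m) + (bi, bj, bk))
    return np.concatenate(hits) if hits else np.empty((0, 3), dtype=int)
```

For a smooth field such as a distance function, only a thin shell of cells straddles the isovalue, so most blocks are rejected by a single min/max comparison; making the blocks a recursive hierarchy yields the classic min/max octree, with query cost proportional to the active-cell count rather than the volume size.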