Ph.D. CS - Human-Computer Interaction Body of Knowledge

User Interface Software and Technology
Introduction

In addition to understanding how to design and evaluate human-computer interfaces, you must also understand how to implement them effectively. Moreover, you must understand the structure of artifacts (window systems, toolkits, sensing systems, algorithms, and so forth) that enable programmers to build and maintain complex applications. As with any other area of computer software, there is a body of techniques, algorithms and approaches that have been built up over the years. This is especially true of the 2D WIMP (Windows, Icons, Menus, Pointers) interfaces found on desktop and laptop computers today, as well as the extension of these systems to touch-based interaction for mobile platforms. The goal of this section of the qualifier is to ensure that you are familiar with the body of knowledge related to user-interface software.

When looking at the reading list below, two things should strike you. First, many of the papers listed are quite old in "computer time" (over a decade, in some cases). This is because many of the techniques and approaches on which current toolkits are built were the subject of research a decade or more ago; many of the application frameworks for creating software on mobile devices, for instance, have their roots in techniques for event dispatch, damage management, and graphics composition first developed for 2D WIMP desktop window systems. Second, many of the remaining papers focus on topic areas that are still somewhat “exploratory” when compared to the toolkit architectures used for graphical interfaces. These “off the desktop” UIs include aspects of Ubiquitous Computing, Augmented Reality, Wearable Computing, Tangible Computing, and so forth. These areas have become a core topic of research in the User Interface Software community. However, how best to create these applications is still an active area of research, and so evolves rapidly.

General Resources

Surveys and detailed coverage of many user interface software and technology techniques are covered in CS 6456: Principles of User Interface Software. You should be familiar with the architectural concepts covered in this class as well as the readings assigned in it.

Many papers in the ACM SIGGRAPH/SIGCHI conference series on User Interface Software and Technology, also known as the UIST conference, include significant examples of both toolkits and systems. This is also true, to a lesser extent, of the other ACM- and IEEE-sponsored conferences, especially CHI (Human Factors in Computing Systems). Many other ACM/IEEE conferences also contain examples relevant to their particular domains, although such examples are rarer. Examples of such conferences include CSCW, ISWC (Wearable Computing), ISMAR (Augmented Reality), and so on. Students specializing in the User Interface Software area should be broadly familiar with recent readings from such conferences.

A good, introductory text on UI software is:

  • Olsen, Jr., Dan R., Developing User Interfaces. Morgan Kaufmann, 1998.

Keep in mind that this is a very introductory book; it is a quick read for people with UI programming experience, and will provide a good background on elementary 2D graphics and UI programming for those with little experience.

Historical and Broad Coverage Papers

Brad Myers has written a number of survey papers covering the history of user interface software. A good example is the following paper, which summarizes some important early milestones in UI software, and includes a discussion of why certain techniques failed to catch on, and what some promising techniques might be for the future:

  • Brad Myers, Scott E. Hudson, and Randy Pausch, Past, Present and Future of User Interface Software Tools. ACM Transactions on Computer Human Interaction. Vol. 7 No. 1, March 2000, pp. 3-28. Available as http://www.cs.cmu.edu/~amulet/papers/futureofhci.pdf.

Technically-oriented HCI research is different from more empirically-oriented research in the field, because the metrics used to determine whether a system is “good” or not may depend on technical or architectural evaluation, not user evaluation. Dan Olsen lays out the case for what makes good systems research in HCI in this paper:

  • Olsen, D. Evaluating User Interface Systems Research. Proceedings of UIST 2007.

Toolkit Issues and GUI Interaction

A good paper that describes one of the first toolkits to use composition of widgets (via container widgets and layout management) to form complex nested layouts:

  • Linton, M. A., Vlissides, John M., and Calder, Paul R., "Composing User Interfaces with InterViews", IEEE Computer, 22(2), Feb. 1989, pp. 8-22.
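
The core idea — building complex layouts by nesting widgets inside container widgets, with geometry assigned by a layout policy — can be sketched in a few lines. All class and method names below are illustrative, not the InterViews API:

```python
# Minimal sketch of widget composition via container widgets and
# layout management. Names here are hypothetical, not InterViews APIs.

class Widget:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x = self.y = 0  # position assigned by the parent's layout

class Label(Widget):
    def __init__(self, text):
        super().__init__(width=8 * len(text), height=20)
        self.text = text

class Column(Widget):
    """Container widget: stacks children vertically, sized to fit them."""
    def __init__(self, *children):
        self.children = list(children)
        super().__init__(width=max(c.width for c in children),
                         height=sum(c.height for c in children))

    def layout(self, x, y):
        # Assign geometry top-down; nested containers recurse.
        self.x, self.y = x, y
        cursor = y
        for child in self.children:
            if isinstance(child, Column):
                child.layout(x, cursor)
            else:
                child.x, child.y = x, cursor
            cursor += child.height

# A nested layout: a label above a sub-column of two labels.
root = Column(Label("File"), Column(Label("Open"), Label("Save")))
root.layout(0, 0)
print([(c.text if isinstance(c, Label) else "Column", c.y)
       for c in root.children])   # [('File', 0), ('Column', 20)]
```

The point of the decomposition is that each container knows only its own layout rule; arbitrarily deep nesting falls out of recursion, which is what makes composition scale to complex interfaces.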

There is growing interest in creating interfaces that are more lively and engaging. There are plenty of examples of such interfaces in recent UIST and CHI proceedings, and one of the earliest papers to discuss bringing traditional cell-based animation techniques into user interface toolkits is the influential work by Chang and Ungar:

  • Chang, B.W. and Ungar, D. (1993). "Animation: From Cartoons to the User Interface." in UIST'93: Symposium on User Interface Software and Technology. 1993. pp. 45-55.

Other work describes how to integrate animation into a user interface toolkit. The techniques in the paper below are described in the context of a particular toolkit (ARTKIT), but are easily adapted to any toolkit:

  • Hudson, S., Stasko, J., Animation support in a user interface toolkit: Flexible, robust and reusable abstractions. Proceedings of UIST '93, Nov. 1993, pp. 57-67.
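
The style of abstraction described in such work separates *where* an animated object travels (a trajectory) from *how* it spends time getting there (a pacing function). A minimal sketch of that decomposition, with names that are illustrative rather than taken from ARTKIT:

```python
# Sketch of a transition abstraction: trajectory x pacing x duration.
# All names are hypothetical, not the ARTKIT API.

def slow_in_slow_out(t):
    """Pacing function: remap uniform time 0..1 to eased progress 0..1."""
    return t * t * (3 - 2 * t)  # smoothstep: gentle start and stop

def linear_trajectory(start, end):
    """Trajectory: map eased progress to a point along a straight path."""
    def at(p):
        return (start[0] + p * (end[0] - start[0]),
                start[1] + p * (end[1] - start[1]))
    return at

class Transition:
    def __init__(self, trajectory, pacing, duration):
        self.trajectory, self.pacing, self.duration = trajectory, pacing, duration

    def position(self, elapsed):
        """Called on each redraw tick with wall-clock time elapsed."""
        t = min(max(elapsed / self.duration, 0.0), 1.0)
        return self.trajectory(self.pacing(t))

move = Transition(linear_trajectory((0, 0), (100, 50)), slow_in_slow_out, 1.0)
print(move.position(0.0))   # (0.0, 0.0)
print(move.position(1.0))   # (100.0, 50.0)
```

Because pacing and trajectory are independent, the same slow-in/slow-out pacing can be reused with an arc, a screen-edge path, or any other trajectory — which is the reusability argument the paper makes.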

One extremely powerful approach to flexible interface layout is constraints, which have been widely studied in research but have only lately begun to catch on commercially, for example in recent versions of iOS. A good paper on simple, one-way constraints (the form most commonly used in UI toolkits that support constraints) is:

  • Scott E. Hudson, "A System for Efficient and Flexible One-Way Constraint Evaluation in C++", GVU Tech report 93-15.
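
The general technique the paper analyzes — lazy, demand-driven evaluation of one-way constraints with out-of-date marking — can be sketched briefly. This is an illustration of the technique, not Hudson's C++ API:

```python
# Minimal sketch of lazy one-way constraint evaluation.
# An edit marks downstream cells out of date; values are recomputed
# only when someone actually asks for them.

class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs = formula, tuple(inputs)
        self._value, self.out_of_date = value, formula is not None
        self.dependents = []
        for cell in self.inputs:
            cell.dependents.append(self)

    def set(self, value):
        """Edit a cell: mark dependents out of date, recompute nothing."""
        self._value = value
        self._invalidate_dependents()

    def _invalidate_dependents(self):
        for d in self.dependents:
            if not d.out_of_date:        # stop at already-marked cells
                d.out_of_date = True
                d._invalidate_dependents()

    def get(self):
        """Demand-driven evaluation: recompute only when asked."""
        if self.out_of_date:
            self._value = self.formula(*(c.get() for c in self.inputs))
            self.out_of_date = False
        return self._value

# A window whose right edge is constrained to x + width:
x = Cell(10)
width = Cell(200)
right = Cell(formula=lambda a, b: a + b, inputs=(x, width))
print(right.get())   # 210
x.set(50)            # marks `right` out of date; no recomputation yet
print(right.get())   # 250, recomputed on demand
```

The "stop at already-marked cells" check is what keeps invalidation cheap: a burst of edits (say, during a drag) costs one cheap marking pass each, and the expensive recomputation happens once, at the next redraw.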

A common theme of the UI Software literature has been on allowing user interfaces to be created more easily. This is often approached at the architectural level (toolkit-related literature), but has also been approached at the end-user level. Landay’s SILK system, for instance, uses sketch recognition to allow users to create UI designs via sketching, which are then interpreted and rendered into a runnable UI:

  • Landay, J.A. and B.A. Myers, (2001) Sketching Interfaces: Toward More Human Interface Design. IEEE Computer. 34(3): p. 56-64.

Another theme has explored how to create frameworks for alternative styles of interaction in GUIs that go beyond common WIMP metaphors. These alternative interaction styles often drive new architectural requirements for toolkits:

  • Michel Beaudouin-Lafon (2000) Instrumental interaction: an interaction model for designing post-WIMP user interfaces, Proceedings of CHI'2000, pages 446-453.

Other systems have used toolkit frameworks to create entirely new styles of interfaces. Understanding the architectural underpinnings of toolkits is essential for such work. In one example, graphical applications are adapted at runtime to retrofit them to a purely auditory UI:

  • W. Keith Edwards and Elizabeth D. Mynatt. "An architecture for transforming graphical interfaces." In Proceedings of the ACM symposium on User interface software and technology, 1994, Pages 39 - 47.

More recent work has looked at reverse-engineering graphical applications at the pixel level in order to support custom extensions and other features:

  • Dixon, M. and Fogarty, J. (2010). Prefab: Implementing Advanced Behaviors Using Pixel-Based Reverse Engineering of Interface Structure. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2010), pp. 1525-1534.
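
At its core, pixel-based reverse engineering means locating a known widget's pixel pattern (a "prototype") inside a screenshot. Prefab's real matching is far more sophisticated — its prototypes are parameterized and handle resizable regions — but the basic operation can be illustrated as exact-match search over a pixel grid (all names here are our own, not Prefab's):

```python
# Toy sketch of prototype matching for pixel-based reverse
# engineering: find every exact occurrence of a small pixel
# pattern inside a larger screenshot grid.

def find_prototype(screen, proto):
    """Return (row, col) of the top-left corner of each exact match."""
    sh, sw = len(screen), len(screen[0])
    ph, pw = len(proto), len(proto[0])
    hits = []
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            if all(screen[r + dr][c:c + pw] == proto[dr] for dr in range(ph)):
                hits.append((r, c))
    return hits

# 0 = background, 1 = border, 2 = fill: a tiny 2x3 "button" prototype.
button = [[1, 1, 1],
          [1, 2, 1]]
screenshot = [[0, 0, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 1, 2, 1, 0],
              [0, 0, 0, 0, 0]]
print(find_prototype(screenshot, button))   # [(1, 1)]
```

Once a widget's location is recovered this way, an overlay window can be positioned on top of it to add behaviors the original application never exposed.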

Finally, Card et al.’s paper provides a discussion of the issues surrounding the design space of input devices:

  • Stuart K. Card, Jock D. Mackinlay and George G. Robertson. "The design space of input devices", In Proc. CHI '90, pp 117-124.

Sensing

With the advent of low-cost sensing—whether embedded in our environments, or on our mobile devices—how to create user interfaces that leverage sensing has become a key concern of the UI Software community.

  • Ken Hinckley, Jeff Pierce, Mike Sinclair, and Eric Horvitz. 2000. Sensing techniques for mobile interaction. In Proceedings of the 13th annual ACM symposium on User interface software and technology (UIST '00). ACM, New York, NY, USA, 91-100. DOI=10.1145/354401.354417 http://doi.acm.org/10.1145/354401.354417

This is a particularly fast changing area, so students should be familiar with recent conference papers focused on sensing.

Input and Output Devices

The technical HCI community does not just focus on software architectures and techniques, but also on new hardware devices. Often, new input/output devices will drive changes in how we interact with systems, and thus what software framework features are required. As one example, the move to multitouch interaction has driven research in recognition of finger gestures, changes to input handling, and more.
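
One concrete change multitouch forces on input handling is that events carry multiple contact points, and gestures such as pinch-to-zoom are derived from how the geometry of those contacts changes between frames. A minimal sketch (the function and its inputs are illustrative, not any particular toolkit's API):

```python
# Sketch of deriving a pinch-to-zoom scale factor from two touch
# contacts tracked across successive input frames.

from math import hypot

def pinch_scale(touches_before, touches_after):
    """Scale factor implied by two contacts moving between frames."""
    (ax, ay), (bx, by) = touches_before
    (cx, cy), (dx, dy) = touches_after
    before = hypot(bx - ax, by - ay)   # finger separation, previous frame
    after = hypot(dx - cx, dy - cy)    # finger separation, current frame
    return after / before

# Two fingers spreading apart: separation doubles, so the UI zooms 2x.
scale = pinch_scale([(100, 100), (200, 100)], [(50, 100), (250, 100)])
print(scale)   # 2.0
```

Even this toy version shows why single-pointer event models break down: the gesture is a property of the *set* of contacts over time, not of any one pointer event.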

A couple of influential papers have explored early approaches to creating multitouch user interfaces:

  • Dietz, P.H.; Leigh, D.L., "DiamondTouch: A Multi-User Touch Technology", ACM Symposium on User Interface Software and Technology (UIST), ISBN: 1-58113-438-X, pp. 219-226, November 2001
  • Han, J. Y. 2005. Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology

More recent work has created large-scale multitouch hardware that is more robust, and more easily commercializable:

  • Steve Hodges, Shahram Izadi, Alex Butler, Alban Rrustemi, and Bill Buxton, ThinSight: versatile multi-touch sensing for thin form-factor displays, in UIST '07: Proceedings of the 20th annual ACM symposium on User interface software and technology, Newport, Rhode Island, USA, Association for Computing Machinery, Inc., New York, NY, USA, October 2007

Finally, a growing body of work has explored on-body interaction as a means to accomplish I/O. Chris Harrison’s work is particularly influential in this area:

  • Harrison, C., Tan, D., and Morris, D. 2010. Skinput: Appropriating the Body as an Input Surface. In Proceedings of the 28th Annual SIGCHI Conference on Human Factors in Computing Systems (Atlanta, Georgia, April 10 - 15, 2010). CHI '10. ACM, New York, NY. 453-462.

Advanced and “Off-the-Desktop” Interfaces

A final category of exploration in the technical HCI domain concerns software and technology support for creating novel styles of interaction, particularly what has been termed “off the desktop” interaction, which does not use a traditional computer.

One of the most influential early papers in this area is from Hiroshi Ishii, who frames the concept of tangible user interaction, in which interaction relies on the physical affordances of tangible objects that have been augmented with computational behaviors:

  • Ishii, H., and Ullmer, B. "Tangible Bits: Toward Seamless Interfaces between People, Bits and Atoms." In Proceedings of CHI 97: Human Factors in Computing Systems, Atlanta, GA, March 1997, pp. 234-241.

Other examples of work in this area have explored novel approaches to graphical interaction. For example, Wilson et al.’s paper below focuses on physics models for interaction on the Microsoft Surface:

  • Andrew D. Wilson, Shahram Izadi, Otmar Hilliges, Armando Garcia-Mendoza, and David Kirk, Bringing physics to the surface, in Proceedings of the 21st annual ACM symposium on User interface software and technology (ACM UIST 2008), Association for Computing Machinery, Inc., October 2008

Other examples combine traditional mouse-based user interfaces with the affordances of pen-based interaction, resulting in powerful new modes of interaction:

  • Hinckley, K., Yatani, K., Pahud, M., Coddington, N., Rodenhouse, J., Wilson, A., Benko, H., Buxton, B. "Pen + Touch = New Tools." Proceedings of UIST 2010.

Advances in computer vision have allowed cameras and vision algorithms to play a greater role in user interfaces, particularly in understanding the geometries of complex 3-dimensional scenes:

  • Hao Du, Peter Henry, Xiaofeng Ren, Marvin Cheng, Dan B Goldman, Steven M. Seitz, Dieter Fox, Interactive 3D Modeling of Indoor Environments with a Consumer Depth Camera. Ubicomp 2011

Placing interaction in the physical world is also a focus of augmented reality research. Work at Georgia Tech has explored many facets of augmented reality, including how to bring this technology within the grasp of designers:

  • Blair MacIntyre, Maribeth Gandy, Steven Dow, and Jay David Bolter. "DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences." ACM User Interface Software and Technology (UIST'04), October 24-27, 2004, Santa Fe, New Mexico.

Finally, research has also examined the role that human computation can play in the interface. Michael Bernstein’s systems, such as the one below, have been influential in this area:

  • Bernstein, M., Little, G., Miller, R.C., et al. Soylent: A Word Processor with a Crowd Inside. In Proc. UIST 2010. ACM Press.