Introduction
The development of effective interfaces to computer systems is the fundamental objective of research on human-computer interaction.
An interface can be defined as the sum of the hardware and software components through which a system is operated and users informed of its status. The hardware components include data entry and pointing devices (e.g., keyboards, mice), information-presentation devices (e.g., screens, loudspeakers), and user manuals and documentation. The software components include menu commands, icons, windows, information feedback, navigation systems, messages and so on. An interface’s hardware and software components may be so closely linked as to be inseparable (e.g., function keys on keyboards). The interface includes everything the user perceives, understands and manipulates while interacting with the computer (Moran 1981). It is therefore a crucial determinant of the human-machine relation.
Research on interfaces aims at improving interface utility, accessibility, performance, safety and usability. For these purposes, utility is defined with reference to the task to be performed. A useful system contains the necessary functions for the completion of tasks users are asked to perform (e.g., writing, drawing, calculations, programming). Accessibility is a measure of an interface’s ability to allow several categories of users—particularly individuals with handicaps, and those working in geographically isolated areas, in constant movement or having both hands occupied—to use the system to perform their activities. Performance, considered here from a human rather than a technical viewpoint, is a measure of the degree to which a system improves the efficiency with which users perform their work. This includes the effect of macros, menu short-cuts and intelligent software agents. The safety of a system is defined by the extent to which an interface allows users to perform their work free from the risk of human, equipment, data, or environmental accidents or losses. Finally, usability is defined as the ease with which a system is learned and used. By extension, it also includes system utility and performance, defined above.
Elements of Interface Design
Since the invention of time-sharing operating systems in 1963, and especially since the arrival of the microcomputer in 1978, the development of human-computer interfaces has been explosive (see Gaines and Shaw 1986 for a history). This development has been driven essentially by three factors acting simultaneously:
First, the very rapid evolution of computer technology, a result of advances in electrical engineering, physics and computer science, has been a major determinant of user interface development. It has resulted in the appearance of computers of ever-increasing power and speed, with high memory capacities, high-resolution graphics screens, and more natural pointing devices allowing direct manipulation (e.g., mice, trackballs). These technologies were also responsible for the emergence of microcomputing. They were the basis for the character-based interfaces of the 1960s and 1970s, graphical interfaces of the late 1970s, and multi- and hyper-media interfaces appearing since the mid-1980s based on virtual environments or using a variety of alternate-input recognition technologies (e.g., voice-, handwriting-, and movement-detection). Considerable research and development has been conducted in recent years in these areas (Waterworth and Chignell 1989; Rheingold 1991). Concomitant with these advances was the development of more advanced software tools for interface design (e.g., windowing systems, graphical object libraries, prototyping systems) that greatly reduce the time required to develop interfaces.
Second, users of computer systems play a large role in the development of effective interfaces. There are three reasons for this. First, current users are not engineers or scientists, in contrast to users of the first computers. They therefore demand systems that can be easily learned and used. Second, the age, sex, language, culture, training, experience, skill, motivation and interest of individual users vary considerably. Interfaces must therefore be more flexible and better able to adapt to a range of needs and expectations. Finally, users are employed in a variety of economic sectors and perform a quite diverse spectrum of tasks. Interface developers must therefore constantly reassess the quality of their interfaces.
Lastly, intense market competition and increased safety expectations favour the development of better interfaces. These preoccupations are driven by two sets of partners: on the one hand, software producers who strive to reduce their costs while maintaining product distinctiveness that furthers their marketing goals, and on the other, users for whom the software is a means of offering competitive products and services to clients. For both groups, effective interfaces offer a number of advantages:
For software producers:
- better product image
- increased demand for products
- shorter training times
- lower after-sales service requirements
- solid base upon which to develop a product line
- reduction of the risk of errors and accidents
- reduction of documentation.
For users:
- shorter learning phase
- increased general applicability of skills
- improved use of the system
- increased autonomy using the system
- reduction of the time needed to execute a task
- reduction in the number of errors
- increased satisfaction.
Effective interfaces can significantly improve the health and productivity of users at the same time as they improve the quality and reduce the cost of their training. This, however, requires basing interface design and evaluation on ergonomic principles and practice standards, be they guidelines, corporate standards of major system manufacturers or international standards. Over the years, an impressive body of ergonomic principles and guidelines related to interface design has accumulated (Scapin 1986; Smith and Mosier 1986; Marshall, Nelson and Gardiner 1987; Brown 1988). This multidisciplinary corpus covers all aspects of character-mode and graphical interfaces, as well as interface evaluation criteria. Although its concrete application occasionally poses some problems—for example, imprecise terminology, inadequate information on usage conditions, inappropriate presentation—it remains a valuable resource for interface design and evaluation.
In addition, the major software manufacturers have developed their own guidelines and internal standards for interface design. These guidelines are available in the following documents:
- Apple Human Interface Guidelines (1987)
- Open Look (Sun 1990)
- OSF/Motif Style Guide (1990)
- IBM Common User Access guide to user interface design (1991)
- IBM Advanced Interface Design Reference (1991)
- The Windows interface: An application design guide (Microsoft 1992)
These guidelines attempt to simplify interface development by mandating a minimal level of uniformity and consistency between interfaces used on the same computer platform. They are precise, detailed, and quite comprehensive in several respects, and offer the additional advantages of being well-known, accessible and widely used. They are the de facto design standards used by developers, and are, for this reason, indispensable.
Furthermore, the International Organization for Standardization (ISO) standards are also very valuable sources of information about interface design and evaluation. These standards are primarily concerned with ensuring uniformity across interfaces, regardless of platforms and applications. They have been developed in collaboration with national standardization agencies, and after extensive discussion with researchers, developers and manufacturers. The main ISO interface design standard is ISO 9241, which describes ergonomic requirements for visual display units. It comprises 17 parts. For example, parts 14, 15, 16 and 17 discuss four types of human-computer dialogue—menus, command languages, direct manipulation, and forms. ISO standards should take priority over other design principles and guidelines. The following sections discuss the principles which should condition interface design.
A Design Philosophy Focused on the User
Gould and Lewis (1983) have proposed a design philosophy focused on the video display unit user. Its four principles are:
- Immediate and continuous attention to users. Direct contact with users is maintained, in order to better understand their characteristics and tasks.
- Integrated design. All aspects of usability (e.g., interface, manuals, help systems) are developed in parallel and placed under centralized control.
- Immediate and continuous evaluation by users. Users test the interfaces or prototypes early on in the design phase, under simulated work conditions. Performance and reactions are measured quantitatively and qualitatively.
- Iterative design. The system is modified on the basis of the results of the evaluation, and the evaluation cycle started again.
These principles are explained in further detail in Gould (1988). Very relevant when they were first published in 1985, they remain so fifteen years later, because it is still impossible to predict the effectiveness of an interface in the absence of user testing. These principles constitute the heart of the user-based development cycles proposed by several authors in recent years (Gould 1988; Mantei and Teorey 1989; Mayhew 1992; Nielsen 1992; Robert and Fiset 1992).
The rest of this article will analyse five stages in the development cycle that appear to determine the effectiveness of the final interface.
Task Analysis
Ergonomic task analysis is one of the pillars of interface design. Essentially, it is the process by which user responsibilities and activities are elucidated. This in turn allows interfaces compatible with the characteristics of users’ tasks to be designed. There are two facets to any given task:
- The nominal task, corresponding to the organization’s formal definition of the task. This includes objectives, procedures, quality control, standards and tools.
- The real task, corresponding to the users’ decisions and behaviours necessary for the execution of the nominal task.
The gap between nominal and real tasks is inevitable and results from the failure of nominal tasks to take into account variations and unforeseen circumstances in the work flow, and differences in users’ mental representations of their work. Analysis of the nominal task is insufficient for a full understanding of users’ activities.
Activity analysis examines elements such as work objectives, the type of operations performed, their temporal organization (sequential, parallel) and frequency, the operational modes relied upon, decisions, sources of difficulty, errors and recovery modes. This analysis reveals the different operations performed to accomplish the task (detection, searching, reading, comparing, evaluating, deciding, estimating, anticipating), the entities manipulated (e.g., in process control, temperature, pressure, flow-rate, volume) and the relation between operators and entities. The context in which the task is executed conditions these relations. These data are essential for the definition and organization of the future system’s features.
At its most basic, task analysis is composed of data collection, compilation and analysis. It may be performed before, during or after computerization of the task. In all cases, it provides essential guidelines for interface design and evaluation. Task analysis is always concerned with the real task, although it may also study future tasks through simulation or prototype testing. When performed prior to computerization, it studies “external tasks” (i.e., tasks external to the computer) performed with the existing work tools (Moran 1983). This type of analysis is useful even when computerization is expected to result in major modification of the task, since it elucidates the nature and logic of the task, work procedures, terminology, operators and tasks, work tools and sources of difficulty. In so doing, it provides the data necessary for task optimization and computerization.
Task analysis performed during task computerization focuses on “internal tasks”, as performed and represented by the computer system. System prototypes are used to collect data at this stage. The focus is on the same points examined in the previous stage, but from the point of view of the computerization process.
Following task computerization, task analysis also studies internal tasks, but analysis now focuses on the final computer system. This type of analysis is often performed to evaluate existing interfaces or as part of the design of new ones.
Hierarchical task analysis is a common method in cognitive ergonomics that has proven very useful in a wide variety of fields, including interface design (Shepherd 1989). It consists of the division of tasks (or main objectives) into sub-tasks, each of which can be further subdivided, until the required level of detail is attained. If data is collected directly from users (e.g., through interviews, vocalization), hierarchical division can provide a portrait of users’ mental mapping of a task. The results of the analysis can be represented by a tree diagram or table, each format having its advantages and disadvantages.
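The hierarchical division described above lends itself naturally to a tree representation. The following sketch (the task names are hypothetical, chosen only for illustration) shows one simple way an analyst might record an HTA tree and print it as an indented outline, one of the two formats mentioned above:

```python
# Minimal sketch of a hierarchical task analysis (HTA) tree.
# Task names are hypothetical, for illustration only.

class Task:
    def __init__(self, name, subtasks=None):
        self.name = name
        self.subtasks = subtasks or []

    def outline(self, depth=0):
        """Return the tree as an indented outline, one task per line."""
        lines = ["  " * depth + self.name]
        for sub in self.subtasks:
            lines.extend(sub.outline(depth + 1))
        return lines

# A main task divided into sub-tasks, one of which is further subdivided.
hta = Task("Prepare monthly report", [
    Task("Collect data", [
        Task("Query database"),
        Task("Check for missing values"),
    ]),
    Task("Write summary"),
    Task("Distribute report"),
])

print("\n".join(hta.outline()))
```

The table format mentioned in the text would simply list the same nodes with a numbering scheme (1, 1.1, 1.1.1, …) instead of indentation.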
User Analysis
The other pillar of interface design is the analysis of user characteristics. The characteristics of interest may relate to user age, sex, language, culture, training, technical or computer-related knowledge, skills or motivation. Variations in these individual factors are responsible for differences within and between groups of users. One of the key tenets of interface design is therefore that there is no such thing as the average user. Instead, different groups of users should be identified and their characteristics understood. Representatives of each group should be encouraged to participate in the interface design and evaluation processes.
In addition, techniques from psychology, ergonomics and cognitive engineering can be used to reveal information on user characteristics related to perception, memory, cognitive mapping, decision-making and learning (Wickens 1992). The only way to develop interfaces that are truly compatible with users is to take into account the effect of differences in these factors on user capacities, limits and ways of operating.
Ergonomic studies of interfaces have focused almost exclusively on users’ perceptual, cognitive and motor skills, rather than on affective, social or attitudinal factors, although work in the latter fields has become more popular in recent years. (For an integrated view of humans as information-processing systems see Rasmussen 1986; for a review of user-related factors to consider when designing interfaces see Thimbleby 1990 and Mayhew 1992). The following paragraphs review the four main user-related characteristics that should be taken into account during interface design.
Mental representation
The mental models users construct of the systems they use reflect the manner in which they receive and understand these systems. These models therefore vary as a function of users’ knowledge and experience (Hutchins 1989). In order to minimize the learning curve and facilitate system use, the conceptual model upon which a system is based should be similar to users’ mental representation of it. It should be recognized however that these two models are never identical. The mental model is characterized by the very fact that it is personal (Rich 1983), incomplete, variable from one part of the system to another, possibly in error on some points and in constant evolution. It plays a minor role in routine tasks but a major one in non-routine ones and during diagnosis of problems (Young 1981). In the latter cases, users will perform poorly in the absence of an adequate mental model. The challenge for interface designers is to design systems whose interaction with users will induce the latter to form mental models similar to the system’s conceptual model.
Learning
Analogy plays a large role in user learning (Rumelhart and Norman 1983). For this reason, the use of appropriate analogies or metaphors in the interface facilitates learning, by maximizing the transfer of knowledge from known situations or systems. Analogies and metaphors play a role in many parts of the interface, including the names of commands and menus, symbols, icons, codes (e.g., shape, colour) and messages. When pertinent, they greatly contribute to rendering interfaces natural and more transparent to users. On the other hand, when they are irrelevant, they can hinder users (Halasz and Moran 1982). To date, the two main metaphors used in graphical interfaces have been the desktop and, to a lesser extent, the room.
Users generally prefer to learn new software by using it immediately rather than by reading or taking a course—they prefer action-based learning in which they are cognitively active. This type of learning does, however, present a few problems for users (Carroll and Rosson 1988; Robert 1989). It demands an interface structure which is compatible, transparent, consistent, flexible, natural-appearing and fault tolerant, and a feature set which ensures usability, feedback, help systems, navigational aides and error handling (in this context, “errors” refer to actions that users wish to undo). Effective interfaces give users some autonomy during exploration.
Developing knowledge
User knowledge develops with increasing experience, but tends to plateau rapidly. This means that interfaces must be flexible and capable of responding simultaneously to the needs of users with different levels of knowledge. Ideally, they should also be context sensitive and provide personalized help. The EdCoach system, developed by Desmarais, Giroux and Larochelle (1993), is such an interface. Classification of users into beginner, intermediate and expert categories is inadequate for the purpose of interface design, since these definitions are too static and do not account for individual variations. Information technology capable of responding to the needs of different types of users is now available, albeit at the research, rather than commercial, level (Egan 1988). The current rage for performance-support systems suggests intense development of these systems in coming years.
Unavoidable errors
Finally, it should be recognized that users make mistakes when using systems, regardless of their skill level or the quality of the system. A recent German study by Brodbeck et al. (1993) revealed that at least 10% of the time spent by white-collar workers working on computers is related to error management. One of the causes of errors is users’ reliance on correction rather than prevention strategies (Reed 1982). Users prefer acting rapidly and incurring errors that they must subsequently correct, to working more slowly and avoiding errors. It is essential that these considerations be taken into account when designing human-computer interfaces. In addition, systems should be fault tolerant and should incorporate effective error management (Lewis and Norman 1986).
Needs Analysis
Needs analysis is an explicit part of Robert and Fiset’s (1992) development cycle; it corresponds to Nielsen’s functional analysis and is integrated into other stages (task, user or needs analysis) described by other authors. It consists of the identification, analysis and organization of all the needs that the computer system can satisfy. Identification of features to be added to the system occurs during this process. Task and user analysis, presented above, should help define many of the needs, but may prove inadequate for the definition of new needs resulting from the introduction of new technologies or new regulations (e.g., safety). Needs analysis fills this void.
Needs analysis is performed in the same way as functional analysis of products. It requires the participation of a group of people interested in the product and possessing complementary training, occupations or work experience. This can include future users of the system, supervisors, domain experts and, as required, specialists in training, work organization and safety. Review of the scientific and technical literature in the relevant field of application may also be performed, in order to establish the current state of the art. Competitive systems used in similar or related fields can also be studied. The different needs identified by this analysis are then classified, weighted and presented in a format appropriate for use throughout the development cycle.
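The final classification and weighting step can be illustrated with a small sketch. The needs, categories and weights below are hypothetical, chosen only to show one possible presentation format (a priority-ordered list for use in later stages of the development cycle):

```python
# Hypothetical sketch: classifying and weighting identified needs.
# Weights are illustrative (1 = desirable, 2 = important, 3 = essential).

needs = [
    {"need": "Undo last action",          "category": "error handling", "weight": 3},
    {"need": "Keyboard shortcuts",        "category": "performance",    "weight": 2},
    {"need": "Context-sensitive help",    "category": "learning",       "weight": 2},
    {"need": "Colour-blind safe palette", "category": "accessibility",  "weight": 3},
]

# Present the needs ordered by priority (highest weight first).
for item in sorted(needs, key=lambda n: n["weight"], reverse=True):
    print(f'{item["weight"]}  {item["category"]:15} {item["need"]}')
```

In practice the weights would be agreed upon by the analysis group described above, not assigned by a single analyst.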
Prototyping
Prototyping is part of the development cycle of most interfaces and consists of the production of a preliminary paper or electronic model (or prototype) of the interface. Several books on the role of prototyping in human-computer interaction are available (Wilson and Rosenberg 1988; Hartson and Smith 1991; Preece et al. 1994).
Prototyping is almost indispensable because:
- Users have difficulty evaluating interfaces on the basis of functional specifications—the description of the interface is too distant from the real interface, and evaluation too abstract. Prototypes are useful because they allow users to see and use the interface and directly evaluate its usefulness and usability.
- It is practically impossible to construct an adequate interface on the first try. Interfaces must be tested by users and modified, often repeatedly. To overcome this problem, paper or interactive prototypes that can be tested, modified or rejected are produced and refined until a satisfactory version is obtained. This process is considerably less expensive than working on real interfaces.
From the point of view of the development team, prototyping has several advantages. Prototypes allow the integration and visualization of interface elements early on in the design cycle, rapid identification of detailed problems, production of a concrete and common object of discussion in the development team and during discussions with clients, and simple illustration of alternative solutions for the purposes of comparison and internal evaluation of the interface. The most important advantage is, however, the possibility of having users evaluate prototypes.
Inexpensive and very powerful software tools for the production of prototypes are commercially available for a variety of platforms, including microcomputers (e.g., Visual Basic and Visual C++ (™Microsoft Corp.), UIM/X (™Visual Edge Software), HyperCard (™Apple Computer), SVT (™SVT Soft Inc.)). Readily available and relatively easy to learn, they are becoming widespread among system developers and evaluators.
The integration of prototyping has completely changed the interface development process. Given the rapidity and flexibility with which prototypes can be produced, developers now tend to reduce their initial analyses of tasks, users and needs, and to compensate for these analytical deficiencies by adopting longer evaluation cycles. This assumes that usability testing will identify any problems, and that it is more economical to prolong evaluation than to spend time on preliminary analysis.
Evaluation of Interfaces
User evaluation of interfaces is an indispensable and effective way to improve interfaces’ usefulness and usability (Nielsen 1993). The interface is almost always evaluated in electronic form, although paper prototypes may also be tested. Evaluation is an iterative process and is part of the prototype evaluation-modification cycle which continues until the interface is judged acceptable. Several cycles of evaluation may be necessary. Evaluation may be performed in the workplace or in usability laboratories (see the special edition of Behaviour and Information Technology (1994) for a description of several usability laboratories).
Some interface evaluation methods do not involve users; they may be used as a complement to user evaluation (Karat 1988; Nielsen 1993; Nielsen and Mack 1994). A relatively common example of such methods consists of the use of criteria such as compatibility, consistency, visual clarity, explicit control, flexibility, mental workload, quality of feedback, quality of help and error handling systems. For a detailed definition of these criteria, see Bastien and Scapin (1993); they also form the basis of an ergonomic questionnaire on interfaces (Shneiderman 1987; Ravden and Johnson 1989).
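A criteria-based inspection of the kind just described can be tallied very simply. The following sketch (criteria taken from the list above; the inspector ratings are hypothetical) averages each inspector's rating per criterion to flag the weakest areas of an interface:

```python
# Hypothetical sketch: averaging inspector ratings (1 = poor, 5 = excellent)
# per ergonomic criterion to highlight which areas need attention.

ratings = {
    "compatibility":       [4, 5, 4],
    "consistency":         [3, 3, 4],
    "quality of feedback": [2, 3, 2],
    "error handling":      [2, 2, 3],
}

# Mean rating for each criterion.
averages = {c: sum(r) / len(r) for c, r in ratings.items()}

# Report criteria from weakest to strongest.
for criterion, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{criterion:22} {avg:.2f}")
```

Such a tally only prioritizes problem areas; the solutions themselves still come from the discussion and iteration described in the next paragraph.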
Following evaluation, solutions must be found to problems that have been identified, modifications discussed and implemented, and decisions made concerning whether a new prototype is necessary.
Conclusion
This discussion of interface development has highlighted the major stakes and broad trends in the field of human-computer interaction. In summary, (a) task, user, and needs analysis play an essential role in understanding system requirements and, by extension, necessary interface features; and (b) prototyping and user evaluation are indispensable for the determination of interface usability. An impressive body of knowledge, composed of principles, guidelines and design standards, exists on human-computer interactions. Nevertheless, it is currently impossible to produce an adequate interface on the first try. This constitutes a major challenge for the coming years. More explicit, direct and formal links must be established between analysis (task, users, needs, context) and interface design. Means must also be developed to apply current ergonomic knowledge more directly and more simply to the design of interfaces.