Risk Perception

In risk perception, two psychological processes may be distinguished: hazard perception and risk assessment. Saari (1976) defines the information processed during the accomplishment of a task in terms of the following two components: (1) the information required to execute a task (hazard perception) and (2) the information required to keep existing risks under control (risk assessment). For instance, construction workers on top of ladders who are drilling holes in a wall must simultaneously keep their balance and coordinate their body-hand movements automatically; here hazard perception is crucial for coordinating body movement so as to keep dangers under control, whereas conscious risk assessment plays only a minor role, if any. Human activities generally seem to be driven by the automatic recognition of signals which trigger a flexible, yet stored, hierarchy of action schemata. (The more deliberate process leading to the acceptance or rejection of risk is discussed in another article.)

Risk Perception

From a technical point of view, a hazard represents a source of energy with the potential of causing immediate injury to personnel and damage to equipment, environment or structure. Workers may also be exposed to diverse toxic substances, such as chemicals, gases or radioactivity, some of which cause health problems. Unlike hazardous energies, which have an immediate effect on the body, toxic substances have quite different temporal characteristics, ranging from immediate effects to delays over months and years. Often there is an accumulating effect of small doses of toxic substances which are imperceptible to the exposed workers.

Conversely, there may be no harm to persons from hazardous energy or toxic substances provided that no danger exists. Danger expresses the relative exposure to hazard; in fact, there may be little danger in the presence of some hazards if adequate precautions have been provided. There is a voluminous literature on the factors people use in the final assessment of whether a situation is hazardous and, if so, how hazardous. This has become known as risk perception. (The word risk is used here in the same sense in which danger is used in the occupational safety literature; see Hoyos and Zimolong 1988.)

Risk perception deals with the understanding of perceptual realities and indicators of hazards and toxic substances—that is, the perception of objects, sounds, and olfactory or tactile sensations. Fire, heights, moving objects, loud noise and acid smells are some examples of the more obvious hazards which do not need to be interpreted. In some instances, people react similarly to the sudden presence of imminent danger. The sudden occurrence of loud noise, loss of balance and objects rapidly increasing in size (and so appearing about to strike one’s body) are fear stimuli, prompting automatic responses such as jumping, dodging, blinking and clutching. Other reflex reactions include rapidly withdrawing a hand which has touched a hot surface. Rachman (1974) concludes that the prepotent fear stimuli are those which have the attributes of novelty, abruptness and high intensity.

Probably most hazards and toxic substances are not directly perceptible to the human senses, but must be inferred from indicators. Examples are electricity; colourless, odourless gases such as methane and carbon monoxide; x rays and radioactive substances; and oxygen-deficient atmospheres. Their presence must be signalled by devices which translate the presence of the hazard into something recognizable. Electrical currents can be perceived with the help of a current-checking device, just as the gauges and meters in a control room register normal and abnormal levels of temperature and pressure at a particular state of a chemical process. There are also situations where hazards are not perceivable at all, or cannot be made perceivable at a given time. One example is the danger of infection when one opens blood samples for medical tests. In such cases the existence of hazards must be deduced from one’s knowledge of the common principles of causality or acquired by experience.

Risk Assessment

The next step in information processing is risk assessment, which refers to the decision process applied to such issues as whether and to what extent a person will be exposed to danger. Consider, for instance, driving a car at high speed. From the perspective of the individual, such decisions have to be made only in unexpected circumstances such as emergencies. Most of the required driving behaviour is automatic and runs smoothly without continuous attentional control and conscious risk assessment.

Hacker (1987) and Rasmussen (1983) distinguished three levels of behaviour: (1) skill-based behaviour, which is almost entirely automatic; (2) rule-based behaviour, which operates through the application of consciously chosen but fully pre-programmed rules; and (3) knowledge-based behaviour, under which all sorts of conscious planning and problem solving are grouped. At the skill-based level, an incoming piece of information is connected directly to a stored response that is executed automatically, without conscious deliberation or control. If no automatic response is available, or if an extraordinary event occurs, the risk assessment process moves to the rule-based level, where the appropriate action is selected from a set of procedures retrieved from storage and then executed. Each of the steps involves a finely tuned perceptual-motor programme, and usually no step in this organizational hierarchy involves any decisions based on risk considerations. Only at the transitions is a conditional check applied, just to verify that progress is according to plan. If it is not, automatic control is halted and the ensuing problem is solved at a higher level.
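
This three-level logic can be paraphrased in code. The sketch below is a minimal illustration only; Rasmussen’s distinction describes human cognition rather than an algorithm, and every identifier in it is hypothetical:

```python
# Illustrative sketch of the three-level control hierarchy described
# above. All names are hypothetical; this is a paraphrase of the logic,
# not an implementation of any published model.

def respond(signal, skills, rules):
    # Skill-based level: a stored response fires automatically,
    # without conscious deliberation, control or risk assessment.
    if signal in skills:
        return skills[signal]

    # Rule-based level: a consciously selected but pre-programmed
    # procedure is retrieved from storage and executed; a conditional
    # check is applied only at the transitions between steps.
    for applies, action in rules:
        if applies(signal):
            return action

    # Knowledge-based level: no stored response or rule fits, so
    # conscious planning and problem solving take over.
    return plan_deliberately(signal)

def plan_deliberately(signal):
    # Placeholder for knowledge-based problem solving.
    raise NotImplementedError(f"novel situation: {signal!r}")

# Example: a touch on a hot surface triggers the skill-based response.
skills = {"hot_surface": "withdraw_hand"}
rules = [(lambda s: s.startswith("alarm"), "follow_alarm_procedure")]
print(respond("hot_surface", skills, rules))  # -> withdraw_hand
print(respond("alarm_fire", skills, rules))   # -> follow_alarm_procedure
```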

Reason’s (1990) GEMS model (generic error-modelling system) describes how the transition from automatic control to conscious problem solving takes place when exceptional circumstances arise or novel situations are encountered. Risk assessment is absent at the bottom level, but may be fully present at the top level. At the middle level one can assume some sort of “quick-and-dirty” risk assessment, whereas Rasmussen excludes any type of assessment that is not incorporated in fixed rules. Much of the time there will be no conscious perception or consideration of hazards as such. “The lack of safety consciousness is both a normal and a healthy state of affairs, despite what has been said in countless books, articles and speeches. Being constantly conscious of danger is a reasonable definition of paranoia” (Hale and Glendon 1987). People doing their jobs on a routine basis rarely consider these hazards or accidents in advance: they run risks, but they do not take them.

Hazard Perception

Perception of hazards and toxic substances, in the sense of direct perception of shape and colour, loudness and pitch, odours and vibrations, is restricted by the capacity limitations of the perceptual senses, which can be temporarily impaired due to fatigue, illness, alcohol or drugs. Factors such as glare, brightness or fog can put heavy stress on perception, and dangers can fail to be detected because of distractions or insufficient alertness.

As has already been mentioned, not all hazards are directly perceptible to the human senses. Most toxic substances are not even visible. In his investigation of an iron and steel factory, of municipal garbage collection and of medical laboratories, Ruppert (1987) found that of 2,230 hazard indicators named by 138 workers, only 42% were perceptible by the human senses. Twenty-two per cent of the indicators had to be inferred from comparisons with standards (e.g., noise levels). In 23% of cases, hazard perception rested on clearly perceptible events which had to be interpreted with respect to knowledge about hazardousness (e.g., the glossy surface of a wet floor indicates a slipping hazard). In 13% of reports, hazard indicators could be retrieved only from memory of the proper steps to be taken (e.g., the current in a wall socket can be made perceivable only by the proper checking device). These results demonstrate that the requirements of hazard perception range from pure detection and perception to elaborate cognitive processes of anticipation and assessment. Cause-and-effect relationships are sometimes unclear, scarcely detectable or misinterpreted, and delayed or accumulating effects of hazards and toxic substances are likely to impose additional burdens on individuals.

Hoyos et al. (1991) have presented a comprehensive picture of hazard indicators, behavioural requirements and safety-relevant conditions in industry and public services. A Safety Diagnosis Questionnaire (SDQ) has been developed to provide a practical instrument for analysing hazards and dangers through observation (Hoyos and Ruppert 1993). More than 390 workplaces, and working and environmental conditions in 69 companies in agriculture, industry, manual trades and the service sector, have been assessed. Because the companies had accident rates greater than 30 accidents per 1,000 employees, with a minimum of 3 lost working days per accident, these studies appear to be biased towards dangerous worksites. Altogether, 2,373 hazards were reported by the observers using the SDQ, a detection rate of 6.1 hazards per workplace (2,373/390 ≈ 6.1); between 7 and 18 hazards were detected at approximately 40% of all workplaces surveyed. The surprisingly low mean rate of 6.1 hazards per workplace has to be interpreted in the light of the safety measures broadly introduced in industry and agriculture during the preceding 20 years. The hazards reported do not include those attributable to toxic substances, nor hazards controlled by technical safety devices and measures; they thus reflect the distribution of “residual hazards”.

Figure 1 presents an overview of the requirements for the perceptual processes of hazard detection and perception. Observers had to assess all hazards at a particular workplace with respect to 13 requirements, as indicated in the figure. On average, 5 requirements per hazard were identified, including visual recognition, selective attention, auditory recognition and vigilance. As expected, visual recognition dominated auditory recognition (77.3% of the hazards were detected visually and only 21.2% by auditory means). In 57% of all hazards observed, workers had to divide their attention between the task and hazard control; divided attention is a mentally demanding achievement likely to contribute to errors, and accidents have frequently been traced back to failures of attention while performing dual tasks. Even more alarming is the finding that in 56% of all hazards, workers had to respond rapidly to avoid being struck and injured. Only 15.9% and 7.3% of all hazards were indicated by acoustic or optical warnings, respectively; consequently, hazard detection and perception were largely self-initiated.

Figure 1. Detection and perception of hazard indicators in industry


In some cases (16.1%) the perception of hazards is supported by signs and warnings, but usually workers rely on knowledge, training and work experience. Figure 2 shows the anticipation and assessment requirements for controlling hazards at the worksite. The core characteristic of all the activities summarized in this figure is the need for knowledge and experience gained in the work process, including technical knowledge about weights, forces and energies; training to identify defects and inadequacies in work tools and machinery; and experience to predict structural weaknesses in equipment, buildings and material. As Hoyos et al. (1991) have demonstrated, however, workers have little knowledge relating to hazards, safety rules and proper personal preventive behaviour. Only 60% of the construction workers and 61% of the automobile mechanics questioned knew the right solutions to the safety-related problems generally encountered at their workplaces.

Figure 2. Anticipation and assessment of hazard indicators


The analysis of hazard perception indicates that different cognitive processes are involved: visual recognition; selective and divided attention; rapid identification and responsiveness; estimation of technical parameters; and prediction of non-observable hazards and dangers. Indeed, hazards and dangers are frequently unknown to job incumbents: they impose a heavy burden on people who have to cope sequentially with dozens of visual and auditory requirements, and they are a source of error proneness when work and hazard control must be performed simultaneously. This calls for much more emphasis on regular analysis and identification of hazards and dangers at the workplace. In several countries, formal risk assessments of workplaces are mandatory: for example, the health and safety Directives of the EEC require risk assessment of computer workplaces prior to commencing work in them, or when major alterations at work have been introduced; and the US Occupational Safety and Health Administration (OSHA) requires regular hazard analyses of process units.

Coordination of Work and Hazard Control

As Hoyos and Ruppert (1993) point out, (1) work and hazard control may require attention simultaneously; (2) they may be managed alternately, in sequential steps; or (3) precautionary measures may be taken prior to the commencement of work (e.g., putting on a safety helmet).

In the case of simultaneously occurring requirements, hazard control is based on visual, auditory and tactile recognition. In fact, it is difficult to separate work and hazard control in routine tasks. For example, a constant source of danger is present when performing the task of cutting off threads from yarns in a cotton-mill factory—a task requiring a sharp knife. The only two types of protection against cuts are skill in wielding the knife and the use of protective equipment. If either or both are to succeed, they must be totally incorporated into the worker’s action sequences. Habits such as cutting in a direction away from the hand holding the thread must be ingrained in the worker’s skills from the outset. In this example, hazard control is fully integrated into task control; no separate process of hazard detection is required. There is probably a continuum of integration into work, the degree depending on the skill of the worker and the requirements of the task. At one extreme, hazard perception and control are inherently integrated into work skills; at the other, task execution and hazard control are distinctly separate activities.

Work and hazard control may be carried out alternately, in sequential steps, when the danger potential steadily increases during the task or an abrupt, alerting danger signal occurs. As a consequence, workers interrupt the task or process and take preventive measures. The checking of a gauge is a typical example of a simple diagnostic test. A control-room operator detects a deviation from the standard level on a gauge which at first glance does not constitute a dramatic sign of danger, but which prompts the operator to search further on other gauges and meters. If other deviations are present, a rapid series of scanning activities will be carried out at the rule-based level. If the deviations on other meters do not fit into a familiar pattern, the diagnosis process shifts to the knowledge-based level: guided by some strategy, signals and symptoms are actively sought in order to locate the causes of the deviations (Konradt 1994). The allocation of resources of the attentional control system is set to general monitoring. A sudden signal, such as a warning tone or, as in the case above, various deviations of pointers from a standard, shifts the attentional control system onto the specific topic of hazard control. It initiates an activity which seeks to identify the causes of the deviations at the rule-based level or, failing that, at the knowledge-based level (Reason 1990).
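
The gauge-checking episode above can likewise be sketched in code. This is a hypothetical rendering, not an implementation of any published model; the gauge names, tolerance and pattern table are invented:

```python
# Hypothetical sketch of the control-room example: one deviating gauge
# triggers rule-based scanning of related instruments; if the pattern
# of deviations is unfamiliar, diagnosis shifts to the knowledge-based
# level. All names and thresholds are invented for illustration.

FAMILIAR_PATTERNS = {
    ("pressure", "temperature"): "check_cooling_circuit",
}

def deviating_gauges(gauges, tolerance=0.05):
    """Names of gauges whose reading deviates from the set point."""
    return tuple(sorted(
        name for name, (reading, setpoint) in gauges.items()
        if abs(reading - setpoint) / setpoint > tolerance
    ))

def diagnose(gauges):
    deviations = deviating_gauges(gauges)
    if not deviations:
        return "continue_general_monitoring"   # attention stays on the task
    if deviations in FAMILIAR_PATTERNS:        # rule-based: familiar pattern
        return FAMILIAR_PATTERNS[deviations]
    # Unfamiliar pattern: actively search for signals and symptoms,
    # i.e., escalate to knowledge-based diagnosis (cf. Konradt 1994).
    return "knowledge_based_diagnosis"

gauges = {"pressure": (1.10, 1.00), "temperature": (0.90, 0.80)}
print(diagnose(gauges))  # -> check_cooling_circuit
```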

Preventive behaviour is the third type of coordination. It occurs prior to work, and the most prominent example is the use of personal protective equipment (PPE).

The Meanings of Risk

Definitions of risk and methods to assess risks in industry and society have been developed in economics, engineering, chemistry, the safety sciences and ergonomics (Hoyos and Zimolong 1988). There is a wide variety of interpretations of the term risk. In one interpretation it means the “probability of an undesired event”: an expression of the likelihood that something unpleasant will happen. A more neutral definition is used by Yates (1992a), who argues that risk should be perceived as a multidimensional concept that, as a whole, refers to the prospect of loss. Important contributions to our current understanding of risk assessment in society have come from geography, sociology, political science, anthropology and psychology. Research focused originally on understanding human behaviour in the face of natural hazards, but it has since broadened to incorporate technological hazards as well. Sociological research and anthropological studies have shown that the assessment and acceptance of risks have their roots in social and cultural factors. Short (1984) argues that responses to hazards are mediated by social influences transmitted by friends, family, co-workers and respected public officials. Psychological research on risk assessment originated in empirical studies of probability assessment, utility assessment and decision-making processes (Edwards 1961).

Technical risk assessment usually focuses on the potential for loss, which includes both the probability of the loss occurring and the magnitude of the loss in terms of death, injury or damage. Risk is the probability that damage of a specified type will occur in a given system over a defined time period. Different assessment techniques are applied to meet the various requirements of industry and society. Formal methods to estimate degrees of risk include various kinds of fault-tree analysis; the use of data banks of error probabilities, as in THERP (Swain and Guttmann 1983); and decomposition methods based on subjective ratings, such as SLIM-MAUD (Embrey et al. 1984). These techniques differ considerably in their potential to predict future events such as mishaps, errors or accidents. In terms of error prediction in industrial systems, experts attained the best results with THERP. In a simulation study, Zimolong (1992) found a close match between objectively derived error probabilities and their estimates derived with THERP. Zimolong and Trimpop (1994) argued that such formal analyses have the highest “objectivity” if conducted properly, as they separate facts from beliefs and take many of the judgemental biases into account.
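
In formal notation, this definition is commonly written as an expected-loss expression combining probability and magnitude. The rendering below is a minimal sketch; the symbols are chosen here for illustration and are not prescribed by the text:

```latex
% R: risk of a specified type of damage in a given system
%    over a defined time period
% p_i: probability of damage event i occurring in that period
% c_i: magnitude of the corresponding loss (deaths, injuries, damages)
\[
  R \;=\; \sum_{i} p_i \, c_i
\]
```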

The public’s sense of risk depends on more than the probability and magnitude of loss. It may depend on such factors as the potential degree of damage, unfamiliarity with the possible consequences, the involuntary nature of the exposure to risk, the uncontrollability of the damage and possibly biased media coverage. The feeling of control in a situation may be a particularly important factor. For many, flying seems very unsafe because one has no control over one’s fate once in the air. Rumar (1988) found that the perceived risk in driving a car is typically low, since in most situations drivers believe in their own ability to achieve control and are accustomed to the risk. Other research has addressed emotional reactions to risky situations. The potential for serious loss generates a variety of emotional reactions, not all of which are necessarily unpleasant; there is a fine line between fear and excitement. Again, a major determinant of perceived risk and of affective reactions to risky situations seems to be a person’s feeling of control, or lack thereof. As a consequence, for many people, risk may be nothing more than a feeling.

Decision Making under Risk

Risk taking may be the result of a deliberate decision process entailing several activities: identification of possible courses of action; identification of their consequences; evaluation of the attractiveness and likelihood of those consequences; and a decision based on a combination of all the previous assessments. The overwhelming evidence that people often make poor choices in risky situations implies the potential to make better decisions. In 1738, Bernoulli defined the notion of a “best bet” as the one which maximizes the expected utility (EU) of the decision. The EU concept of rationality asserts that people ought to make decisions by evaluating uncertainties and considering their choices, the possible consequences and their preferences for them (von Neumann and Morgenstern 1947). Savage (1954) later generalized the theory to allow probability values to represent subjective or personal probabilities.
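
Written out, Bernoulli’s rule takes the familiar form below; in Savage’s generalization the probabilities are subjective rather than objective. The notation is illustrative, not prescribed by the text:

```latex
% An action a yields consequence x_i with probability p_i, and u is
% the decision maker's utility function. The "best bet" a* is the
% action with maximal expected utility; in SEU the p_i are subjective
% (personal) probabilities (Savage 1954).
\[
  \mathrm{EU}(a) \;=\; \sum_{i} p_i \, u(x_i), \qquad
  a^{*} \;=\; \arg\max_{a} \mathrm{EU}(a)
\]
```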

Subjective expected utility (SEU) is a normative theory which describes how people should proceed when making decisions. Slovic, Kunreuther and White (1974) stated, “Maximization of expected utility commands respect as a guideline for wise behaviour because it is deduced from axiomatic principles that presumably would be accepted by any rational man.” A good deal of debate and empirical research has centred on the question of whether this theory can also describe both the goals that motivate actual decision makers and the processes they employ when reaching their decisions. Simon (1959) criticized it as a theory of a person selecting among fixed and known alternatives, to each of which known consequences are attached. Some researchers have even questioned whether people should obey the principles of expected utility theory, and after decades of research, SEU applications remain controversial. Research has revealed that psychological factors play an important role in decision making and that many of these factors are not adequately captured by SEU models.

In particular, research on judgement and choice has shown that people have methodological deficiencies, such as misunderstanding probabilities, neglecting the effects of sample size, relying on misleading personal experiences, holding judgements of fact with unwarranted confidence and misjudging risks. People are more likely to underestimate risks to which they have been voluntarily exposed over a long period, such as living in areas subject to floods or earthquakes. Similar results have been reported from industry (Zimolong 1985). Shunters, miners, and forest and construction workers all dramatically underestimate the riskiness of their most common work activities as compared with objective accident statistics; however, they tend to overestimate the obviously dangerous activities of fellow workers when required to rate them.

Unfortunately, experts’ judgements appear to be prone to many of the same biases as those of the public, particularly when experts are forced to go beyond the limits of the available data and rely upon their intuitions (Kahneman, Slovic and Tversky 1982). Research further indicates that disagreements about risk cannot be expected to disappear even when sufficient evidence is available. Strong initial views are resistant to change because they influence the way subsequent information is interpreted. New evidence appears reliable and informative if it is consistent with one’s initial beliefs; contrary evidence tends to be dismissed as unreliable, erroneous or unrepresentative (Nisbett and Ross 1980). When people lack strong prior opinions, the opposite situation prevails: they are at the mercy of the formulation of the problem. Presenting the same information about risk in different ways (e.g., mortality rates as opposed to survival rates) alters their perspectives and their actions (Tversky and Kahneman 1981). The discovery of this set of mental strategies, or heuristics, that people employ in order to structure their world and predict their future courses of action has led to a deeper understanding of decision making in risky situations. Although these rules are valid in many circumstances, in others they lead to large and persistent biases with serious implications for risk assessment.

Personal Risk Assessment

The most common approach to studying how people make risk assessments uses psychophysical scaling and multivariate analysis techniques to produce quantitative representations of risk attitudes and assessments (Slovic, Fischhoff and Lichtenstein 1980). Numerous studies have shown that risk assessment based on subjective judgements is quantifiable and predictable. They have also shown that the concept of risk means different things to different people. When experts judge risk and rely on personal experience, their responses correlate highly with technical estimates of annual fatalities. Laypeople’s judgements of risk are related more to other characteristics, such as catastrophic potential or threat to future generations; as a result, their estimates of loss probabilities tend to differ from those of experts.

Laypeople’s risk assessments of hazards can be grouped along two factors (Slovic 1987). One factor reflects the degree to which a risk is understood: understanding a risk relates to the degree to which it is observable, is known to those exposed and can be detected immediately. The other factor reflects the degree to which a risk evokes a feeling of dread: dread is related to the degree of uncontrollability, the seriousness of the consequences, the exposure of future generations to high risks and the involuntariness of exposure. The higher a hazard’s score on the latter factor, the higher its assessed risk, the more people want to see its current risks reduced, and the more they want to see strict regulation employed to achieve the desired reduction in risk. Consequently, many conflicts about risk may result from experts and laypeople operating with different definitions of the concept. In such cases, expert citations of risk statistics or of the outcomes of technical risk assessments will do little to change people’s attitudes and assessments (Slovic 1993).
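
The kind of analysis behind this two-factor structure can be sketched as follows. The example is a toy illustration assuming invented rating data and scale names; scikit-learn’s FactorAnalysis is merely one of several tools that could perform such a reduction:

```python
# Toy sketch of the psychometric approach behind the two-factor
# ("understanding" vs. "dread") structure: lay ratings of hazards on
# several scales are reduced to two latent factors. Data and scale
# names are invented for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
scales = ["observable", "known_to_exposed", "immediately_detectable",
          "uncontrollable", "catastrophic", "threat_to_future_generations"]
ratings = rng.normal(size=(30, len(scales)))  # 30 hazards x 6 rating scales

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)   # each hazard's position on the 2 factors
loadings = fa.components_            # how each rating scale loads on them

# Per Slovic (1987), hazards scoring high on the "dread" factor are
# judged riskier and attract stronger demands for regulation.
for scale, (f1, f2) in zip(scales, loadings.T):
    print(f"{scale:30s} factor1={f1:+.2f} factor2={f2:+.2f}")
```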

The characterization of hazards in terms of “knowledge” and “threat” leads back to the earlier discussion of hazard and danger signals in industry, which were analysed in terms of “perceptibility”. Forty-two per cent of hazard indicators in industry are directly perceptible by the human senses, 45% have to be inferred from comparisons with standards or interpreted in the light of knowledge about hazardousness, and 13% must be retrieved from memory. Perceptibility, knowledge and the threats and thrills of hazards are dimensions closely related to people’s experience of hazards and perceived control; however, to understand and predict individual behaviour in the face of danger, we have to gain a deeper understanding of their relationships with personality, the requirements of tasks and societal variables.

Psychometric techniques seem well suited to identifying similarities and differences among groups with regard both to personal habits of risk assessment and to attitudes. However, other psychometric methods, such as multidimensional analysis of hazard-similarity judgements, produce different representations when applied to quite different sets of hazards. The factor-analytical approach, while informative, by no means provides a universal representation of hazards. Another weakness of psychometric studies is that people confront risk only in written statements, divorcing the assessment of risk from behaviour in actual risky situations. Factors that affect a person’s considered assessment of risk in a psychometric experiment may be trivial when the person confronts an actual risk. Howarth (1988) suggests that such conscious verbal knowledge usually reflects social stereotypes. By contrast, risk-taking responses in traffic or work situations are controlled by the tacit knowledge that underlies skilled or routine behaviour.

Most of the personal risk decisions of everyday life are not conscious decisions at all; people are, by and large, not even aware of risk. By contrast, the notion underlying psychometric experiments is that of deliberate choice: assessments of risk performed by means of a questionnaire are conducted deliberately, in an “armchair” fashion. In many ways, however, a person’s responses to risky situations are more likely to result from learned habits that are automatic and below the general level of awareness. People do not normally evaluate risks, and therefore it cannot be argued that their way of evaluating risk is inaccurate and needs to be improved. Most risk-related activities are necessarily executed at the bottom level of automated behaviour, where there is simply no room for the consideration of risks. The notion that risks, identified after the occurrence of accidents, are accepted after a conscious analysis may have emerged from a confusion between normative SEU and descriptive models (Wagenaar 1992). Less attention has been paid to the conditions in which people act automatically, follow their gut feeling or accept the first choice offered. Nevertheless, there is widespread acceptance in society and among health and safety professionals that risk taking is a prime factor in causing mishaps and errors. In a representative sample of Swedes aged between 18 and 70 years, 90% agreed that risk taking is the major source of accidents (Hovden and Larsson 1987).

Preventive Behaviour

Individuals may deliberately take preventive measures to exclude hazards, to attenuate the energy of hazards or to protect themselves by precautionary measures (for instance, by wearing safety glasses and helmets). Often people are required by a company’s directives, or even by law, to comply with protective measures. For example, a roofer erects scaffolding prior to working on a roof in order to guard against falling. This choice might be the result of a conscious process of assessing the hazards and one’s own coping skills, or, more simply, it may be the outcome of habituation, or it may be a requirement enforced by law. Often warnings are used to indicate mandatory preventive actions.

Several forms of preventive activity in industry have been analysed by Hoyos and Ruppert (1993). Some of them are shown in figure 3, together with the frequency with which they are required. As indicated, preventive behaviour is partly self-controlled and partly enforced by legal standards and company requirements. Preventive activities include the following measures: planning work procedures and steps ahead; the use of PPE; the application of safe work techniques; the selection of safe work procedures by means of proper materials and tools; setting an appropriate work pace; and the inspection of facilities, equipment, machinery and tools.

Figure 3. Typical examples of personal preventive behaviour in industry and frequency of preventive measure


Personal Protective Equipment

The most frequently required preventive measure is the use of PPE. Together with its correct handling and maintenance, it is by far the most common requirement in industry. There are major differences in the usage of PPE between companies. In some of the best companies, mainly chemical plants and petroleum refineries, the usage of PPE approaches 100%. In contrast, in the construction industry, safety officials have problems even in attempting to introduce particular items of PPE on a regular basis. It is doubtful that risk perception is the major factor behind this difference. Some companies have successfully enforced the use of PPE, which then becomes habitual (e.g., the wearing of safety helmets), by establishing the “right safety culture” and thereby altering personal risk assessment. Slovic (1987), in his short discussion of seat-belt usage, shows that about 20% of road users wear seat-belts voluntarily, 50% would use them only if it were made mandatory by law, and beyond this number only control and punishment will serve to improve automatic use.

Thus, it is important to understand what factors govern risk perception, but it is equally important to know how to change behaviour and, subsequently, how to alter risk perception. It seems that many more precautionary measures need to be undertaken at the level of the organization, among the planners, designers, managers and those authorities who make decisions with implications for many thousands of people. Up to now, there is little understanding at these levels of the factors on which risk perception and assessment depend. If companies are seen as open systems, in which different levels of the organization mutually influence each other and stand in steady exchange with society, a systems approach may reveal the factors which constitute and influence risk perception and assessment.

Warning Labels

The use of labels and warnings to combat potential hazards is a controversial procedure for managing risks. Too often they are seen as a way for manufacturers to avoid responsibility for unreasonably risky products. Obviously, labels will be successful only if the information they contain is read and understood by members of the intended audience. Frantz and Rhoades (1993) found that 40% of clerical personnel filling a file cabinet noticed a warning label placed on the top drawer of the cabinet, 33% read part of it, and no one read the entire label. Contrary to expectation, 20% complied fully by not placing any material in the top drawer first. Obviously, it is insufficient merely to scan the most important elements of the notice. Lehto and Papastavrou (1993) provided a thorough analysis of findings pertaining to warning signs and labels by examining receiver-, task-, product- and message-related factors. Furthermore, they made a significant contribution to understanding the effectiveness of warnings by considering different levels of behaviour.

The discussion of skilled behaviour suggests that a warning notice will have little impact on the way people perform a familiar task, since it simply will not be read. Lehto and Papastavrou (1993) concluded from research findings that interrupting familiar task performance may effectively increase the likelihood that workers notice warning signs or labels. In the experiment by Frantz and Rhoades (1993), noticing of the warning labels on filing cabinets increased to 93% when the top drawer was sealed shut with a warning indicating that a label could be found inside the drawer. The authors concluded, however, that ways of interrupting skill-based behaviour are not always available and that their effectiveness can diminish considerably after initial use.

At the rule-based level of performance, warning information should be integrated into the task (Lehto 1992) so that it can easily be mapped onto immediately relevant actions. In other words, people should try to carry out the task by following the directions of the warning label. Frantz (1992) found that 85% of subjects expressed the need for a requirement to follow the directions for use of a wood preservative or drain cleaner. On the negative side, studies of comprehension have revealed that people may poorly comprehend the symbols and text used in warning signs and labels. In particular, Koslowski and Zimolong (1992) found that chemical workers understood the meaning of only approximately 60% of the most important warning signs used in the chemical industry.

At the knowledge-based level of behaviour, people seem likely to notice warnings when they are actively looking for them, and they expect to find warnings close to the product. Frantz (1992) found that subjects in unfamiliar settings complied with instructions 73% of the time if they read them, compared with only 9% when they did not read them. Once read, the label must be understood and recalled. Several studies of comprehension and memory also imply that people may have trouble remembering the information they read on either instruction or warning labels. In the United States, the National Research Council (1989) provides some assistance in designing warnings, emphasizing the importance of two-way communication in enhancing understanding: the communicator should facilitate feedback and questions on the part of the recipient. The conclusions of the report are summarized in two checklists, one for use by managers, the other serving as a guide for the recipient of the information.

 


Safety Policy and Leadership References

Abbey, A and JW Dickson. 1983. R&D work climate and innovation in semiconductors. Acad Manage J 26:362–368.

Andriessen, JHTH. 1978. Safe behavior and safety motivation. J Occup Acc 1:363–376.

Bailey, C. 1993. Improve safety program effectiveness with perception surveys. Prof Saf October:28–32.

Bluen, SD and C Donald. 1991. The nature and measurement of in-company industrial relations climate. S Afr J Psychol 21(1):12–20.

Brown, RL and H Holmes. 1986. The use of a factor-analytic procedure for assessing the validity of an employee safety climate model. Accident Anal Prev 18(6):445–470.

CCPS (Center for Chemical Process Safety). N.d. Guidelines for Safe Automation of Chemical Processes. New York: Center for Chemical Process Safety of the American Institute of Chemical Engineers.

Chew, DCE. 1988. Quelles sont les mesures qui assurent le mieux la sécurité du travail? Etude menée dans trois pays en développement d’Asie. Rev Int Travail 127:129–145.

Chicken, JC and MR Haynes. 1989. The Risk Ranking Method in Decision Making. Oxford: Pergamon.

Cohen, A. 1977. Factors in successful occupational safety programs. J Saf Res 9:168–178.

Cooper, MD, RA Phillips, VF Sutherland and PJ Makin. 1994. Reducing accidents using goal setting and feedback: A field study. J Occup Organ Psychol 67:219–240.

Cru, D and Dejours C. 1983. Les savoir-faire de prudence dans les métiers du bâtiment. Cahiers médico-sociaux 3:239–247.

Dake, K. 1991. Orienting dispositions in the perception of risk: An analysis of contemporary worldviews and cultural biases. J Cross Cult Psychol 22:61–82.

—. 1992. Myths of nature: Culture and the social construction of risk. J Soc Issues 48:21–37.

Dedobbeleer, N and F Béland. 1989. The interrelationship of attributes of the work setting and workers’ safety climate perceptions in the construction industry. In Proceedings of the 22nd Annual Conference of the Human Factors Association of Canada. Toronto.

—. 1991. A safety climate measure for construction sites. J Saf Res 22:97–103.

Dedobbeleer, N, F Béland and P German. 1990. Is there a relationship between attributes of construction sites and workers’ safety practices and climate perceptions? In Advances in Industrial Ergonomics and Safety II, edited by D Biman. London: Taylor & Francis.

Dejours, C. 1992. Intelligence ouvrière et organisation du travail. Paris: Harmattan.

DeJoy, DM. 1987. Supervisor attributions and responses for multicausal workplace accidents. J Occup Acc 9:213–223.

—. 1994. Managing safety in the workplace: An attribution theory analysis and model. J Saf Res 25:3–17.

Denison, DR. 1990. Corporate Culture and Organizational Effectiveness. New York: Wiley.

Dieterly, D and B Schneider. 1974. The effect of organizational environment on perceived power and climate: A laboratory study. Organ Behav Hum Perform 11:316–337.

Dodier, N. 1985. La construction pratique des conditions de travail: Préservation de la santé et vie quotidienne des ouvriers dans les ateliers. Sci Soc Santé 3:5–39.

Dunette, MD. 1976. Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally.

Dwyer, T. 1992. Life and Death at Work. Industrial Accidents as a Case of Socially Produced Error. New York: Plenum Press.

Eakin, JM. 1992. Leaving it up to the workers: Sociological perspective on the management of health and safety in small workplaces. Int J Health Serv 22:689–704.

Edwards, W. 1961. Behavioural decision theory. Annu Rev Psychol 12:473–498.

Embrey, DE, P Humphreys, EA Rosa, B Kirwan and K Rea. 1984. An approach to assessing human error probabilities using structured expert judgement. NUREG/CR-3518. Washington, DC: US Nuclear Regulatory Commission.

Eyssen, G, J Eakin-Hoffman and R Spengler. 1980. Manager’s attitudes and the occurrence of accidents in a telephone company. J Occup Acc 2:291–304.

Field, RHG and MA Abelson. 1982. Climate: A reconceptualization and proposed model. Hum Relat 35:181–201.

Fischhoff, B and D MacGregor. 1991. Judged lethality: How much people seem to know depends on how they are asked. Risk Anal 3:229–236.

Fischhoff, B, L Furby and R Gregory. 1987. Evaluating voluntary risks of injury. Accident Anal Prev 19:51–62.

Fischhoff, B, S Lichtenstein, P Slovic, S Derby and RL Keeney. 1981. Acceptable risk. Cambridge: CUP.

Flanagan, O. 1991. The Science of the Mind. Cambridge: MIT Press.

Frantz, JP. 1992. Effect of location, procedural explicitness, and presentation format on user processing of and compliance with product warnings and instructions. Ph.D. Dissertation, University of Michigan, Ann Arbor.

Frantz, JP and TP Rhoades. 1993. A task analytic approach to the temporal and spatial placement of product warnings. Human Factors 35:713–730.

Frederiksen, M, O Jensen and AE Beaton. 1972. Prediction of Organizational Behavior. Elmsford, NY: Pergamon.

Freire, P. 1988. Pedagogy of the Oppressed. New York: Continuum.

Glick, WH. 1985. Conceptualizing and measuring organizational and psychological climate: Pitfalls in multi-level research. Acad Manage Rev 10(3):601–616.

Gouvernement du Québec. 1978. Santé et sécurité au travail: Politique québecoise de la santé et de la sécurité des travailleurs. Québec: Editeur officiel du Québec.

Haas, J. 1977. Learning real feelings: A study of high steel ironworkers’ reactions to fear and danger. Sociol Work Occup 4:147–170.

Hacker, W. 1987. Arbeitspsychologie. Stuttgart: Hans Huber.

Haight, FA. 1986. Risk, especially risk of traffic accident. Accident Anal Prev 18:359–366.

Hale, AR and AI Glendon. 1987. Individual Behaviour in the Control of Danger. Vol. 2. Industrial Safety Series. Amsterdam: Elsevier.

Hale, AR, B Hemning, J Carthey and B Kirwan. 1994. Extension of the Model of Behaviour in the Control of Danger. Volume 3—Extended model description. Delft University of Technology, Safety Science Group (Report for HSE). Birmingham, UK: Birmingham University, Industrial Ergonomics Group.

Hansen, L. 1993a. Beyond commitment. Occup Hazards 55(9):250.

—. 1993b. Safety management: A call for revolution. Prof Saf 38(30):16–21.

Harrison, EF. 1987. The Managerial Decision-making Process. Boston: Houghton Mifflin.

Heinrich, H, D Petersen and N Roos. 1980. Industrial Accident Prevention. New York: McGraw-Hill.

Hovden, J and TJ Larsson. 1987. Risk: Culture and concepts. In Risk and Decisions, edited by WT Singleton and J Hovden. New York: Wiley.

Howarth, CI. 1988. The relationship between objective risk, subjective risk, behaviour. Ergonomics 31:657–661.

Hox, JJ and IGG Kreft. 1994. Multilevel analysis methods. Sociol Methods Res 22(3):283–300.

Hoyos, CG and B Zimolong. 1988. Occupational Safety and Accident Prevention. Behavioural Strategies and Methods. Amsterdam: Elsevier.

Hoyos, CG and E Ruppert. 1993. Der Fragebogen zur Sicherheitsdiagnose (FSD). Bern: Huber.

Hoyos, CG, U Bernhardt, G Hirsch and T Arnhold. 1991. Vorhandenes und erwünschtes sicherheitsrelevantes Wissen in Industriebetrieben. Zeitschrift für Arbeits- und Organisationspsychologie 35:68–76.

Huber, O. 1989. Information-processing operators in decision making. In Process and Structure of Human Decision Making, edited by H Montgomery and O Svenson. Chichester: Wiley.

Hunt, HA and RV Habeck. 1993. The Michigan disability prevention study: Research highlights. Unpublished report. Kalamazoo, MI: E.E. Upjohn Institute for Employment Research.

International Electrotechnical Commission (IEC). N.d. Draft Standard IEC 1508; Functional Safety: Safety-related Systems. Geneva: IEC.

Instrument Society of America (ISA). N.d. Draft Standard: Application of Safety Instrumented Systems for the Process Industries. North Carolina, USA: ISA.

International Organization for Standardization (ISO). 1990. ISO 9000-3: Quality Management and Quality Assurance Standards: Guidelines for the Application of ISO 9001 to the Development, Supply and Maintenance of Software. Geneva: ISO.

James, LR. 1982. Aggregation bias in estimates of perceptual agreement. J Appl Psychol 67:219–229.

James, LR and AP Jones. 1974. Organizational climate: A review of theory and research. Psychol Bull 81(12):1096–1112.

Janis, IL and L Mann. 1977. Decision-making: A Psychological Analysis of Conflict, Choice and Commitment. New York: Free Press.

Johnson, BB. 1991. Risk and culture research: Some caution. J Cross Cult Psychol 22:141–149.

Johnson, EJ and A Tversky. 1983. Affect, generalization, and the perception of risk. J Personal Soc Psychol 45:20–31.

Jones, AP and LR James. 1979. Psychological climate: Dimensions and relationships of individual and aggregated work environment perceptions. Organ Behav Hum Perform 23:201–250.

Joyce, WF and JWJ Slocum. 1984. Collective climate: Agreement as a basis for defining aggregate climates in organizations. Acad Manage J 27:721–742.

Jungermann, H and P Slovic. 1987. Die Psychologie der Kognition und Evaluation von Risiko. Unpublished manuscript. Technische Universität Berlin.

Kahneman, D and A Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47:263–291.

—. 1984. Choices, values, and frames. Am Psychol 39:341–350.

Kahneman, D, P Slovic and A Tversky. 1982. Judgement under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Kasperson, RE. 1986. Six propositions on public participation and their relevance for risk communication. Risk Anal 6:275–281.

Kleinhesselink, RR and EA Rosa. 1991. Cognitive representation of risk perception. J Cross Cult Psychol 22:11–28.

Komaki, J, KD Barwick and LR Scott. 1978. A behavioral approach to occupational safety: Pinpointing and reinforcing safe performance in a food manufacturing plant. J Appl Psychol 4:434–445.

Komaki, JL. 1986. Promoting job safety and accident prevention. In Health and Industry: A Behavioral Medicine Perspective, edited by MF Cataldo and TJ Coats. New York: Wiley.

Konradt, U. 1994. Handlungsstrategien bei der Störungsdiagnose an flexiblen Fertigungseinrichtungen. Zeitschrift für Arbeits- und Organisationspsychologie 38:54–61.

Koopman, P and J Pool. 1991. Organizational decision making: Models, contingencies and strategies. In Distributed Decision Making. Cognitive Models for Cooperative Work, edited by J Rasmussen, B Brehmer and J Leplat. Chichester: Wiley.

Koslowski, M and B Zimolong. 1992. Gefahrstoffe am Arbeitsplatz: Organisatorische Einflüsse auf Gefahrenbewußtsein und Risikokompetenz. In Workshop Psychologie der Arbeitssicherheit, edited by B Zimolong and R Trimpop. Heidelberg: Asanger.

Koys, DJ and TA DeCotiis. 1991. Inductive measures of psychological climate. Hum Relat 44(3):265–285.

Krause, TH, JH Hidley and SJ Hodson. 1990. The Behavior-based Safety Process. New York: Van Nostrand Reinhold.

Lanier, EB. 1992. Reducing injuries and costs through team safety. ASSE J July:21–25.

Lark, J. 1991. Leadership in safety. Prof Saf 36(3):33–35.

Lawler, EE. 1986. High-involvement Management. San Francisco: Jossey Bass.

Lehto, MR. 1992. Designing warning signs and warnings labels: Scientific basis for initial guideline. Int J Ind Erg 10:115–119.

Lehto, MR and JD Papastavrou. 1993. Models of the warning process: Important implications towards effectiveness. Safety Science 16:569–595.

Lewin, K. 1951. Field Theory in Social Science. New York: Harper and Row.

Likert, R. 1967. The Human Organization. New York: McGraw Hill.

Lopes, LL and P-HS Ekberg. 1980. Test of an ordering hypothesis in risky decision making. Acta Psychol 45:161–167.

Machlis, GE and EA Rosa. 1990. Desired risk: Broadening the social amplification of risk framework. Risk Anal 10:161–168.

March, J and H Simon. 1993. Organizations. Cambridge: Blackwell.

March, JG and Z Shapira. 1992. Variable risk preferences and the focus of attention. Psychol Rev 99:172–183.

Mason, WM, GY Wong and B Entwisle. 1983. Contextual analysis through the multilevel linear model. In Sociological Methodology, 1983–1984. San Francisco: Jossey-Bass.

Mattila, M, M Hyttinen and E Rantanen. 1994. Effective supervisory behavior and safety at the building site. Int J Ind Erg 13:85–93.

Mattila, M, E Rantanen and M Hyttinen. 1994. The quality of work environment, supervision and safety in building construction. Saf Sci 17:257–268.

McAfee, RB and AR Winn. 1989. The use of incentives/feedback to enhance work place safety: A critique of the literature. J Saf Res 20(1):7–19.

McSween, TE. 1995. The Values-based Safety Process. New York: Van Nostrand Reinhold.

Melia, JL, JM Tomas and A Oliver. 1992. Concepciones del clima organizacional hacia la seguridad laboral: Replication del modelo confirmatorio de Dedobbeleer y Béland. Revista de Psicologia del Trabajo y de las Organizaciones 9(22).

Minter, SG. 1991. Creating the safety culture. Occup Hazards August:17–21.

Montgomery, H and O Svenson. 1989. Process and Structure of Human Decision Making. Chichester: Wiley.

Moravec, M. 1994. The 21st century employer-employee partnership. HR Mag January:125–126.

Morgan, G. 1986. Images of Organizations. Beverly Hills: Sage.

Nadler, D and ML Tushman. 1990. Beyond the charismatic leader. Leadership and organizational change. Calif Manage Rev 32:77–97.

Näsänen, M and J Saari. 1987. The effects of positive feedback on housekeeping and accidents at a shipyard. J Occup Acc 8:237–250.

National Research Council. 1989. Improving Risk Communication. Washington, DC: National Academy Press.

Naylor, JD, RD Pritchard and DR Ilgen. 1980. A Theory of Behavior in Organizations. New York: Academic Press.

Neumann, PJ and PE Politser. 1992. Risk and optimality. In Risk-taking Behaviour, edited by FJ Yates. Chichester: Wiley.

Nisbett, R and L Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgement. Englewood Cliffs: Prentice-Hall.

Nunnally, JC. 1978. Psychometric Theory. New York: McGraw-Hill.

Oliver, A, JM Tomas and JL Melia. 1993. Una segunda validacion cruzada de la escala de clima organizacional de seguridad de Dedobbeleer y Béland. Ajuste confirmatorio de los modelos unofactorial, bifactorial y trifactorial. Psicologica 14:59–73.

Otway, HJ and D von Winterfeldt. 1982. Beyond acceptable risk: On the social acceptability of technologies. Policy Sci 14:247–256.

Perrow, C. 1984. Normal Accidents: Living with High-risk Technologies. New York: Basic Books.

Petersen, D. 1993. Establishing good “safety culture” helps mitigate workplace dangers. Occup Health Saf 62(7):20–24.

Pidgeon, NF. 1991. Safety culture and risk management in organizations. J Cross Cult Psychol 22:129–140.

Rabash, J and G Woodhouse. 1995. MLn command reference. Version 1.0 March 1995, ESRC.

Rachman, SJ. 1974. The Meanings of Fear. Harmondsworth: Penguin.

Rasmussen, J. 1983. Skills, rules, knowledge, signals, signs and symbols and other distinctions. IEEE T Syst Man Cyb SMC-13:257–266.

Reason, JT. 1990. Human Error. Cambridge: CUP.

Rees, JV. 1988. Self-regulation: An effective alternative to direct regulation by OSHA? Stud J 16:603–614.

Renn, O. 1981. Man, technology and risk: A study on intuitive risk assessment and attitudes towards nuclear energy. Spezielle Berichte der Kernforschungsanlage Jülich.

Rittel, HWJ and MM Webber. 1973. Dilemmas in a general theory of planning. Pol Sci 4:155–169.

Robertson, A and M Minkler. 1994. New health promotion movement: A critical examination. Health Educ Q 21(3):295–312.

Rogers, CR. 1961. On Becoming a Person. Boston: Houghton Mifflin.

Rohrmann, B. 1992a. The evaluation of risk communication effectiveness. Acta Psychol 81:169–192.

—. 1992b. Risiko Kommunikation, Aufgaben-Konzepte-Evaluation. In Psychologie der Arbeitssicherheit, edited by B Zimolong and R Trimpop. Heidelberg: Asanger.

—. 1995. Risk perception research: Review and documentation. In Arbeiten zur Risikokommunikation. Heft 48. Jülich: Forschungszentrum Jülich.

—. 1996. Perception and evaluation of risks: A cross cultural comparison. In Arbeiten zur Risikokommunikation Heft 50. Jülich: Forschungszentrum Jülich.

Rosenhead, J. 1989. Rational Analysis for a Problematic World. Chichester: Wiley.

Rumar, K. 1988. Collective risk but individual safety. Ergonomics 31:507–518.

Rummel, RJ. 1970. Applied Factor Analysis. Evanston, IL: Northwestern University Press.

Ruppert, E. 1987. Gefahrenwahrnehmung—ein Modell zur Anforderungsanalyse für die verhaltensabbhängige Kontrolle von Arbeitsplatzgefahren. Zeitschrift für Arbeitswissenschaft 2:84–87.

Saari, J. 1976. Characteristics of tasks associated with the occurrence of accidents. J Occup Acc 1:273–279.

Saari, J. 1990. On strategies and methods in company safety work: From informational to motivational strategies. J Occup Acc 12:107–117.

Saari, J and M Näsänen. 1989. The effect of positive feedback on industrial housekeeping and accidents: A long-term study at a shipyard. Int J Ind Erg 4(3):201–211.

Sarkis, H. 1990. What really causes accidents. Presentation at Wausau Insurance Safety Excellence Seminar. Canandaigua, NY, US, June 1990.

Sass, R. 1989. The implications of work organization for occupational health policy: The case of Canada. Int J Health Serv 19(1):157–173.

Savage, LJ. 1954. The Foundations of Statistics. New York: Wiley.

Schäfer, RE. 1978. What Are We Talking About When We Talk About “Risk”? A Critical Survey of Risk and Risk Preferences Theories. R.M.-78-69. Laxenburg, Austria: International Institute for Applied Systems Analysis.

Schein, EH. 1989. Organizational Culture and Leadership. San Francisco: Jossey-Bass.

Schneider, B. 1975a. Organizational climates: An essay. Pers Psychol 28:447–479.

—. 1975b. Organizational climate: Individual preferences and organizational realities revisited. J Appl Psychol 60:459–465.

Schneider, B and AE Reichers. 1983. On the etiology of climates. Pers Psychol 36:19–39.

Schneider, B, JJ Parkington and VM Buxton. 1980. Employee and customer perception of service in banks. Adm Sci Q 25:252–267.

Shannon, HS, V Walters, W Lewchuk, J Richardson, D Verma, T Haines and LA Moran. 1992. Health and safety approaches in the workplace. Unpublished report. Toronto: McMaster University.

Short, JF. 1984. The social fabric at risk: Toward the social transformation of risk analysis. Am Sociol Rev 49:711–725.

Simard, M. 1988. La prise de risque dans le travail: un phénomène organisationnel. In La prise de risque dans le travail, edited by P Goguelin and X Cuny. Marseille: Editions Octares.

Simard, M and A Marchand. 1994. The behaviour of first-line supervisors in accident prevention and effectiveness in occupational safety. Saf Sci 19:169–184.

Simard, M and A Marchand. 1995. L’adaptation des superviseurs à la gestion participative de la prévention des accidents. Relations Industrielles 50:567–589.

Simon, HA. 1959. Theories of decision making in economics and behavioural science. Am Econ Rev 49:253–283.

Simon, HA et al. 1992. Decision making and problem solving. In Decision Making: Alternatives to Rational Choice Models, edited by M Zev. London: Sage.

Simonds, RH and Y Shafai-Sahrai. 1977. Factors apparently affecting the injury frequency in eleven matched pairs of companies. J Saf Res 9(3):120–127.

Slovic, P. 1987. Perception of risk. Science 236:280–285.

—. 1993. Perceptions of environmental hazards: Psychological perspectives. In Behaviour and Environment, edited by GE Stelmach and PA Vroon. Amsterdam: North Holland.

Slovic, P, B Fischhoff and S Lichtenstein. 1980. Perceived risk. In Societal Risk Assessment: How Safe Is Safe Enough?, edited by RC Schwing and WA Albers Jr. New York: Plenum Press.

—. 1984. Behavioural decision theory perspectives on risk and safety. Acta Psychol 56:183–203.

Slovic, P, H Kunreuther and GF White. 1974. Decision processes, rationality, and adjustment to natural hazards. In Natural Hazards, Local, National and Global, edited by GF White. New York: Oxford University Press.

Smith, MJ, HH Cohen, A Cohen and RJ Cleveland. 1978. Characteristics of successful safety programs. J Saf Res 10:5–15.

Smith, RB. 1993. Construction industry profile: Getting to the bottom of high accident rates. Occup Health Saf June:35–39.

Smith, TA. 1989. Why you should put your safety program under statistical control. Prof Saf 34(4):31–36.

Starr, C. 1969. Social benefit vs. technological risk. Science 165:1232–1238.

Sulzer-Azaroff, B. 1978. Behavioral ecology and accident prevention. J Organ Behav Manage 2:11–44.

Sulzer-Azaroff, B and D Fellner. 1984. Searching for performance targets in the behavioral analysis of occupational health and safety: An assessment strategy. J Organ Behav Manage 6(2):53–65.

Sulzer-Azaroff, B, TC Harris and KB McCann. 1994. Beyond training: Organizational performance management techniques. Occup Med: State Art Rev 9(2):321–339.

Swain, AD and HE Guttmann. 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. Sandia National Laboratories, NUREG/CR-1278, Washington, DC: US Nuclear Regulatory Commission.

Taylor, DH. 1981. The hermeneutics of accidents and safety. Ergonomics 24:487–495.

Thompson, JD and A Tuden. 1959. Strategies, structures and processes of organizational decisions. In Comparative Studies in Administration, edited by JD Thompson, PB Hammond, RW Hawkes, BH Junker, and A Tuden. Pittsburgh: Pittsburgh University Press.

Trimpop, RM. 1994. The Psychology of Risk Taking Behavior. Amsterdam: Elsevier.

Tuohy, C and M Simard. 1992. The impact of joint health and safety committees in Ontario and Quebec. Unpublished report, Canadian Association of Administrators of Labour Laws, Ottawa.

Tversky, A and D Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211:453–458.

Vlek, C and G Cvetkovich. 1989. Social Decision Methodology for Technological Projects. Dordrecht, Holland: Kluwer.

Vlek, CAJ and PJ Stallen. 1980. Rational and personal aspects of risk. Acta Psychol 45:273–300.

von Neumann, J and O Morgenstern. 1947. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.

von Winterfeldt, D and W Edwards. 1984. Patterns of conflict about risky technologies. Risk Anal 4:55–68.

von Winterfeldt, D, RS John and K Borcherding. 1981. Cognitive components of risk ratings. Risk Anal 1:277–287.

Wagenaar, W. 1990. Risk evaluation and causes of accidents. Ergonomics 33(10/11).

Wagenaar, WA. 1992. Risk taking and accident causation. In Risk-taking Behaviour, edited by JF Yates. Chichester: Wiley.

Wagenaar, W, J Groeneweg, PTW Hudson and JT Reason. 1994. Promoting safety in the oil industry. Ergonomics 37(12):1999–2013.

Walton, RE. 1986. From control to commitment in the workplace. Harvard Bus Rev 63:76–84.

Wilde, GJS. 1986. Beyond the concept of risk homeostasis: Suggestions for research and application towards the prevention of accidents and lifestyle-related disease. Accident Anal Prev 18:377–401.

—. 1993. Effects of mass media communications on health and safety habits: An overview of issues and evidence. Addiction 88:983–996.

—. 1994. Risk homeostasis theory and its promise for improved safety. In Challenges to Accident Prevention: The Issue of Risk Compensation Behaviour, edited by R Trimpop and GJS Wilde. Groningen, The Netherlands: STYX Publications.

Yates, JF. 1992a. The risk construct. In Risk Taking Behaviour, edited by JF Yates. Chichester: Wiley.

—. 1992b. Risk Taking Behaviour. Chichester: Wiley.

Yates, JF and ER Stone. 1992. The risk construct. In Risk Taking Behaviour, edited by JF Yates. Chichester: Wiley.

Zembroski, EL. 1991. Lessons learned from man-made catastrophes. In Risk Management. New York: Hemisphere.

Zey, M. 1992. Decision Making: Alternatives to Rational Choice Models. London: Sage.

Zimolong, B. 1985. Hazard perception and risk estimation in accident causation. In Trends in Ergonomics/Human Factors II, edited by RB Eberts and CG Eberts. Amsterdam: Elsevier.

Zimolong, B. 1992. Empirical evaluation of THERP, SLIM and ranking to estimate HEPs. Reliab Eng Sys Saf 35:1–11.

Zimolong, B and R Trimpop. 1994. Managing human reliability in advanced manufacturing systems. In Design of Work and Development of Personnel in Advanced Manufacturing Systems, edited by G Salvendy and W Karwowski. New York: Wiley.

Zohar, D. 1980. Safety climate in industrial organizations: Theoretical and applied implications. J Appl Psychol 65(1):96–102.

Zuckerman, M. 1979. Sensation Seeking: Beyond the Optimal Level of Arousal. Hillsdale: Lawrence Erlbaum.