27. Biological Monitoring
Chapter Editor: Robert Lauwerys
Table of Contents
General Principles
Vito Foà and Lorenzo Alessio
Quality Assurance
D. Gompertz
Metals and Organometallic Compounds
P. Hoet and Robert Lauwerys
Organic Solvents
Masayuki Ikeda
Genotoxic Chemicals
Marja Sorsa
Pesticides
Marco Maroni and Adalberto Ferioli
1. ACGIH, DFG & other limit values for metals
2. Examples of chemicals & biological monitoring
3. Biological monitoring for organic solvents
4. Genotoxicity of chemicals evaluated by IARC
5. Biomarkers & some cell/tissue samples & genotoxicity
6. Human carcinogens, occupational exposure & cytogenetic end points
8. Exposure from production & use of pesticides
9. Acute OP toxicity at different levels of AChE inhibition
10. Variations of AChE & PChE & selected health conditions
11. Cholinesterase activities of unexposed healthy people
12. Urinary alkyl phosphates & OP pesticides
13. Urinary alkyl phosphates measurements & OP
14. Urinary carbamate metabolites
15. Urinary dithiocarbamate metabolites
16. Proposed indices for biological monitoring of pesticides
17. Recommended biological limit values (as of 1996)
28. Epidemiology and Statistics
Chapter Editors: Franco Merletti, Colin L. Soskolne and Paolo Vineis
Epidemiological Method Applied to Occupational Health and Safety
Franco Merletti, Colin L. Soskolne and Paolo Vineis
Exposure Assessment
M. Gerald Ott
Summary Worklife Exposure Measures
Colin L. Soskolne
Measuring Effects of Exposures
Shelia Hoar Zahm
Case Study: Measures
Franco Merletti, Colin L. Soskolne and Paolo Vineis
Options in Study Design
Sven Hernberg
Validity Issues in Study Design
Annie J. Sasco
Impact of Random Measurement Error
Paolo Vineis and Colin L. Soskolne
Statistical Methods
Annibale Biggeri and Mario Braga
Causality Assessment and Ethics in Epidemiological Research
Paolo Vineis
Case Studies Illustrating Methodological Issues in the Surveillance of Occupational Diseases
Jung-Der Wang
Questionnaires in Epidemiological Research
Steven D. Stellman and Colin L. Soskolne
Asbestos Historical Perspective
Lawrence Garfinkel
1. Five selected summary measures of worklife exposure
2. Measures of disease occurrence
3. Measures of association for a cohort study
4. Measures of association for case-control studies
5. General frequency table layout for cohort data
6. Sample layout of case-control data
7. Layout case-control data - one control per case
8. Hypothetical cohort of 1950 individuals to T2
9. Indices of central tendency & dispersion
10. A binomial experiment & probabilities
11. Possible outcomes of a binomial experiment
12. Binomial distribution, 15 successes/30 trials
13. Binomial distribution, p = 0.25; 30 trials
14. Type II error & power; x = 12, n = 30, α = 0.05
15. Type II error & power; x = 12, n = 40, α = 0.05
16. 632 workers exposed to asbestos 20 years or longer
17. O/E number of deaths among 632 asbestos workers
29. Ergonomics
Chapter Editors: Wolfgang Laurig and Joachim Vedder
Table of Contents
Overview
Wolfgang Laurig and Joachim Vedder
The Nature and Aims of Ergonomics
William T. Singleton
Analysis of Activities, Tasks and Work Systems
Véronique De Keyser
Ergonomics and Standardization
Friedhelm Nachreiner
Checklists
Pranab Kumar Nag
Anthropometry
Melchiorre Masali
Muscular Work
Juhani Smolander and Veikko Louhevaara
Postures at Work
Ilkka Kuorinka
Biomechanics
Frank Darby
General Fatigue
Étienne Grandjean
Fatigue and Recovery
Rolf Helbig and Walter Rohmert
Mental Workload
Winfried Hacker
Vigilance
Herbert Heuer
Mental Fatigue
Peter Richter
Work Organization
Eberhard Ulich and Gudela Grote
Sleep Deprivation
Kazutaka Kogi
Workstations
Roland Kadefors
Tools
T.M. Fraser
Controls, Indicators and Panels
Karl H. E. Kroemer
Information Processing and Design
Andries F. Sanders
Designing for Specific Groups
Joke H. Grady-van den Nieuwboer
Case Study: The International Classification of Functional Limitation in People
Cultural Differences
Houshang Shahnavaz
Elderly Workers
Antoine Laville and Serge Volkoff
Workers with Special Needs
Joke H. Grady-van den Nieuwboer
System Design in Diamond Manufacturing
Issachar Gilad
Disregarding Ergonomic Design Principles: Chernobyl
Vladimir M. Munipov
1. Basic anthropometric core list
2. Fatigue & recovery dependent on activity levels
3. Rules of combination effects of two stress factors on strain
4. Differentiating among several negative consequences of mental strain
5. Work-oriented principles for production structuring
6. Participation in organizational context
7. User participation in the technology process
8. Irregular working hours & sleep deprivation
9. Aspects of advance, anchor & retard sleeps
10. Control movements & expected effects
11. Control-effect relations of common hand controls
12. Rules for arrangement of controls
30. Occupational Hygiene
Chapter Editor: Robert F. Herrick
Table of Contents
Goals, Definitions and General Information
Berenice I. Ferrari Goelzer
Recognition of Hazards
Linnéa Lillienberg
Evaluation of the Work Environment
Lori A. Todd
Occupational Hygiene: Control of Exposures Through Intervention
James Stewart
The Biological Basis for Exposure Assessment
Dick Heederik
Occupational Exposure Limits
Dennis J. Paustenbach
1. Hazards of chemical, biological & physical agents
2. Occupational exposure limits (OELs) - various countries
31. Personal Protection
Chapter Editor: Robert F. Herrick
Table of Contents
Overview and Philosophy of Personal Protection
Robert F. Herrick
Eye and Face Protectors
Kikuzi Kimura
Foot and Leg Protection
Toyohiko Miura
Head Protection
Isabelle Balty and Alain Mayer
Hearing Protection
John R. Franks and Elliott H. Berger
Protective Clothing
S. Zack Mansdorf
Respiratory Protection
Thomas J. Nelson
1. Transmittance requirements (ISO 4850-1979)
2. Scales of protection - gas-welding & braze-welding
3. Scales of protection - oxygen cutting
4. Scales of protection - plasma arc cutting
5. Scales of protection - electric arc welding or gouging
6. Scales of protection - plasma direct arc welding
7. Safety helmet: ISO Standard 3873-1977
8. Noise Reduction Rating of a hearing protector
9. Computing the A-weighted noise reduction
10. Examples of dermal hazard categories
11. Physical, chemical & biological performance requirements
12. Material hazards associated with particular activities
13. Assigned protection factors from ANSI Z88.2 (1992)
32. Record Systems and Surveillance
Chapter Editor: Steven D. Stellman
Table of Contents
Occupational Disease Surveillance and Reporting Systems
Steven B. Markowitz
Occupational Hazard Surveillance
David H. Wegman and Steven D. Stellman
Surveillance in Developing Countries
David Koh and Kee-Seng Chia
Development and Application of an Occupational Injury and Illness Classification System
Elyce Biddle
Risk Analysis of Nonfatal Workplace Injuries and Illnesses
John W. Ruser
Case Study: Worker Protection and Statistics on Accidents and Occupational Diseases - HVBG, Germany
Martin Butz and Burkhard Hoffmann
Case Study: Wismut - A Uranium Exposure Revisited
Heinz Otten and Horst Schulz
Measurement Strategies and Techniques for Occupational Exposure Assessment in Epidemiology
Frank Bochmann and Helmut Blome
Case Study: Occupational Health Surveys in China
1. Angiosarcoma of the liver - world register
2. Occupational illness, US, 1986 versus 1992
3. US Deaths from pneumoconiosis & pleural mesothelioma
4. Sample list of notifiable occupational diseases
5. Illness & injury reporting code structure, US
6. Nonfatal occupational injuries & illnesses, US 1993
7. Risk of occupational injuries & illnesses
8. Relative risk for repetitive motion conditions
9. Workplace accidents, Germany, 1981-93
10. Grinders in metalworking accidents, Germany, 1984-93
11. Occupational disease, Germany, 1980-93
12. Infectious diseases, Germany, 1980-93
13. Radiation exposure in the Wismut mines
14. Occupational diseases in Wismut uranium mines 1952-90
33. Toxicology
Chapter Editor: Ellen K. Silbergeld
Introduction
Ellen K. Silbergeld, Chapter Editor
Definitions and Concepts
Bo Holmberg, Johan Hogberg and Gunnar Johanson
Toxicokinetics
Dušan Djuríc
Target Organ and Critical Effects
Marek Jakubowski
Effects of Age, Sex and Other Factors
Spomenka Telišman
Genetic Determinants of Toxic Response
Daniel W. Nebert and Ross A. McKinnon
Introduction and Concepts
Philip G. Watanabe
Cellular Injury and Cellular Death
Benjamin F. Trump and Irene K. Berezesky
Genetic Toxicology
R. Rita Misra and Michael P. Waalkes
Immunotoxicology
Joseph G. Vos and Henk van Loveren
Target Organ Toxicology
Ellen K. Silbergeld
Biomarkers
Philippe Grandjean
Genetic Toxicity Assessment
David M. DeMarini and James Huff
In Vitro Toxicity Testing
Joanne Zurlo
Structure-Activity Relationships
Ellen K. Silbergeld
Toxicology in Health and Safety Regulation
Ellen K. Silbergeld
Principles of Hazard Identification - The Japanese Approach
Masayuki Ikeda
The United States Approach to Risk Assessment of Reproductive Toxicants and Neurotoxic Agents
Ellen K. Silbergeld
Approaches to Hazard Identification - IARC
Harri Vainio and Julian Wilbourn
Appendix - Overall Evaluations of Carcinogenicity to Humans: IARC Monographs Volumes 1-69 (836)
Carcinogen Risk Assessment: Other Approaches
Cees A. van der Heijden
The identification of carcinogenic risks to humans has been the objective of the IARC Monographs on the Evaluation of Carcinogenic Risks to Humans since 1971. To date, 69 volumes of monographs have been published or are in press, with evaluations of carcinogenicity of 836 agents or exposure circumstances (see Appendix).
These qualitative evaluations of carcinogenic risk to humans are equivalent to the hazard identification phase in the now generally accepted scheme of risk assessment, which involves identification of hazard, dose-response assessment (including extrapolation outside the limits of observations), exposure assessment and risk characterization.
The aim of the IARC Monographs programme has been to publish critical qualitative evaluations on the carcinogenicity to humans of agents (chemicals, groups of chemicals, complex mixtures, physical or biological factors) or exposure circumstances (occupational exposures, cultural habits) through international cooperation in the form of expert working groups. The working groups prepare monographs on a series of individual agents or exposures and each volume is published and widely distributed. Each monograph consists of a brief description of the physical and chemical properties of the agent; methods for its analysis; a description of how it is produced, how much is produced, and how it is used; data on occurrence and human exposure; summaries of case reports and epidemiological studies of cancer in humans; summaries of experimental carcinogenicity tests; a brief description of other relevant biological data, such as toxicity and genetic effects, that may indicate its possible mechanism of action; and an evaluation of its carcinogenicity. The first part of this general scheme is adjusted appropriately when dealing with agents other than chemicals or chemical mixtures.
The guiding principles for evaluating carcinogens have been drawn up by various ad-hoc groups of experts and are laid down in the Preamble to the Monographs (IARC 1994a).
Tools for Qualitative Carcinogenic Risk (Hazard) Identification
Associations are established by examining the available data from studies of exposed humans, the results of bioassays in experimental animals and studies of exposure, metabolism, toxicity and genetic effects in both humans and animals.
Studies of cancer in humans
Three types of epidemiological studies contribute to an assessment of carcinogenicity: cohort studies, case-control studies and correlation (or ecological) studies. Case reports of cancer may also be reviewed.
Cohort and case-control studies relate individual exposures under study to the occurrence of cancer in individuals and provide an estimate of relative risk (ratio of the incidence in those exposed to the incidence in those not exposed) as the main measure of association.
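As a minimal sketch (in Python, with purely hypothetical counts), the two measures of association can be computed directly; the odds ratio shown is the usual estimator of the relative risk in case-control studies:

def relative_risk(cases_exp, n_exp, cases_unexp, n_unexp):
    # Ratio of the incidence among the exposed to that among the unexposed.
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

def odds_ratio(a, b, c, d):
    # Cross-product ratio of a 2x2 case-control table:
    # a = exposed cases, b = exposed controls,
    # c = unexposed cases, d = unexposed controls.
    return (a * d) / (b * c)

# Hypothetical cohort: 40 of 1,000 exposed and 10 of 1,000 unexposed develop the cancer.
print(relative_risk(40, 1000, 10, 1000))  # 4.0
print(odds_ratio(40, 960, 10, 990))       # about 4.1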
In correlation studies, the units of investigation are usually whole populations (e.g., particular geographical areas) and cancer frequency is related to a summary measure of the exposure of the population to the agent. Because individual exposure is not documented, a causal relationship is less easy to infer from such studies than from cohort and case-control studies. Case reports generally arise from a suspicion, based on clinical experience, that the concurrence of two events—that is, a particular exposure and occurrence of a cancer—has happened rather more frequently than would be expected by chance. The uncertainties surrounding the interpretation of case reports and correlation studies make them inadequate, except in rare cases, to form the sole basis for inferring a causal relationship.
In the interpretation of epidemiological studies, it is necessary to take into account the possible roles of bias and confounding. By bias is meant the operation of factors in study design or execution that lead erroneously to a stronger or weaker association than in fact exists between disease and an agent. By confounding is meant a situation in which the relationship with disease is made to appear stronger or weaker than it truly is as a result of an association between the apparent causal factor and another factor that is associated with either an increase or decrease in the incidence of the disease.
In the assessment of the epidemiological studies, a strong association (i.e., a large relative risk) is more likely to indicate causality than a weak association, although it is recognized that relative risks of small magnitude do not imply lack of causality and may be important if the disease is common. Associations that are replicated in several studies of the same design or using different epidemiological approaches or under different circumstances of exposure are more likely to represent a causal relationship than isolated observations from single studies. An increase in risk of cancer with increasing amounts of exposure is considered to be a strong indication of causality, although the absence of a graded response is not necessarily evidence against a causal relationship. Demonstration of a decline in risk after cessation of or reduction in exposure in individuals or in whole populations also supports a causal interpretation of the findings.
When several epidemiological studies show little or no indication of an association between an exposure and cancer, the judgement may be made that, in the aggregate, they show evidence suggesting lack of carcinogenicity. The possibility that bias, confounding or misclassification of exposure or outcome could explain the observed results must be considered and excluded with reasonable certainty. Evidence suggesting lack of carcinogenicity obtained from several epidemiological studies can apply only to those type(s) of cancer, dose levels and intervals between first exposure and observation of disease that were studied. For some human cancers, the period between first exposure and the development of clinical disease is seldom less than 20 years; latent periods substantially shorter than 30 years cannot provide evidence suggesting lack of carcinogenicity.
The evidence relevant to carcinogenicity from studies in humans is classified into one of the following categories:
Sufficient evidence of carcinogenicity. A causal relationship has been established between exposure to the agent, mixture or exposure circumstance and human cancer. That is, a positive relationship has been observed between the exposure and cancer in studies in which chance, bias and confounding could be ruled out with reasonable confidence.
Limited evidence of carcinogenicity. A positive association has been observed between exposure to the agent, mixture or exposure circumstance and cancer for which a causal interpretation is considered to be credible, but chance, bias or confounding cannot be ruled out with reasonable confidence.
Inadequate evidence of carcinogenicity. The available studies are of insufficient quality, consistency or statistical power to permit a conclusion regarding the presence or absence of a causal association, or no data on cancer in humans are available.
Evidence suggesting lack of carcinogenicity. There are several adequate studies covering the full range of levels of exposure that human beings are known to encounter, which are mutually consistent in not showing a positive association between exposure to the agent and the studied cancer at any observed level of exposure. A conclusion of “evidence suggesting lack of carcinogenicity” is inevitably limited to the cancer sites, conditions and levels of exposure and length of observation covered by the available studies.
The applicability of an evaluation of the carcinogenicity of a mixture, process, occupation or industry on the basis of evidence from epidemiological studies depends on time and place. The specific exposure, process or activity considered most likely to be responsible for any excess risk should be sought and the evaluation focused as narrowly as possible. The long latent period of human cancer complicates the interpretation of epidemiological studies. A further complication is the fact that humans are exposed simultaneously to a variety of chemicals, which can interact either to increase or decrease the risk for neoplasia.
Studies on carcinogenicity in experimental animals
Studies in which experimental animals (usually mice and rats) are exposed to potential carcinogens and examined for evidence of cancer were introduced about 50 years ago with the aim of introducing a scientific approach to the study of chemical carcinogenesis and to avoid some of the disadvantages of using only epidemiological data in humans. In the IARC Monographs all available, published studies of carcinogenicity in animals are summarized, and the degree of evidence of carcinogenicity is then classified into one of the following categories:
Sufficient evidence of carcinogenicity. A causal relationship has been established between the agent or mixture and an increased incidence of malignant neoplasms or of an appropriate combination of benign and malignant neoplasms in two or more species of animals or in two or more independent studies in one species carried out at different times or in different laboratories or under different protocols. Exceptionally, a single study in one species might be considered to provide sufficient evidence of carcinogenicity when malignant neoplasms occur to an unusual degree with regard to incidence, site, type of tumour or age at onset.
Limited evidence of carcinogenicity. The data suggest a carcinogenic effect but are limited for making a definitive evaluation because, for example, (a) the evidence of carcinogenicity is restricted to a single experiment; or (b) there are some unresolved questions regarding the adequacy of the design, conduct or interpretation of the study; or (c) the agent or mixture increases the incidence only of benign neoplasms or lesions of uncertain neoplastic potential, or of certain neoplasms which may occur spontaneously in high incidences in certain strains.
Inadequate evidence of carcinogenicity. The studies cannot be interpreted as showing either the presence or absence of a carcinogenic effect because of major qualitative or quantitative limitations, or no data on cancer in experimental animals are available.
Evidence suggesting lack of carcinogenicity. Adequate studies involving at least two species are available which show that, within the limits of the tests used, the agent or mixture is not carcinogenic. A conclusion of evidence suggesting lack of carcinogenicity is inevitably limited to the species, tumour sites and levels of exposure studied.
Other data relevant to an evaluation of carcinogenicity
Data on biological effects in humans that are of particular relevance include toxicological, kinetic and metabolic considerations and evidence of DNA binding, persistence of DNA lesions or genetic damage in exposed humans. Toxicological information, such as that on cytotoxicity and regeneration, receptor binding and hormonal and immunological effects, and data on kinetics and metabolism in experimental animals are summarized when considered relevant to the possible mechanism of the carcinogenic action of the agent. The results of tests for genetic and related effects are summarized for whole mammals including man, cultured mammalian cells and nonmammalian systems. Structure-activity relationships are mentioned when relevant.
For the agent, mixture or exposure circumstance being evaluated, the available data on end-points or other phenomena relevant to mechanisms of carcinogenesis from studies in humans, experimental animals and tissue and cell test systems are summarized within one or more of the following descriptive dimensions: evidence of genotoxicity (structural changes at the level of the gene); evidence of effects on the expression of relevant genes (functional changes at the intracellular level); evidence of relevant effects on cell behaviour (morphological or behavioural changes at the cellular or tissue level); and evidence from dose and time relationships of carcinogenic effects and interactions between agents.
These dimensions are not mutually exclusive, and an agent may fall within more than one. Thus, for example, the action of an agent on the expression of relevant genes could be summarized under both the first and second dimension, even if it were known with reasonable certainty that those effects resulted from genotoxicity.
Overall evaluations
Finally, the body of evidence is considered as a whole, in order to reach an overall evaluation of the carcinogenicity to humans of an agent, mixture or circumstance of exposure. An evaluation may be made for a group of chemicals when supporting data indicate that other, related compounds for which there is no direct evidence of capacity to induce cancer in humans or in animals may also be carcinogenic; in such cases, a statement describing the rationale for this conclusion is added to the evaluation narrative.
The agent, mixture or exposure circumstance is described according to the wording of one of the following categories, and the designated group is given. The categorization of an agent, mixture or exposure circumstance is a matter of scientific judgement, reflecting the strength of the evidence derived from studies in humans and in experimental animals and from other relevant data.
Group 1
The agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans.
This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent (mixture) may be placed in this category when evidence in humans is less than sufficient but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent (mixture) acts through a relevant mechanism of carcinogenicity.
Group 2
This category includes agents, mixtures and exposure circumstances for which, at one extreme, the degree of evidence of carcinogenicity in humans is almost sufficient, as well as those for which, at the other extreme, there are no human data but for which there is evidence of carcinogenicity in experimental animals. Agents, mixtures and exposure circumstances are assigned to either group 2A (probably carcinogenic to humans) or group 2B (possibly carcinogenic to humans) on the basis of epidemiological and experimental evidence of carcinogenicity and other relevant data.
Group 2A. The agent (mixture) is probably carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans. This category is used when there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. In some cases, an agent (mixture) may be classified in this category when there is inadequate evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals and strong evidence that the carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent, mixture or exposure circumstance may be classified in this category solely on the basis of limited evidence of carcinogenicity in humans.
Group 2B. The agent (mixture) is possibly carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans. This category is used for agents, mixtures and exposure circumstances for which there is limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals. It may also be used when there is inadequate evidence of carcinogenicity in humans but there is sufficient evidence of carcinogenicity in experimental animals. In some instances, an agent, mixture or exposure circumstance for which there is inadequate evidence of carcinogenicity in humans but limited evidence of carcinogenicity in experimental animals together with supporting evidence from other relevant data may be placed in this group.
Group 3
The agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans. This category is used most commonly for agents, mixtures and exposure circumstances for which the evidence of carcinogenicity is inadequate in humans and inadequate or limited in experimental animals.
Exceptionally, agents (mixtures) for which the evidence of carcinogenicity is inadequate in humans but sufficient in experimental animals may be placed in this category when there is strong evidence that the mechanism of carcinogenicity in experimental animals does not operate in humans.
Group 4
The agent (mixture) is probably not carcinogenic to humans. This category is used for agents or mixtures for which there is evidence suggesting lack of carcinogenicity in humans and in experimental animals. In some instances, agents or mixtures for which there is inadequate evidence of carcinogenicity in humans but evidence suggesting lack of carcinogenicity in experimental animals, consistently and strongly supported by a broad range of other relevant data, may be classified in this group.
No classification system devised by humans is perfect enough to encompass all the complex entities of biology. Classification systems are, however, useful as guiding principles and may be modified as new knowledge of carcinogenesis becomes more firmly established. In the categorization of an agent, mixture or exposure circumstance, it is essential to rely on scientific judgements formulated by the group of experts.
Results to Date
To date, 69 volumes of IARC Monographs have been published or are in press, in which evaluations of carcinogenicity to humans have been made for 836 agents or exposure circumstances. Seventy-four agents or exposures have been evaluated as carcinogenic to humans (Group 1), 56 as probably carcinogenic to humans (Group 2A), 225 as possibly carcinogenic to humans (Group 2B) and one as probably not carcinogenic to humans (Group 4). For 480 agents or exposures, the available epidemiological and experimental data did not allow an evaluation of their carcinogenicity to humans (Group 3).
Importance of Mechanistic Data
The revised Preamble, which first appeared in volume 54 of the IARC Monographs, allows for the possibility that an agent for which epidemiological evidence of cancer is less than sufficient can be placed in Group 1 when there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity. Conversely, an agent for which there is inadequate evidence of carcinogenicity in humans together with sufficient evidence in experimental animals and strong evidence that the mechanism of carcinogenesis does not operate in humans may be placed in Group 3 instead of the normally assigned Group 2B—possibly carcinogenic to humans—category.
The use of such data on mechanisms has been discussed on three recent occasions:
While it is generally accepted that solar radiation is carcinogenic to humans (Group 1), epidemiological studies on cancer in humans for UVA and UVB radiation from sun lamps provide only limited evidence of carcinogenicity. Specific tandem base substitutions (CC→TT) have been observed in the p53 tumour-suppressor gene in squamous-cell tumours at sun-exposed sites in humans. Although ultraviolet radiation can induce similar transitions in some experimental systems and UVB, UVA and UVC are carcinogenic in experimental animals, the available mechanistic data were not considered strong enough to allow the working group to classify UVB, UVA and UVC higher than Group 2A (IARC 1992). In a study published after the meeting (Kress et al. 1992), CC→TT transitions in p53 were demonstrated in UVB-induced skin tumours in mice, which might suggest that UVB should also be classified as carcinogenic to humans (Group 1).
The second case in which the possibility of placing an agent in Group 1 in the absence of sufficient epidemiological evidence was considered was 4,4´-methylene-bis(2-chloroaniline) (MOCA). MOCA is carcinogenic in dogs and rodents and is comprehensively genotoxic. It binds to DNA through reaction with N-hydroxy MOCA and the same adducts that are formed in target tissues for carcinogenicity in animals have been found in urothelial cells from a small number of exposed humans. After lengthy discussions on the possibility of an upgrading, the working group finally made an overall evaluation of Group 2A, probably carcinogenic to humans (IARC 1993).
During a recent evaluation of ethylene oxide (IARC 1994b), the available epidemiological studies provided limited evidence of carcinogenicity in humans, and studies in experimental animals provided sufficient evidence of carcinogenicity. Taking into account the other relevant data that (1) ethylene oxide induces a sensitive, persistent, dose-related increase in the frequency of chromosomal aberrations and sister chromatid exchanges in peripheral lymphocytes and micronuclei in bone-marrow cells from exposed workers; (2) it has been associated with malignancies of the lymphatic and haematopoietic system in both humans and experimental animals; (3) it induces a dose-related increase in the frequency of haemoglobin adducts in exposed humans and dose-related increases in the numbers of adducts in both DNA and haemoglobin in exposed rodents; (4) it induces gene mutations and heritable translocations in germ cells of exposed rodents; and (5) it is a powerful mutagen and clastogen at all phylogenetic levels; ethylene oxide was classified as carcinogenic to humans (Group 1).
In the case where the Preamble allows for the possibility that an agent for which there is sufficient evidence of carcinogenicity in animals can be placed in Group 3 (instead of Group 2B, in which it would normally be categorized) when there is strong evidence that the mechanism of carcinogenicity in animals does not operate in humans, this possibility has not yet been used by any working group. Such a possibility could have been envisaged in the case of d-limonene had there been sufficient evidence of its carcinogenicity in animals, since there are data suggesting that α2u-globulin accumulation in the male rat kidney is linked to the renal tumours observed.
Among the many chemicals nominated as priorities by an ad-hoc working group in December 1993, certain common postulated intrinsic mechanisms of action emerged, and certain classes of agents were identified on the basis of their biological properties. The working group recommended that before such agents as peroxisome proliferators, fibres, dusts and thyrostatic agents are evaluated within the Monographs programme, special ad-hoc groups should be convened to discuss the latest state of the art on their particular mechanisms of action.
Workplace exposure assessment is concerned with identifying and evaluating agents with which a worker may come in contact, and exposure indices can be constructed to reflect the amount of an agent present in the general environment or in inhaled air, as well as to reflect the amount of agent that is actually inhaled, swallowed or otherwise absorbed (the intake). Other indices include the amount of agent that is resorbed (the uptake) and the exposure at the target organ. Dose is a pharmacological or toxicological term used to indicate the amount of a substance administered to a subject. Dose rate is the amount administered per unit of time. The dose of a workplace exposure is difficult to determine in a practical situation, since physical and biological processes, such as inhalation and the uptake and distribution of an agent in the human body, cause exposure and dose to have complex, non-linear relationships. The uncertainty about the actual level of exposure to agents also makes it difficult to quantify relationships between exposure and health effects.
For many occupational exposures there exists a time window during which the exposure or dose is most relevant to the development of a particular health-related problem or symptom. Hence, the biologically relevant exposure, or dose, would be that exposure which occurs during the relevant time window. Some exposures to occupational carcinogens are believed to have such a relevant time window of exposure. Cancer is a disease with a long latency period, and hence it could be that the exposure which is related to the ultimate development of the disease took place many years before the cancer actually manifested itself. This phenomenon is counter-intuitive, since one would have expected that cumulative exposure over a working lifetime would have been the relevant parameter. The exposure at the time of manifestation of disease may not be of particular importance.
The pattern of exposure—continuous exposure, intermittent exposure and exposure with or without sharp peaks—may be relevant as well. Taking exposure patterns into account is important for both epidemiological studies and for environmental measurements which may be used to monitor compliance with health standards or for environmental control as part of control and prevention programmes. For example, if a health effect is caused by peak exposures, such peak levels must be monitorable in order to be controlled. Monitoring which provides data only about long-term average exposures is not useful since the peak excursion values may well be masked by averaging, and certainly cannot be controlled as they occur.
The biologically relevant exposure or dose for a certain endpoint is often not known because the patterns of intake, uptake, distribution and elimination, or the mechanisms of biotransformation, are not understood in sufficient detail. Both the rate at which an agent enters and leaves the body (the kinetics) and the biochemical processes for handling the substance (biotransformation) will help determine the relationships between exposure, dose and effect.
Environmental monitoring is the measurement and assessment of agents at the workplace to evaluate ambient exposure and related health risks. Biological monitoring is the measurement and assessment of workplace agents or their metabolites in tissue, secreta or excreta to evaluate exposure and assess health risks. Sometimes biomarkers, such as DNA-adducts, are used as measures of exposure. Biomarkers may also be indicative of the mechanisms of the disease process itself, but this is a complex subject, which is covered more fully in the chapter Biological Monitoring and later in the discussion here.
A simplification of the basic model in exposure-response modelling is as follows:
exposure → uptake → distribution, elimination, transformation → target dose → physiopathology → effect
Depending on the agent, exposure-uptake and exposure-intake relationships can be complex. For many gases, simple approximations can be made, based on the concentration of the agent in the air during the course of a working day and on the amount of air that is inhaled. For dust sampling, deposition patterns are also related to particle size. Size considerations may also lead to a more complex relationship. The chapter Respiratory System provides more detail on the aspect of respiratory toxicity.
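For the simple gas approximation just mentioned, a minimal sketch (with illustrative, not measured, values) is:

# Intake estimated from the air concentration and the volume of air inhaled
# over the working day; all values here are hypothetical illustrations.
conc_mg_m3 = 2.0   # average concentration of the gas in workplace air (mg/m3)
vent_m3_h = 1.25   # breathing rate during moderate work (about 10 m3 per 8-h shift)
shift_h = 8.0      # duration of the working day (h)

intake_mg = conc_mg_m3 * vent_m3_h * shift_h
print(intake_mg)   # 20.0 mg inhaled per shift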
Exposure and dose assessment are elements of quantitative risk assessment. Health risk assessment methods often form the basis upon which exposure limits are established for emission levels of toxic agents in the air for environmental as well as for occupational standards. Health risk analysis provides an estimate of the probability (risk) of occurrence of specific health effects or an estimate of the number of cases with these health effects. By means of health risk analysis an acceptable concentration of a toxicant in air, water or food can be provided, given an a priori chosen acceptable magnitude of risk. Quantitative risk analysis has found an application in cancer epidemiology, which explains the strong emphasis on retrospective exposure assessment. But applications of more elaborate exposure assessment strategies can be found in both retrospective and prospective exposure assessment, and exposure assessment principles have found applications in studies focused on other endpoints as well, such as benign respiratory disease (Wegman et al. 1992; Post et al. 1994). Two directions in research predominate at this moment. One uses dose estimates obtained from exposure monitoring information, and the other relies on biomarkers as measures of exposure.
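As an illustration of how an acceptable concentration follows from an a priori chosen acceptable risk, the following sketch assumes a linear, no-threshold dose-response with a hypothetical unit risk; both numbers are invented for the example:

unit_risk = 2.0e-5        # hypothetical excess lifetime risk per ug/m3 of lifetime exposure
acceptable_risk = 1.0e-6  # a priori chosen acceptable lifetime risk

acceptable_conc = acceptable_risk / unit_risk
print(acceptable_conc)    # 0.05 ug/m3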
Exposure Monitoring and Prediction of Dose
Unfortunately, for many exposures few quantitative data are available for predicting the risk for developing a certain endpoint. As early as 1924, Haber postulated that the severity of the health effect (H) is proportional to the product of exposure concentration (X) and time of exposure (T):
H = X × T
Haber’s law, as it is called, formed the basis for development of the concept that time-weighted average (TWA) exposure measurements—that is, measurements taken and averaged over a certain period of time—would be a useful measure for the exposure. This assumption about the adequacy of the time-weighted average has been questioned for many years. In 1952, Adams and co-workers stated that “there is no scientific basis for the use of the time-weighted average to integrate varying exposures …” (in Atherly 1985). The problem is that many relations are more complex than the relationship that Haber’s law represents. There are many examples of agents where the effect is more strongly determined by concentration than by length of time. For example, interesting evidence from laboratory studies has shown that in rats exposed to carbon tetrachloride, the pattern of exposure (continuous versus intermittent and with or without peaks) as well as the dose can modify the observed risk of the rats developing liver enzyme level changes (Bogers et al. 1987). Another example is bio-aerosols, such as α-amylase enzyme, a dough improver, which can cause allergic diseases in people who work in the bakery industry (Houba et al. 1996). It is unknown whether the risk of developing such a disease is mainly determined by peak exposures, average exposure, or the cumulative level of exposure (Wong 1987; Checkoway and Rice 1992). Information on temporal patterns is not available for most agents, especially not for agents that have chronic effects.
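The masking of peaks by averaging can be illustrated with a small sketch; the two exposure profiles below are hypothetical 15-minute readings over one 8-hour shift:

profile_steady = [10.0] * 32                  # constant exposure all day
profile_peaky = [2.0] * 30 + [130.0, 130.0]   # low exposure plus two sharp peaks

def twa(series):
    # Time-weighted average over equally spaced readings.
    return sum(series) / len(series)

for name, series in (("steady", profile_steady), ("peaky", profile_peaky)):
    print(name, "TWA =", twa(series), "peak =", max(series))
# Both profiles yield the same TWA (10.0) despite very different peak levels.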
The first attempts to model exposure patterns and estimate dose were published in the 1960s and 1970s by Roach (1966; 1977). He showed that the concentration of an agent reaches an equilibrium value at the receptor after an exposure of infinite duration because elimination counterbalances the uptake of the agent. In an eight-hour exposure, a value of 90% of this equilibrium level can be reached if the half-life of the agent at the target organ is smaller than approximately two-and-a-half hours. This illustrates that for agents with a short half-life, the dose at the target organ is determined by an exposure shorter than an eight-hour period. Dose at the target organ is a function of the product of exposure time and concentration for agents with a long half-life. A similar but more elaborate approach has been applied by Rappaport (1985). He showed that intra-day variability in exposure has a limited influence when dealing with agents with long half-lives. He introduced the term dampening at the receptor.
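Roach’s observation can be checked with a one-compartment model in which elimination is first order, so that after an exposure of duration t the burden at the receptor reaches the fraction 1 - exp(-ln2 × t/t½) of its equilibrium value; the sketch below assumes that model:

import math

def fraction_of_equilibrium(t_hours, t_half_hours):
    # First-order uptake towards equilibrium under constant exposure.
    return 1.0 - math.exp(-math.log(2) * t_hours / t_half_hours)

for t_half in (0.5, 2.5, 10.0, 100.0):
    f = fraction_of_equilibrium(8.0, t_half)
    print("half-life", t_half, "h:", round(100 * f, 1), "% of equilibrium after 8 h")
# A half-life of 2.5 h reaches about 89% (roughly 90%) of equilibrium within one
# 8-hour shift; for long half-lives the dose instead grows with the product of
# concentration and time.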
The information presented above has mainly been used to draw conclusions on appropriate averaging times for exposure measurements for compliance purposes. Since Roach’s papers it has been common knowledge that for irritants, grab samples with short averaging times have to be taken, while for agents with long half-lives, such as asbestos, the long-term average or cumulative exposure has to be approximated. One should, however, realize that the dichotomization into grab-sample strategies and eight-hour time-weighted average exposure strategies, as adopted in many countries for compliance purposes, is an extremely crude translation of the biological principles discussed above.
An example of improving an exposure assessment strategy based on pharmacokinetic principles in epidemiology can be found in a paper by Wegman et al. (1992). They applied an interesting exposure assessment strategy by using continuous monitoring devices to measure personal dust exposure peak levels and relating these to acute reversible respiratory symptoms occurring every 15 minutes. A conceptual problem in this kind of study, extensively discussed in their paper, is the definition of a health-relevant peak exposure. The definition of a peak will, again, depend on biological considerations. Rappaport (1991) gives two requirements for peak exposures to be of aetiological relevance in the disease process: (1) the agent is eliminated rapidly from the body and (2) there is a non-linear rate of biological damage during a peak exposure. Non-linear rates of biological damage may be related to changes in uptake, which in turn are related to exposure levels, host susceptibility, synergy with other exposures, involvement of other disease mechanisms at higher exposures or threshold levels for disease processes.
These examples also show that pharmacokinetic approaches can lead elsewhere than to dose estimates. The results of pharmacokinetic modelling can also be used to explore the biological relevance of existing indices of exposure and to design new health-relevant exposure assessment strategies.
Pharmacokinetic modelling of the exposure may also generate estimates of the actual dose at the target organ. For instance in the case of ozone, an acute irritant gas, models have been developed which predict the tissue concentration in the airways as a function of the average ozone concentration in the airspace of the lung at a certain distance from the trachea, the radius of the airways, the average air velocity, the effective dispersion, and the ozone flux from air to lung surface (Menzel 1987; Miller and Overton 1989). Such models can be used to predict ozone dose in a particular region of the airways, dependent on environmental ozone concentrations and breathing patterns.
In most cases estimates of target dose are based on information on the exposure pattern over time, job history and pharmacokinetic information on uptake, distribution, elimination and transformation of the agent. The whole process can be described by a set of equations which can be mathematically solved. Often information on pharmacokinetic parameters is not available for humans, and parameter estimates based on animal experiments have to be used. There are several examples by now of the use of pharmacokinetic modelling of exposure in order to generate dose estimates. The first references to modelling of exposure data into dose estimates in the literature go back to the paper of Jahr (1974).
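A minimal numerical sketch of this kind of calculation (a generic one-compartment model, not any published model, with hypothetical parameters) integrates dB/dt = k_in × C(t) - k_out × B over an exposure time-series:

import math

def body_burden(concentrations, dt_h, t_half_h, k_in=1.0):
    # Integrate a one-compartment kinetic model over a concentration series
    # using an explicit Euler step; returns the burden after each interval.
    k_out = math.log(2) / t_half_h
    burden, history = 0.0, []
    for c in concentrations:
        burden += (k_in * c - k_out * burden) * dt_h
        history.append(burden)
    return history

# Hypothetical shift: hourly air concentrations with a single 1-hour peak.
shift = [5, 5, 5, 5, 50, 5, 5, 5]
print([round(b, 1) for b in body_burden(shift, dt_h=1.0, t_half_h=2.5)])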
Although dose estimates have generally not been validated and have found limited application in epidemiological studies, the new generation of exposure or dose indices is expected to result in optimal exposure-response analyses in epidemiological studies (Smith 1985, 1987). A problem not yet tackled in pharmacokinetic modelling is that large interspecies differences exist in kinetics of toxic agents, and therefore effects of intra-individual variation in pharmacokinetic parameters are of interest (Droz 1992).
Biomonitoring and Biomarkers of Exposure
Biological monitoring offers an estimate of dose and therefore is often considered superior to environmental monitoring. However, the intra-individual variability of biomonitoring indices can be considerable. In order to derive an acceptable estimate of a worker’s dose, repeated measurements have to be taken, and sometimes the measurement effort can become larger than for environmental monitoring.
This is illustrated by an interesting study on workers producing boats made of plastic reinforced with glass fibre (Rappaport et al. 1995). The variability of styrene exposure was assessed by measuring styrene in air repeatedly. Styrene in exhaled air of exposed workers was monitored, as well as sister chromatid exchanges (SCEs). They showed that an epidemiological study using styrene in the air as a measure of exposure would be more efficient, in terms of numbers of measurements required, than a study using the other indices of exposure. For styrene in air three repeats were required to estimate the long-term average exposure with a given precision. For styrene in exhaled air, four repeats per worker were necessary, while for the SCEs 20 repeats were necessary. The explanation for this observation is the signal-to-noise ratio, determined by the day-to-day and between-worker variability in exposure, which was more favourable for styrene in air than for the two biomarkers of exposure. Thus, although the biological relevance of a certain exposure surrogate might be optimal, the performance in an exposure-response analysis can still be poor because of a limited signal-to-noise ratio, leading to misclassification error.
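The signal-to-noise argument can be made concrete with the classical attenuation formula: when each worker’s exposure is estimated as the mean of n repeats and the within-worker to between-worker variance ratio is λ, the observed exposure-response slope is attenuated by the factor 1/(1 + λ/n). The variance ratios below are hypothetical, chosen only so that the resulting n match the 3, 4 and 20 repeats reported in the styrene study:

import math

def repeats_needed(lam, attenuation=0.9):
    # Smallest n for which 1 / (1 + lam / n) >= attenuation,
    # i.e. n >= lam * attenuation / (1 - attenuation).
    return math.ceil(lam * attenuation / (1.0 - attenuation))

for marker, lam in (("styrene in air", 0.33),
                    ("styrene in exhaled air", 0.44),
                    ("SCEs", 2.2)):
    print(marker, "->", repeats_needed(lam), "repeats")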
Droz (1991) applied pharmacokinetic modelling to study the advantages of exposure assessment strategies based on air sampling compared with biomonitoring strategies, as a function of the half-life of the agent. He showed that biological monitoring is also greatly affected by biological variability, which is not related to variability of the toxicological test. He suggested that no statistical advantage exists in using biological indicators when the half-life of the agent considered is smaller than about ten hours.
Although one might tend to decide to measure the environmental exposure instead of a biological indicator of an effect because of variability in the variable measured, additional arguments can be found for choosing a biomarker, even when this would lead to a greater measurement effort, such as when a considerable dermal exposure is present. For agents like pesticides and some organic solvents, dermal exposure can be of greater relevance than the exposure through the air. A biomarker of exposure would include this route of exposure, while measurement of dermal exposure is complex and the results are not easily interpretable (Boleij et al. 1995). Early studies among agricultural workers using “pads” to assess dermal exposure showed remarkable distributions of pesticides over the body surface, depending on the tasks of the worker. However, because little information is available on skin uptake, exposure profiles cannot yet be used to estimate an internal dose.
Biomarkers can also have considerable advantages in cancer epidemiology. When a biomarker is an early marker of the effect, its use could result in reduction of the follow-up period. Although validation studies are required, biomarkers of exposure or individual susceptibility could result in more powerful epidemiological studies and more precise risk estimates.
Time Window Analysis
Parallel to the development of pharmacokinetic modelling, epidemiologists have explored new approaches in the data analysis phase, such as “time frame analysis”, to relate relevant exposure periods to endpoints and to incorporate the effects of temporal exposure patterns or peak exposures in occupational cancer epidemiology (Checkoway and Rice 1992). Conceptually this technique is related to pharmacokinetic modelling, since the relationship between exposure and outcome is optimized by putting weights on different exposure periods, exposure patterns and exposure levels. In pharmacokinetic modelling these weights are believed to have a physiological meaning and are estimated beforehand. In time frame analysis the weights are estimated from the data on the basis of statistical criteria. Examples of this approach are given by Hodgson and Jones (1990), who analysed the relationship between radon gas exposure and lung cancer in a cohort of UK tin miners, and by Seixas, Robins and Becker (1993), who analysed the relationship between dust exposure and respiratory health in a cohort of US coal miners. A very interesting study underlining the relevance of time window analysis is the one by Peto et al. (1982).
They showed that mesothelioma death rates appeared to be proportional to some function of time since first exposure and cumulative exposure in a cohort of insulation workers. Time since first exposure was of particular relevance because this variable was an approximation of the time required for a fibre to migrate from its place of deposition in the lungs to the pleura. This example shows how kinetics of deposition and migration determine the risk function to a large extent. A potential problem with time frame analysis is that it requires detailed information on exposure periods and exposure levels, which hampers its application in many studies of chronic disease outcomes.
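A minimal sketch of the time window idea (with a hypothetical lag and window width, not values from any of the studies cited) recomputes cumulative exposure counting only the exposure that falls inside the biologically relevant window:

def windowed_exposure(annual_exposures, years_before_event, lag=10.0, width=20.0):
    # Sum the annual exposures lying between 'lag' and 'lag + width' years
    # before the event (e.g., diagnosis); exposure outside the window gets weight 0.
    total = 0.0
    for years_ago, x in zip(years_before_event, annual_exposures):
        if lag <= years_ago < lag + width:
            total += x
    return total

# Hypothetical work history: one exposure unit in each of the 40 years before diagnosis.
history = [1.0] * 40
years_ago = list(range(1, 41))
print(windowed_exposure(history, years_ago))  # 20.0: only years 10-29 contribute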
Concluding Remarks
In conclusion, the underlying principles of pharmacokinetic modelling and time frame or time window analysis are widely recognized. Knowledge in this area has mainly been used to develop exposure assessment strategies. More elaborate use of these approaches, however, requires a considerable research effort and still has to be developed. The number of applications is therefore still limited. Relatively simple applications, such as the development of more optimal exposure assessment strategies dependent on the endpoint, have found wider use. An important issue in the development of biomarkers of exposure or effect is the validation of these indices. It is often assumed that a measurable biomarker can predict health risk better than traditional methods. Unfortunately, however, very few validation studies substantiate this assumption.
Group 1—Carcinogenic to Humans (74)
Agents and groups of agents
Aflatoxins [1402-68-2] (1993)
4-Aminobiphenyl [92-67-1]
Arsenic [7440-38-2] and arsenic compounds2
Asbestos [1332-21-4]
Azathioprine [446-86-6]
Benzene [71-43-2]
Benzidine [92-87-5]
Beryllium [7440-41-7] and beryllium compounds (1993)3
Bis(2-chloroethyl)-2-naphthylamine (Chlornaphazine) [494-03-1]
Bis(chloromethyl)ether [542-88-1] and chloromethyl methyl ether [107-30-2] (technical-grade)
1,4-Butanediol dimethanesulphonate (Myleran) [55-98-1]
Cadmium [7440-43-9] and cadmium compounds (1993)3
Chlorambucil [305-03-3]
1-(2-Chloroethyl)-3-(4-methylcyclohexyl)-1-nitrosourea (Methyl-CCNU; Semustine) [13909-09-6]
Chromium[VI] compounds (1990)3
Ciclosporin [79217-60-0] (1990)
Cyclophosphamide [50-18-0] [6055-19-2]
Diethylstilboestrol [56-53-1]
Erionite [66733-21-9]
Ethylene oxide4 [75-21-8] (1994)
Helicobacter pylori (infection with) (1994)
Hepatitis B virus (chronic infection with) (1993)
Hepatitis C virus (chronic infection with) (1993)
Human papillomavirus type 16 (1995)
Human papillomavirus type 18 (1995)
Human T-cell lymphotropic virus type I (1996)
Melphalan [148-82-3]
8-Methoxypsoralen (Methoxsalen) [298-81-7] plus ultraviolet A radiation
MOPP and other combined chemotherapy including alkylating agents
Mustard gas (Sulphur mustard) [505-60-2]
2-Naphthylamine [91-59-8]
Nickel compounds (1990)3
Oestrogen replacement therapy
Oestrogens, nonsteroidal2
Oestrogens, steroidal2
Opisthorchis viverrini (infection with) (1994)
Oral contraceptives, combined5
Oral contraceptives, sequential
Radon [10043-92-2] and its decay products (1988)
Schistosoma haematobium (infection with) (1994)
Silica [14808-60-7], crystalline (inhaled in the form of quartz or cristobalite from occupational sources)
Solar radiation (1992)
Talc containing asbestiform fibres
Tamoxifen [10540-29-1]6
Thiotepa [52-24-4] (1990)
Treosulphan [299-75-2]
Vinyl chloride [75-01-4]
Mixtures
Alcoholic beverages (1988)
Analgesic mixtures containing phenacetin
Betel quid with tobacco
Coal-tar pitches [65996-93-2]
Coal-tars [8007-45-2]
Mineral oils, untreated and mildly treated
Salted fish (Chinese-style) (1993)
Shale oils [68308-34-9]
Soots
Tobacco products, smokeless
Tobacco smoke
Wood dust
Exposure circumstances
Aluminium production
Auramine, manufacture of
Boot and shoe manufacture and repair
Coal gasification
Coke production
Furniture and cabinet making
Haematite mining (underground) with exposure to radon
Iron and steel founding
Isopropanol manufacture (strong-acid process)
Magenta, manufacture of (1993)
Painter (occupational exposure as a) (1989)
Rubber industry
Strong-inorganic-acid mists containing sulphuric acid (occupational exposure to) (1992)
Group 2A—Probably carcinogenic to humans (56)
Agents and groups of agents
Acrylamide [79-06-1] (1994)8
Acrylonitrile [107-13-1]
Adriamycin8 [23214-92-8]
Androgenic (anabolic) steroids
Azacitidine8 [320-67-2] (1990)
Benz[a]anthracene8 [56-55-3]
Benzidine-based dyes8
Benzo[a]pyrene8 [50-32-8]
Bischloroethyl nitrosourea (BCNU) [154-93-8]
1,3-Butadiene [106-99-0] (1992)
Captafol [2425-06-1] (1991)
Chloramphenicol [56-75-7] (1990)
1-(2-Chloroethyl)-3-cyclohexyl-1-nitrosourea8 (CCNU) [13010-47-4]
p-Chloro-o-toluidine [95-69-2] and its strong acid salts (1990)3
Chlorozotocin8 [54749-90-5] (1990)
Cisplatin8 [15663-27-1]
Clonorchis sinensis (infection with)8 (1994)
Dibenz[a,h]anthracene8 [53-70-3]
Diethyl sulphate [64-67-5] (1992)
Dimethylcarbamoyl chloride8 [79-44-7]
Dimethyl sulphate8 [77-78-1]
Epichlorohydrin8 [106-89-8]
Ethylene dibromide8 [106-93-4]
N-Ethyl-N-nitrosourea8 [759-73-9]
Formaldehyde [50-00-0]
IQ8 (2-Amino-3-methylimidazo[4,5-f]quinoline) [76180-96-6] (1993)
5-Methoxypsoralen8 [484-20-8]
4,4´-Methylene bis(2-chloroaniline) (MOCA)8 [101-14-4] (1993)
N-Methyl-N´-nitro-N-nitrosoguanidine8 (MNNG) [70-25-7]
N-Methyl-N-nitrosourea8 [684-93-5]
Nitrogen mustard [51-75-2]
N-Nitrosodiethylamine8 [55-18-5]
N-Nitrosodimethylamine8 [62-75-9]
Phenacetin [62-44-2]
Procarbazine hydrochloride8 [366-70-1]
Tetrachloroethylene [127-18-4]
Trichloroethylene [79-01-6]
Styrene-7,8-oxide8 [96-09-3] (1994)
Tris(2,3-dibromopropyl)phosphate8 [126-72-7]
Ultraviolet radiation A8 (1992)
Ultraviolet radiation B8 (1992)
Ultraviolet radiation C8 (1992)
Vinyl bromide6 [593-60-2]
Vinyl fluoride [75-02-5]
Mixtures
Creosotes [8001-58-9]
Diesel engine exhaust (1989)
Hot maté (1991)
Non-arsenical insecticides (occupational exposures in spraying and application of) (1991)
Polychlorinated biphenyls [1336-36-3]
Exposure circumstances
Art glass, glass containers and pressed ware (manufacture of) (1993)
Hairdresser or barber (occupational exposure as a) (1993)
Petroleum refining (occupational exposures in) (1989)
Sunlamps and sunbeds (use of) (1992)
Group 2B—Possibly carcinogenic to humans (225)
Agents and groups of agents
A–α–C (2-Amino-9H-pyrido[2,3-b]indole) [26148-68-5]
Acetaldehyde [75-07-0]
Acetamide [60-35-5]
AF-2 [2-(2-Furyl)-3-(5-nitro-2-furyl)acrylamide] [3688-53-7]
Aflatoxin M1 [6795-23-9] (1993)
p-Aminoazobenzene [60-09-3]
o-Aminoazotoluene [97-56-3]
2-Amino-5-(5-nitro-2-furyl)-1,3,4-thiadiazole [712-68-5]
Amitrole [61-82-5]
o-Anisidine [90-04-0]
Antimony trioxide [1309-64-4] (1989)
Aramite [140-57-8]
Atrazine9 [1912-24-9] (1991)
Auramine [492-80-8] (technical-grade)
Azaserine [115-02-6]
Benzo[b]fluoranthene [205-99-2]
Benzo[j]fluoranthene [205-82-3]
Benzo[k]fluoranthene [207-08-9]
Benzyl violet 4B [1694-09-3]
Bleomycins [11056-06-7]
Bracken fern
Bromodichloromethane [75-27-4] (1991)
Butylated hydroxyanisole (BHA) [25013-16-5]
β-Butyrolactone [3068-88-0]
Caffeic acid [331-39-5] (1993)
Carbon-black extracts
Carbon tetrachloride [56-23-5]
Ceramic fibres
Chlordane [57-74-9] (1991)
Chlordecone (Kepone) [143-50-0]
Chlorendic acid [115-28-6] (1990)
α-Chlorinated toluenes (benzyl chloride, benzal chloride, benzotrichloride)
p-Chloroaniline [106-47-8] (1993)
Chloroform [67-66-3]
1-Chloro-2-methylpropene [513-37-1]
Chlorophenols
Chlorophenoxy herbicides
4-Chloro-o-phenylenediamine [95-83-0]
CI Acid Red 114 [6459-94-5] (1993)
CI Basic Red 9 [569-61-9] (1993)
CI Direct Blue 15 [2429-74-5] (1993)
Citrus Red No. 2 [6358-53-8]
Cobalt [7440-48-4] and cobalt compounds3 (1991)
p-Cresidine [120-71-8]
Cycasin [14901-08-7]
Dacarbazine [4342-03-4]
Dantron (Chrysazin; 1,8-Dihydroxyanthraquinone) [117-10-2] (1990)
Daunomycin [20830-81-3]
DDT [p,p´-DDT, 50-29-3] (1991)
N,N´-Diacetylbenzidine [613-35-4]
2,4-Diaminoanisole [615-05-4]
4,4´-Diaminodiphenyl ether [101-80-4]
2,4-Diaminotoluene [95-80-7]
Dibenz[a,h]acridine [226-36-8]
Dibenz[a,j]acridine [224-42-0]
7H-Dibenzo[c,g]carbazole [194-59-2]
Dibenzo[a,e]pyrene [192-65-4]
Dibenzo[a,h]pyrene [189-64-0]
Dibenzo[a,i]pyrene [189-55-9]
Dibenzo[a,l]pyrene [191-30-0]
1,2-Dibromo-3-chloropropane [96-12-8]
p-Dichlorobenzene [106-46-7]
3,3´-Dichlorobenzidine [91-94-1]
3,3´-Dichloro-4,4´-diaminodiphenyl ether [28434-86-8]
1,2-Dichloroethane [107-06-2]
Dichloromethane (methylene chloride) [75-09-2]
1,3-Dichloropropene [542-75-6] (technical grade)
Dichlorvos [62-73-7] (1991)
Diepoxybutane [1464-53-5]
Di(2-ethylhexyl)phthalate [117-81-7]
1,2-Diethylhydrazine [1615-80-1]
Diglycidyl resorcinol ether [101-90-6]
Dihydrosafrole [94-58-6]
Diisopropyl sulphate [2973-10-6] (1992)
3,3´-Dimethoxybenzidine (o-Dianisidine) [119-90-4]
p-Dimethylaminoazobenzene [60-11-7]
trans-2-[(Dimethylamino)methylimino]-5-[2-(5-nitro-2-furyl)-vinyl]-1,3,4-oxadiazole [25962-77-0]
2,6-Dimethylaniline (2,6-xylidine) [87-62-7] (1993)
3,3´-Dimethylbenzidine (o-tolidine) [119-93-7]
Dimethylformamide [68-12-2] (1989)
1,1-Dimethylhydrazine [57-14-7]
1,2-Dimethylhydrazine [540-73-8]
3,7-Dinitrofluoranthene [105735-71-5]
3,9-Dinitrofluoranthene [22506-53-2]
1,6-Dinitropyrene [42397-64-8] (1989)
1,8-Dinitropyrene [42397-65-9] (1989)
2,4-Dinitrotoluene [121-14-2]
2,6-Dinitrotoluene [606-20-2]
1,4-Dioxane [123-91-1]
Disperse Blue 1 [2475-45-8] (1990)
Ethyl acrylate [140-88-5]
Ethylene thiourea [96-45-7]
Ethyl methanesulphonate [62-50-0]
2-(2-Formylhydrazino)-4-(5-nitro-2-furyl)thiazole [3570-75-0]
Glass wool (1988)
Glu-P-1 (2-amino-6-methyldipyrido[1,2-a:3´,2´-d]imidazole) [67730-11-4]
Glu-P-2 (2-aminodipyrido[1,2-a:3´,2´-d]imidazole) [67730-10-3]
Glycidaldehyde [765-34-4]
Griseofulvin [126-07-8]
HC Blue No. 1 [2784-94-3] (1993)
Heptachlor [76-44-8] (1991)
Hexachlorobenzene [118-74-1]
Hexachlorocyclohexanes
Hexamethylphosphoramide [680-31-9]
Human immunodeficiency virus type 2 (infection with) (1996)
Human papillomaviruses: some types other than 16, 18, 31 and 33 (1995)
Hydrazine [302-01-2]
Indeno[1,2,3-cd]pyrene [193-39-5]
Iron-dextran complex [9004-66-4]
Isoprene [78-79-5] (1994)
Lasiocarpine [303-34-4]
Lead [7439-92-1] and lead compounds, inorganic3
Magenta [632-99-5] (containing CI Basic Red 9) (1993)
MeA-α-C (2-Amino-3-methyl-9H-pyrido[2,3-b]indole) [68006-83-7]
Medroxyprogesterone acetate [71-58-9]
MeIQ (2-Amino-3,4-dimethylimidazo[4,5-f]quinoline) [77094-11-2] (1993)
MeIQx (2-Amino-3,8-dimethylimidazo[4,5-f]quinoxaline) [77500-04-0] (1993)
Merphalan [531-76-0]
2-Methylaziridine (propyleneimine) [75-55-8]
Methylazoxymethanol acetate [592-62-1]
5-Methylchrysene [3697-24-3]
4,4´-Methylene bis(2-methylaniline) [838-88-0]
4,4´-Methylenedianiline [101-77-9]
Methylmercury compounds (1993)3
Methyl methanesulphonate [66-27-3]
2-Methyl-1-nitroanthraquinone [129-15-7] (uncertain purity)
N-Methyl-N-nitrosourethane [615-53-2]
Methylthiouracil [56-04-2]
Metronidazole [443-48-1]
Mirex [2385-85-5]
Mitomycin C [50-07-7]
Monocrotaline [315-22-0]
5-(Morpholinomethyl)-3-[(5-nitrofurfurylidene)amino]-2-oxazolidinone [3795-88-8]
Nafenopin [3771-19-5]
Nickel, metallic [7440-02-0] (1990)
Niridazole [61-57-4]
Nitrilotriacetic acid [139-13-9] and its salts (1990)3
5-Nitroacenaphthene [602-87-9]
2-Nitroanisole [91-23-6] (1996)
Nitrobenzene [98-95-3] (1996)
6-Nitrochrysene [7496-02-8] (1989)
Nitrofen [1836-75-5], technical-grade
2-Nitrofluorene [607-57-8] (1989)
1-[(5-Nitrofurfurylidene)amino]-2-imidazolidinone [555-84-0]
N-[4-(5-Nitro-2-furyl)-2-thiazolyl]acetamide [531-82-8]
Nitrogen mustard N-oxide [126-85-2]
2-Nitropropane [79-46-9]
1-Nitropyrene [5522-43-0] (1989)
4-Nitropyrene [57835-92-4] (1989)
N-Nitrosodi-n-butylamine [924-16-3]
N-Nitrosodiethanolamine [1116-54-7]
N-Nitrosodi-n-propylamine [621-64-7]
3-(N-Nitrosomethylamino)propionitrile [60153-49-3]
4-(N-Nitrosomethylamino)-1-(3-pyridyl)-1-butanone (NNK) [64091-91-4]
N-Nitrosomethylethylamine [10595-95-6]
N-Nitrosomethylvinylamine [4549-40-0]
N-Nitrosomorpholine [59-89-2]
N´-Nitrosonornicotine [16543-55-8]
N-Nitrosopiperidine [100-75-4]
N-Nitrosopyrrolidine [930-55-2]
N-Nitrososarcosine [13256-22-9]
Ochratoxin A [303-47-9] (1993)
Oil Orange SS [2646-17-5]
Oxazepam [604-75-1] (1996)
Palygorskite (attapulgite) [12174-11-7] (long fibres, >5 micrometres) (1997)
Panfuran S (containing dihydroxymethylfuratrizine [794-93-4])
Pentachlorophenol [87-86-5] (1991)
Phenazopyridine hydrochloride [136-40-3]
Phenobarbital [50-06-6]
Phenoxybenzamine hydrochloride [63-92-3]
Phenyl glycidyl ether [122-60-1] (1989)
Phenytoin [57-41-0]
PhIP (2-Amino-1-methyl-6-phenylimidazo[4,5-b]pyridine) [105650-23-5] (1993)
Ponceau MX [3761-53-3]
Ponceau 3R [3564-09-8]
Potassium bromate [7758-01-2]
Progestins
1,3-Propane sultone [1120-71-4]
β-Propiolactone [57-57-8]
Propylene oxide [75-56-9] (1994)
Propylthiouracil [51-52-5]
Rockwool (1988)
Saccharin [81-07-2]
Safrole [94-59-7]
Schistosoma japonicum (infection with) (1994)
Slagwool (1988)
Sodium o-phenylphenate [132-27-4]
Sterigmatocystin [10048-13-2]
Streptozotocin [18883-66-4]
Styrene [100-42-5] (1994)
Sulfallate [95-06-7]
Tetranitromethane [509-14-8] (1996)
Thioacetamide [62-55-5]
4,4´-Thiodianiline [139-65-1]
Thiourea [62-56-6]
Toluene diisocyanates [26471-62-5]
o-Toluidine [95-53-4]
Trichlormethine (Trimustine hydrochloride) [817-09-4] (1990)
Trp-P-1 (3-Amino-1,4-dimethyl-5H-pyrido[4,3-b]indole) [62450-06-0]
Trp-P-2 (3-Amino-1-methyl-5H-pyrido[4,3-b]indole) [62450-07-1]
Trypan blue [72-57-1]
Uracil mustard [66-75-1]
Urethane [51-79-6]
Vinyl acetate [108-05-4] (1995)
4-Vinylcyclohexene [100-40-3] (1994)
4-Vinylcyclohexene diepoxide [107-87-6] (1994)
Mixtures
Bitumens [8052-42-4], extracts of steam-refined and air-refined
Carrageenan [9000-07-1], degraded
Chlorinated paraffins of average carbon chain length C12 and average degree of chlorination approximately 60% (1990)
Coffee (urinary bladder)9 (1991)
Diesel fuel, marine (1989)
Engine exhaust, gasoline (1989)
Fuel oils, residual (heavy) (1989)
Gasoline (1989)
Pickled vegetables (traditional in Asia) (1993)
Polybrominated biphenyls [Firemaster BP-6, 59536-65-1]
Toxaphene (Polychlorinated camphenes) [8001-35-2]
Toxins derived from Fusarium moniliforme (1993)
Welding fumes (1990)
Exposure circumstances
Carpentry and joinery
Dry cleaning (occupational exposures in) (1995)
Printing processes (occupational exposures in) (1996)
Textile manufacturing industry (work in) (1990)
Group 3—Unclassifiable as to carcinogenicity to humans (480)
Agents and groups of agents
Acridine orange [494-38-2]
Acriflavinium chloride [8018-07-3]
Acrolein [107-02-8]
Acrylic acid [79-10-7]
Acrylic fibres
Acrylonitrile-butadiene-styrene copolymers
Actinomycin D [50-76-0]
Aldicarb [116-06-3] (1991)
Aldrin [309-00-2]
Allyl chloride [107-05-1]
Allyl isothiocyanate [57-06-7]
Allyl isovalerate [2835-39-4]
Amaranth [915-67-3]
5-Aminoacenaphthene [4657-93-6]
2-Aminoanthraquinone [117-79-3]
p-Aminobenzoic acid [150-13-0]
1-Amino-2-methylanthraquinone [82-28-0]
2-Amino-4-nitrophenol [99-57-0] (1993)
2-Amino-5-nitrophenol [121-88-0] (1993)
4-Amino-2-nitrophenol [119-34-6]
2-Amino-5-nitrothiazole [121-66-4]
11-Aminoundecanoic acid [2432-99-7]
Ampicillin [69-53-4] (1990)
Anaesthetics, volatile
Angelicin [523-50-2] plus ultraviolet A radiation
Aniline [62-53-3]
p-Anisidine [104-94-9]
Anthanthrene [191-26-4]
Anthracene [120-12-7]
Anthranilic acid [118-92-3]
Antimony trisulphide [1345-04-6] (1989)
Apholate [52-46-0]
p-Aramid fibrils [24938-64-5] (1997)
Aurothioglucose [12192-57-3]
Aziridine [151-56-4]
2-(1-Aziridinyl)ethanol [1072-52-2]
Aziridyl benzoquinone [800-24-8]
Azobenzene [103-33-3]
Benz[a]acridine [225-11-6]
Benz[c]acridine [225-51-4]
Benzo[ghi]fluoranthene [203-12-3]
Benzo[a]fluorene [238-84-6]
Benzo[b]fluorene [243-17-4]
Benzo[c]fluorene [205-12-9]
Benzo[ghi]perylene [191-24-2]
Benzo[c]phenanthrene [195-19-7]
Benzo[e]pyrene [192-97-2]
p-Benzoquinone dioxime [105-11-3]
Benzoyl chloride [98-88-4]
Benzoyl peroxide [94-36-0]
Benzyl acetate [140-11-4]
Bis(1-aziridinyl)morpholinophosphine sulphide [2168-68-5]
Bis(2-chloroethyl)ether [111-44-4]
1,2-Bis(chloromethoxy)ethane [13483-18-6]
1,4-Bis(chloromethoxymethyl)benzene [56894-91-8]
Bis(2-chloro-1-methylethyl)ether [108-60-1]
Bis(2,3-epoxycyclopentyl)ether [2386-90-5] (1989)
Bisphenol A diglycidyl ether [1675-54-3] (1989)
Bisulphites (1992)
Blue VRS [129-17-9]
Brilliant Blue FCF, disodium salt [3844-45-9]
Bromochloroacetonitrile [83463-62-1] (1991)
Bromoethane [74-96-4] (1991)
Bromoform [75-25-2] (1991)
n-Butyl acrylate [141-32-2]
Butylated hydroxytoluene (BHT) [128-37-0]
Butyl benzyl phthalate [85-68-7]
γ-Butyrolactone [96-48-0]
Caffeine [58-08-2] (1991)
Cantharidin [56-25-7]
Captan [133-06-2]
Carbaryl [63-25-2]
Carbazole [86-74-8]
3-Carbethoxypsoralen [20073-24-9]
Carmoisine [3567-69-9]
Carrageenan [9000-07-1], native
Catechol [120-80-9]
Chloral [75-87-6] (1995)
Chloral hydrate [302-17-0] (1995)
Chlordimeform [6164-98-3]
Chlorinated dibenzodioxins (other than TCDD)
Chlorinated drinking-water (1991)
Chloroacetonitrile [107-14-2] (1991)
Chlorobenzilate [510-15-6]
Chlorodibromomethane [124-48-1] (1991)
Chlorodifluoromethane [75-45-6]
Chloroethane [75-00-3] (1991)
Chlorofluoromethane [593-70-4]
3-Chloro-2-methylpropene [563-47-3] (1995)
4-Chloro-m-phenylenediamine [5131-60-2]
Chloronitrobenzenes [88-73-3; 121-73-3; 100-00-5] (1996)
Chloroprene [126-99-8]
Chloropropham [101-21-3]
Chloroquine [54-05-7]
Chlorothalonil [1897-45-6]
2-Chloro-1,1,1-trifluoroethane [75-88-7]
Cholesterol [57-88-5]
Chromium[III] compounds (1990)
Chromium [7440-47-3], metallic (1990)
Chrysene [218-01-9]
Chrysoidine [532-82-1]
CI Acid Orange 3 [6373-74-6] (1993)
Cimetidine [51481-61-9] (1990)
Cinnamyl anthranilate [87-29-6]
CI Pigment Red 3 [2425-85-6] (1993)
Citrinin [518-75-2]
Clofibrate [637-07-0]
Clomiphene citrate [50-41-9]
Coal dust (1997)
Copper 8-hydroxyquinoline [10380-28-6]
Coronene [191-07-1]
Coumarin [91-64-5]
m-Cresidine [102-50-1]
Crotonaldehyde [4170-30-3] (1995)
Cyclamates [sodium cyclamate, 139-05-9]
Cyclochlorotine [12663-46-6]
Cyclohexanone [108-94-1] (1989)
Cyclopenta[cd]pyrene [27208-37-3]
D & C Red No. 9 [5160-02-1] (1993)
Dapsone [80-08-0]
Decabromodiphenyl oxide [1163-19-5] (1990)
Deltamethrin [52918-63-5] (1991)
Diacetylaminoazotoluene [83-63-6]
Diallate [2303-16-4]
1,2-Diamino-4-nitrobenzene [99-56-9]
1,4-Diamino-2-nitrobenzene [5307-14-2] (1993)
2,5-Diaminotoluene [95-70-5]
Diazepam [439-14-5]
Diazomethane [334-88-3]
Dibenz[a,c]anthracene [215-58-7]
Dibenz[a,j]anthracene [224-41-9]
Dibenzo-p-dioxin (1997)
Dibenzo[a,e]fluoranthene [5385-75-1]
Dibenzo[h,rst]pentaphene [192-47-2]
Dibromoacetonitrile [3252-43-5] (1991)
Dichloroacetic acid [79-43-6] (1995)
Dichloroacetonitrile [3018-12-0] (1991)
Dichloroacetylene [7572-29-4]
o-Dichlorobenzene [95-50-1]
trans-1,4-Dichlorobutene [110-57-6]
2,6-Dichloro-para-phenylenediamine [609-20-1]
1,2-Dichloropropane [78-87-5]
Dicofol [115-32-2]
Dieldrin [60-57-1]
Di(2-ethylhexyl)adipate [103-23-1]
Dihydroxymethylfuratrizine [794-93-4]
Dimethoxane [828-00-2]
3,3´-Dimethoxybenzidine-4,4´-diisocyanate [91-93-0]
p-Dimethylaminoazobenzenediazo sodium sulphonate [140-56-7]
4,4´-Dimethylangelicin [22975-76-4] plus ultraviolet A radiation
4,5´-Dimethylangelicin [4063-41-6] plus ultraviolet A radiation
N,N-Dimethylaniline [121-69-7] (1993)
Dimethyl hydrogen phosphite [868-85-9] (1990)
1,4-Dimethylphenanthrene [22349-59-3]
1,3-Dinitropyrene [75321-20-9] (1989)
Dinitrosopentamethylenetetramine [101-25-7]
2,4´-Diphenyldiamine [492-17-1]
Disperse Yellow 3 [2832-40-8] (1990)
Disulfiram [97-77-8]
Dithranol [1143-38-0]
Doxefazepam [40762-15-0] (1996)
Droloxifene [82413-20-5] (1996)
Dulcin [150-69-6]
Endrin [72-20-8]
Eosin [15086-94-9]
1,2-Epoxybutane [106-88-7] (1989)
3,4-Epoxy-6-methylcyclohexylmethyl-3,4-epoxy-6-methylcyclohexane carboxylate [141-37-7]
cis-9,10-Epoxystearic acid [2443-39-2]
Estazolam [29975-16-4] (1996)
Ethionamide [536-33-4]
Ethylene [74-85-1] (1994)
Ethylene sulphide [420-12-2]
2-Ethylhexyl acrylate [103-11-7] (1994)
Ethyl selenac [5456-28-0]
Ethyl tellurac [20941-65-5]
Eugenol [97-53-0]
Evans blue [314-13-6]
Fast Green FCF [2353-45-9]
Fenvalerate [51630-58-1] (1991)
Ferbam [14484-64-1]
Ferric oxide [1309-37-1]
Fluometuron [2164-17-2]
Fluoranthene [206-44-0]
Fluorene [86-73-7]
Fluorescent lighting (1992)
Fluorides (inorganic, used in drinking-water)
5-Fluorouracil [51-21-8]
Furazolidone [67-45-8]
Furfural [98-01-1] (1995)
Furosemide (Frusemide) [54-31-9] (1990)
Gemfibrozil [25812-30-0] (1996)
Glass filaments (1988)
Glycidyl oleate [5431-33-4]
Glycidyl stearate [7460-84-6]
Guinea Green B [4680-78-8]
Gyromitrin [16568-02-8]
Haematite [1317-60-8]
HC Blue No. 2 [33229-34-4] (1993)
HC Red No. 3 [2871-01-4] (1993)
HC Yellow No. 4 [59820-43-8] (1993)
Hepatitis D virus (1993)
Hexachlorobutadiene [87-68-3]
Hexachloroethane [67-72-1]
Hexachlorophene [70-30-4]
Human T-cell lymphotropic virus type II (1996)
Hycanthone mesylate [23255-93-8]
Hydralazine [86-54-4]
Hydrochloric acid [7647-01-0] (1992)
Hydrochlorothiazide [58-93-5] (1990)
Hydrogen peroxide [7722-84-1]
Hydroquinone [123-31-9]
4-Hydroxyazobenzene [1689-82-3]
8-Hydroxyquinoline [148-24-3]
Hydroxysenkirkine [26782-43-4]
Hypochlorite salts (1991)
Iron-dextrin complex [9004-51-7]
Iron sorbitol-citric acid complex [1338-16-5]
Isatidine [15503-86-3]
Isonicotinic acid hydrazide (Isoniazid) [54-85-3]
Isophosphamide [3778-73-2]
Isopropanol [67-63-0]
Isopropyl oils
Isosafrole [120-58-1]
Jacobine [6870-67-3]
Kaempferol [520-18-3]
Lauroyl peroxide [105-74-8]
Lead, organo [75-74-1], [78-00-2]
Light Green SF [5141-20-8]
d-Limonene [5989-27-5] (1993)
Luteoskyrin [21884-44-6]
Malathion [121-75-5]
Maleic hydrazide [123-33-1]
Malonaldehyde [542-78-9]
Maneb [12427-38-2]
Mannomustine dihydrochloride [551-74-6]
Medphalan [13045-94-8]
Melamine [108-78-1]
6-Mercaptopurine [50-44-2]
Mercury [7439-97-6] and inorganic mercury compounds (1993)
Metabisulphites (1992)
Methotrexate [59-05-2]
Methoxychlor [72-43-5]
Methyl acrylate [96-33-3]
5-Methylangelicin [73459-03-7] plus ultraviolet A radiation
Methyl bromide [74-83-9]
Methyl carbamate [598-55-0]
Methyl chloride [74-87-3]
1-Methylchrysene [3351-28-8]
2-Methylchrysene [3351-32-4]
3-Methylchrysene [3351-31-3]
4-Methylchrysene [3351-30-2]
6-Methylchrysene [1705-85-7]
N-Methyl-N,4-dinitrosoaniline [99-80-9]
4,4´-Methylenebis(N,N-dimethyl)benzenamine [101-61-1]
4,4´-Methylenediphenyl diisocyanate [101-68-8]
2-Methylfluoranthene [33543-31-6]
3-Methylfluoranthene [1706-01-0]
Methylglyoxal [78-98-8] (1991)
Methyl iodide [74-88-4]
Methyl methacrylate [80-62-6] (1994)
N-Methylolacrylamide [90456-67-0] (1994)
Methyl parathion [298-00-0]
1-Methylphenanthrene [832-69-9]
7-Methylpyrido[3,4-c]psoralen [85878-62-2]
Methyl red [493-52-7]
Methyl selenac [144-34-3]
Modacrylic fibres
Monuron [150-68-5] (1991)
Morpholine [110-91-8] (1989)
Musk ambrette [83-66-9] (1996)
Musk xylene [81-15-2] (1996)
1,5-Naphthalenediamine [2243-62-1]
1,5-Naphthalene diisocyanate [3173-72-6]
1-Naphthylamine [134-32-7]
1-Naphthylthiourea (ANTU) [86-88-4]
Nithiazide [139-94-6]
5-Nitro-o-anisidine [99-59-2]
9-Nitroanthracene [602-60-8]
7-Nitrobenz[a]anthracene [20268-51-3] (1989)
6-Nitrobenzo[a]pyrene [63041-90-7] (1989)
4-Nitrobiphenyl [92-93-3]
3-Nitrofluoranthene [892-21-7]
Nitrofural (Nitrofurazone) [59-87-0] (1990)
Nitrofurantoin [67-20-9] (1990)
1-Nitronaphthalene [86-57-7] (1989)
2-Nitronaphthalene [581-89-5] (1989)
3-Nitroperylene [20589-63-3] (1989)
2-Nitropyrene [789-07-1] (1989)
N´-Nitrosoanabasine [37620-20-5]
N-Nitrosoanatabine [71267-22-6]
N-Nitrosodiphenylamine [86-30-6]
p-Nitrosodiphenylamine [156-10-5]
N-Nitrosofolic acid [29291-35-8]
N-Nitrosoguvacine [55557-01-2]
N-Nitrosoguvacoline [55557-02-3]
N-Nitrosohydroxyproline [30310-80-6]
3-(N-Nitrosomethylamino)propionaldehyde [85502-23-4]
4-(N-Nitrosomethylamino)-4-(3-pyridyl)-1-butanal (NNA) [64091-90-3]
N-Nitrosoproline [7519-36-0]
5-Nitro-o-toluidine [99-55-8] (1990)
Nitrovin [804-36-4]
Nylon 6 [25038-54-4]
Oestradiol mustard [22966-79-6]
Oestrogen-progestin replacement therapy
Opisthorchis felineus (infection with) (1994)
Orange I [523-44-4]
Orange G [1936-15-8]
Oxyphenbutazone [129-20-4]
Palygorskite (attapulgite) [12174-11-7] (short fibres, <5 micrometres) (1997)
Paracetamol (Acetaminophen) [103-90-2] (1990)
Parasorbic acid [10048-32-5]
Parathion [56-38-2]
Patulin [149-29-1]
Penicillic acid [90-65-3]
Pentachloroethane [76-01-7]
Permethrin [52645-53-1] (1991)
Perylene [198-55-0]
Petasitenine [60102-37-6]
Phenanthrene [85-01-8]
Phenelzine sulphate [156-51-4]
Phenicarbazide [103-03-7]
Phenol [108-95-2] (1989)
Phenylbutazone [50-33-9]
m-Phenylenediamine [108-45-2]
p-Phenylenediamine [106-50-3]
N-Phenyl-2-naphthylamine [135-88-6]
o-Phenylphenol [90-43-7]
Picloram [1918-02-1] (1991)
Piperonyl butoxide [51-03-6]
Polyacrylic acid [9003-01-4]
Polychlorinated dibenzo-p-dioxins (other than 2,3,7,8-tetrachlorodibenzo-p-dioxin) (1997)
Polychlorinated dibenzofurans (1997)
Polychloroprene [9010-98-4]
Polyethylene [9002-88-4]
Polymethylene polyphenyl isocyanate [9016-87-9]
Polymethyl methacrylate [9011-14-7]
Polypropylene [9003-07-0]
Polystyrene [9003-53-6]
Polytetrafluoroethylene [9002-84-0]
Polyurethane foams [9009-54-5]
Polyvinyl acetate [9003-20-7]
Polyvinyl alcohol [9002-89-5]
Polyvinyl chloride [9002-86-2]
Polyvinyl pyrrolidone [9003-39-8]
Ponceau SX [4548-53-2]
Potassium bis(2-hydroxyethyl)dithiocarbamate[23746-34-1]
Prazepam [2955-38-6] (1996)
Prednimustine [29069-24-7] (1990)
Prednisone [53-03-2]
Proflavine salts
Pronetalol hydrochloride [51-02-5]
Propham [122-42-9]
n-Propyl carbamate [627-12-3]
Propylene [115-07-1] (1994)
Ptaquiloside [87625-62-5]
Pyrene [129-00-0]
Pyrido[3,4-c]psoralen [85878-62-2]
Pyrimethamine [58-14-0]
Quercetin [117-39-5]
p-Quinone [106-51-4]
Quintozene (Pentachloronitrobenzene) [82-68-8]
Reserpine [50-55-5]
Resorcinol [108-46-3]
Retrorsine [480-54-6]
Rhodamine B [81-88-9]
Rhodamine 6G [989-38-8]
Riddelliine [23246-96-0]
Rifampicin [13292-46-1]
Ripazepam [26308-28-1] (1996)
Rugulosin [23537-16-8]
Saccharated iron oxide [8047-67-4]
Scarlet Red [85-83-6]
Schistosoma mansoni (infection with) (1994)
Selenium [7782-49-2] and selenium compounds
Semicarbazide hydrochloride [563-41-7]
Seneciphylline [480-81-9]
Senkirkine [2318-18-5]
Sepiolite [15501-74-3]
Shikimic acid [138-59-0]
Silica [7631-86-9], amorphous
Simazine [122-34-9] (1991)
Sodium chlorite [7758-19-2] (1991)
Sodium diethyldithiocarbamate [148-18-5]
Spironolactone [52-01-7]
Styrene-acrylonitrile copolymers [9003-54-7]
Styrene-butadiene copolymers [9003-55-8]
Succinic anhydride [108-30-5]
Sudan I [842-07-9]
Sudan II [3118-97-6]
Sudan III [85-86-9]
Sudan Brown RR [6416-57-5]
Sudan Red 7B [6368-72-5]
Sulphafurazole (Sulphisoxazole) [127-69-5]
Sulphamethoxazole [723-46-6]
Sulphites (1992)
Sulphur dioxide [7446-09-5] (1992)
Sunset Yellow FCF [2783-94-0]
Symphytine [22571-95-5]
Talc [14807-96-6], not containing asbestiform fibres
Tannic acid [1401-55-4] and tannins
Temazepam [846-50-4] (1996)
2,2´,5,5´-Tetrachlorobenzidine [15721-02-5]
1,1,1,2-Tetrachloroethane [630-20-6]
1,1,2,2-Tetrachloroethane [79-34-5]
Tetrachlorvinphos [22248-79-9]
Tetrafluoroethylene [116-14-3]
Tetrakis(hydroxymethyl)phosphonium salts (1990)
Theobromine [83-67-0] (1991)
Theophylline [58-55-9] (1991)
Thiouracil [141-90-2]
Thiram [137-26-8] (1991)
Titanium dioxide [13463-67-7] (1989)
Toluene [108-88-3] (1989)
Toremifene [89778-26-7] (1996)
Toxins derived from Fusarium graminearum, F. culmorum and F. crookwellense (1993)
Toxins derived from Fusarium sporotrichioides (1993)
Trichlorfon [52-68-6]
Trichloroacetic acid [76-03-9] (1995)
Trichloroacetonitrile [545-06-2] (1991)
1,1,1-Trichloroethane [71-55-6]
1,1,2-Trichloroethane [79-00-5] (1991)
Triethylene glycol diglycidyl ether [1954-28-5]
Trifluralin [1582-09-8] (1991)
4,4´,6-Trimethylangelicin [90370-29-9] plus ultraviolet A radiation
2,4,5-Trimethylaniline [137-17-7]
2,4,6-Trimethylaniline [88-05-1]
4,5´,8-Trimethylpsoralen [3902-71-4]
2,4,6-Trinitrotoluene [118-96-7] (1996)
Triphenylene [217-59-4]
Tris(aziridinyl)-p-benzoquinone (Triaziquone) [68-76-8]
Tris(1-aziridinyl)phosphine oxide [545-55-1]
2,4,6-Tris(1-aziridinyl)-s-triazine [51-18-3]
Tris(2-chloroethyl)phosphate [115-96-8] (1990)
1,2,3-Tris(chloromethoxy)propane [38571-73-2]
Tris(2-methyl-1-aziridinyl)phosphine oxide [57-39-6]
Vat Yellow 4 [128-66-5] (1990)
Vinblastine sulphate [143-67-9]
Vincristine sulphate [2068-78-2]
Vinyl acetate [108-05-4]
Vinyl chloride-vinyl acetate copolymers [9003-22-9]
Vinylidene chloride [75-35-4]
Vinylidene chloride-vinyl chloride copolymers [9011-06-7]
Vinylidene fluoride [75-38-7]
N-Vinyl-2-pyrrolidone [88-12-0]
Vinyl toluene [25013-15-4] (1994)
Wollastonite [13983-17-0]
Xylene [1330-20-7] (1989)
2,4-Xylidine [95-68-1]
2,5-Xylidine [95-78-3]
Yellow AB [85-84-7]
Yellow OB [131-79-3]
Zectran [315-18-4]
Zeolites [1318-02-1] other than erionite (clinoptilolite, phillipsite, mordenite, non-fibrous Japanese zeolite, synthetic zeolites) (1997)
Zineb [12122-67-7]
Ziram [137-30-4] (1991)
Mixtures
Betel quid, without tobacco
Bitumens [8052-42-4], steam-refined, cracking-residue and air-refined
Crude oil [8002-05-9] (1989)
Diesel fuels, distillate (light) (1989)
Fuel oils, distillate (light) (1989)
Jet fuel (1989)
Mate (1990)
Mineral oils, highly refined
Petroleum solvents (1989)
Printing inks (1996)
Tea (1991)
Terpene polychlorinates (Strobane®) [8001-50-1]
Exposure circumstances
Flat-glass and specialty glass (manufacture of) (1993)
Hair colouring products (personal use of) (1993)
Leather goods manufacture
Leather tanning and processing
Lumber and sawmill industries (including logging)
Paint manufacture (occupational exposure in) (1989)
Pulp and paper manufacture
Group 4—Probably not carcinogenic to humans (1)
Caprolactam [105-60-2]
The History of Occupational Exposure Limits
Over the past 40 years, many organizations in numerous countries have proposed occupational exposure limits (OELs) for airborne contaminants. The limits or guidelines that have gradually become the most widely accepted both in the United States and in most other countries are those issued annually by the American Conference of Governmental Industrial Hygienists (ACGIH), which are termed threshold limit values (TLVs) (LaNier 1984; Cook 1986; ACGIH 1994).
The usefulness of establishing OELs for potentially harmful agents in the working environment has been demonstrated repeatedly since their inception (Stokinger 1970; Cook 1986; Doull 1994). The contribution of OELs to the prevention or minimization of disease is now widely accepted, but for many years such limits did not exist, and even when they did, they were often not observed (Cook 1945; Smyth 1956; Stokinger 1981; LaNier 1984; Cook 1986).
It was well understood as long ago as the fifteenth century that airborne dusts and chemicals could bring about illness and injury, but the concentrations and lengths of exposure at which this might be expected to occur were unclear (Ramazzini 1700).
As reported by Baetjer (1980), “early in this century when Dr. Alice Hamilton began her distinguished career in occupational disease, no air samples and no standards were available to her, nor indeed were they necessary. Simple observation of the working conditions and the illness and deaths of the workers readily proved that harmful exposures existed. Soon however, the need for determining standards for safe exposure became obvious.”
The earliest efforts to set an OEL were directed to carbon monoxide, the toxic gas to which more persons are occupationally exposed than to any other (for a chronology of the development of OELs, see figure 1). The work of Max Gruber at the Hygienic Institute at Munich was published in 1883. The paper described exposing two hens and twelve rabbits to known concentrations of carbon monoxide for up to 47 hours over three days; he stated that “the boundary of injurious action of carbon monoxide lies at a concentration in all probability of 500 parts per million, but certainly (not less than) 200 parts per million”. In arriving at this conclusion, Gruber had also inhaled carbon monoxide himself. He reported no symptoms or uncomfortable sensations after three hours on each of two consecutive days at concentrations of 210 and 240 parts per million (Cook 1986).
Figure 1. Chronology of the development of occupational exposure limits (OELs).
The earliest and most extensive series of animal experiments on exposure limits were those conducted by K.B. Lehmann and others under his direction. In a series of publications spanning 50 years they reported on studies on ammonia and hydrogen chloride gas, chlorinated hydrocarbons and a large number of other chemical substances (Lehmann 1886; Lehmann and Schmidt-Kehl 1936).
Kobert (1912) published one of the earlier tables of acute exposure limits. Concentrations for 20 substances were listed under the headings: (1) rapidly fatal to man and animals, (2) dangerous in 0.5 to one hour, (3) 0.5 to one hour without serious disturbances and (4) only minimal symptoms observed. In his paper “Interpretations of permissible limits”, Schrenk (1947) notes that the “values for hydrochloric acid, hydrogen cyanide, ammonia, chlorine and bromine as given under the heading ‘only minimal symptoms after several hours’ in the foregoing Kobert paper agree with values as usually accepted in present-day tables of MACs for reported exposures”. However, values for some of the more toxic organic solvents, such as benzene, carbon tetrachloride and carbon disulphide, far exceeded those currently in use (Cook 1986).
One of the first tables of exposure limits to originate in the United States was that published by the US Bureau of Mines (Fieldner, Katz and Kenney 1921). Although its title does not so indicate, the 33 substances listed are those encountered in workplaces. Cook (1986) also noted that most of the exposure limits through the 1930s, except for dusts, were based on rather short animal experiments. A notable exception was the study of chronic benzene exposure by Leonard Greenburg of the US Public Health Service, conducted under the direction of a committee of the National Safety Council (NSC 1926). An acceptable exposure for human beings based on long-term animal experiments was derived from this work.
According to Cook (1986), for dust exposures, permissible limits established before 1920 were based on exposures of workers in the South African gold mines, where the dust from drilling operations was high in crystalline free silica. In 1916, an exposure limit of 8.5 million particles per cubic foot of air (mppcf) for the dust with an 80 to 90% quartz content was set (Phthisis Prevention Committee 1916). Later, the level was lowered to 5 mppcf. Cook also reported that, in the United States, standards for dust, also based on exposure of workers, were recommended by Higgins and co-workers following a study at the south-western Missouri zinc and lead mines in 1917. The initial level established for high quartz dusts was ten mppcf, appreciably higher than was established by later dust studies conducted by the US Public Health Service. In 1930, the USSR Ministry of Labour issued a decree that included maximum allowable concentrations for 12 industrial toxic substances.
The most comprehensive list of occupational exposure limits up to 1926 covered 27 substances (Sayers 1927). In 1935, Sayers and Dalla Valle published physiological responses to five concentrations of 37 substances, the fifth being the maximum allowable concentration for prolonged exposure. Lehmann and Flury (1938) and Bowditch et al. (1940) published papers presenting tables with a single value for repeated exposures to each substance.
Many of the exposure limits developed by Lehmann were included in a monograph initially published in 1927 by Henderson and Haggard (1943), and a little later in Flury and Zernik’s Schadliche Gase (1931). According to Cook (1986), this book was considered the authoritative reference on effects of injurious gases, vapours and dusts in the workplace until Volume II of Patty’s Industrial Hygiene and Toxicology (1949) was published.
The first lists of standards for chemical exposures in industry, called maximum allowable concentrations (MACs), were prepared in 1939 and 1940 (Baetjer 1980). They represented a consensus of opinion of the American Standards Association and a number of the industrial hygienists who had formed the ACGIH in 1938. These “suggested standards” were published in 1943 by James Sterner. A committee of the ACGIH met in early 1940 to begin the task of identifying safe levels of exposure to workplace chemicals by assembling all the data which would relate the degree of exposure to a toxicant to the likelihood of producing an adverse effect (Stokinger 1981; LaNier 1984). The first set of values was released in 1941 by this committee, which was composed of Warren Cook, Manfred Boditch (reportedly the first hygienist employed by industry in the United States), William Fredrick, Philip Drinker, Lawrence Fairhall and Alan Dooley (Stokinger 1981).
In 1941, a committee (designated as Z-37) of the American Standards Association, which later became the American National Standards Institute, developed its first standard of 100 ppm for carbon monoxide. By 1974 the committee had issued separate bulletins for 33 exposure standards for toxic dusts and gases.
At the annual meeting of the ACGIH in 1942, the newly appointed Subcommittee on Threshold Limits presented in its report a table of 63 toxic substances with the “maximum allowable concentrations of atmospheric contaminants” from lists furnished by the various state industrial hygiene units. The report contains the statement, “The table is not to be construed as recommended safe concentrations. The material is presented without comment” (Cook 1986).
In 1945 a list of 132 industrial atmospheric contaminants with maximum allowable concentrations was published by Cook, including the then current values for six states, as well as values presented as a guide for occupational disease control by federal agencies and maximum allowable concentrations that appeared best supported by the references on original investigations (Cook 1986).
At the 1946 annual meeting of ACGIH, the Subcommittee on Threshold Limits presented their second report with the values of 131 gases, vapours, dusts, fumes and mists, and 13 mineral dusts. The values were compiled from the list reported by the subcommittee in 1942, from the list published by Warren Cook in Industrial Medicine (1945) and from published values of the Z-37 Committee of the American Standards Association. The committee emphasized that the “list of M.A.C. values is presented … with the definite understanding that it be subject to annual revision.”
Intended use of OELs
The ACGIH TLVs and most other OELs used in the United States and some other countries are limits which refer to airborne concentrations of substances and represent conditions under which “it is believed that nearly all workers may be repeatedly exposed day after day without adverse health effects” (ACGIH 1994). (See table 1). In some countries the OEL is set at a concentration which will protect virtually everyone. It is important to recognize that unlike some exposure limits for ambient air pollutants, contaminated water, or food additives set by other professional groups or regulatory agencies, exposure to the TLV will not necessarily prevent discomfort or injury for everyone who is exposed (Adkins et al. 1990). The ACGIH recognized long ago that because of the wide range in individual susceptibility, a small percentage of workers may experience discomfort from some substances at concentrations at or below the threshold limit and that a smaller percentage may be affected more seriously by aggravation of a pre-existing condition or by development of an occupational illness (Cooper 1973; ACGIH 1994). This is clearly stated in the introduction to the ACGIH’s annual booklet Threshold Limit Values for Chemical Substances and Physical Agents and Biological Exposure Indices (ACGIH 1994).
Table 1. Occupational exposure limits (OELs) in various countries (as of 1986)
Country/Province: Type of standard

Argentina: OELs are essentially the same as those of the 1978 ACGIH TLVs. The principal difference from the ACGIH list is that, for the 144 substances (of the total of 630) for which no STELs are listed by the ACGIH, the values used for the Argentine TWAs are also entered under this heading.

Australia: The National Health and Medical Research Council (NHMRC) adopted a revised edition of the Occupational Health Guide Threshold Limit Values (1990-91) in 1992. The OELs have no legal status in Australia, except where specifically incorporated into law by reference. The ACGIH TLVs are published in Australia as an appendix to the occupational health guides, revised with the ACGIH revisions in odd-numbered years.

Austria: The values recommended by the Expert Committee of the Worker Protection Commission for Appraisal of MAC (maximal acceptable concentration) Values, in cooperation with the General Accident Prevention Institute of the Chemical Workers Trade Union, are considered obligatory by the Federal Ministry for Social Administration. They are applied by the Labour Inspectorate under the Labour Protection Law.

Belgium: The Administration of Hygiene and Occupational Medicine of the Ministry of Employment and of Labour uses the TLVs of the ACGIH as a guideline.

Brazil: The TLVs of the ACGIH have been used as the basis for the occupational health legislation of Brazil since 1978. As the Brazilian work week is usually 48 hours, the values of the ACGIH were adjusted in conformity with a formula developed for this purpose. The ACGIH list was adopted only for those air contaminants which at the time had nationwide application. The Ministry of Labour has brought the limits up to date with the establishment of values for additional contaminants, in accordance with recommendations from the Fundacentro Foundation of Occupational Safety and Medicine.

Canada (and Provinces): Each province has its own regulations:

Alberta: OELs fall under the Occupational Health and Safety Act, Chemical Hazard Regulation, which requires the employer to ensure that workers are not exposed above the limits.

British Columbia: The Industrial Health and Safety Regulations set legal requirements for most of British Columbia industry, which refer to the current schedule of TLVs for atmospheric contaminants published by the ACGIH.

Manitoba: The Department of Environment and Workplace Safety and Health is responsible for legislation and its administration concerning the OELs. The guidelines currently used to interpret risk to health are the ACGIH TLVs, with the exception that carcinogens are given a zero exposure level “so far as is reasonably practicable”.

New Brunswick: The applicable standards are those published in the latest ACGIH issue; in case of an infraction, it is the issue in publication at the time of the infraction that dictates compliance.

Northwest Territories: The Northwest Territories Safety Division of the Justice and Service Department regulates workplace safety for non-federal employees under the latest edition of the ACGIH TLVs.

Nova Scotia: The list of OELs is the same as that published by the ACGIH in 1976, together with its subsequent amendments and revisions.

Ontario: Regulations for a number of hazardous substances are enforced under the Occupational Health and Safety Act, each published in a separate booklet that includes the permissible exposure level and codes for respiratory equipment, techniques for measuring airborne concentrations and medical surveillance approaches.

Quebec: Permissible exposure levels are similar to the ACGIH TLVs, and compliance with the permissible exposure levels for workplace air contaminants is required.

Chile: The maximum concentrations of eleven substances capable of causing acute, severe or fatal effects cannot be exceeded even momentarily. The values in the Chilean standard are those of the ACGIH TLVs to which a factor of 0.8 is applied in view of the 48-hour week.

Denmark: OELs include values for 542 chemical substances and 20 particulates. It is legally required that these not be exceeded as time-weighted averages. Data from the ACGIH are used in the preparation of the Danish standards. About 25 per cent of the values differ from those of the ACGIH, nearly all of them being somewhat more stringent.

Ecuador: Ecuador does not have a list of permissible exposure levels incorporated in its legislation. The TLVs of the ACGIH are used as a guide for good industrial hygiene practice.

Finland: OELs are defined as concentrations that are deemed to be hazardous to at least some workers on long-term exposure. Whereas the ACGIH philosophy is that nearly all workers may be exposed to substances below the TLV without adverse effect, the viewpoint in Finland is that where exposures are above the limiting value, deleterious effects on health may occur.

Germany: The MAC value is “the maximum permissible concentration of a chemical compound present in the air within a working area (as gas, vapour, particulate matter) which, according to current knowledge, generally does not impair the health of the employee nor cause undue annoyance. Under these conditions, exposure can be repeated and of long duration over a daily period of eight hours, constituting an average work week of 40 hours (42 hours per week as averaged over four successive weeks for firms having four work shifts). Scientifically based criteria for health protection, rather than their technical or economical feasibility, are employed.”

Ireland: The latest TLVs of the ACGIH are normally used. However, the ACGIH list is not incorporated in the national laws or regulations.

Netherlands: MAC values are taken largely from the list of the ACGIH, as well as from the Federal Republic of Germany and NIOSH. The MAC is defined as “that concentration in the workplace air which, according to present knowledge, after repeated long-term exposure even up to a whole working life, in general does not harm the health of workers or their offspring.”

Philippines: The 1970 TLVs of the ACGIH are used, except 50 ppm for vinyl chloride and 0.15 mg/m³ for lead, inorganic compounds, fume and dust.

Russian Federation: The former USSR established many of its limits with the goal of eliminating any possibility of even reversible effects. Such subclinical and fully reversible responses to workplace exposures have thus far been considered too restrictive to be useful in the United States and in most other countries. In fact, owing to the economic and engineering difficulties of achieving such low levels of air contaminants in the workplace, there is little indication that these limits have actually been achieved in the countries that adopted them. Instead, the limits appear to serve more as idealized goals than as limits which manufacturers are legally bound or morally committed to achieve.

United States: At least six groups recommend exposure limits for the workplace: the TLVs of the ACGIH, the Recommended Exposure Limits (RELs) suggested by the National Institute for Occupational Safety and Health (NIOSH), the Workplace Environment Exposure Limits (WEELs) developed by the American Industrial Hygiene Association (AIHA), standards for workplace air contaminants suggested by the Z-37 Committee of the American National Standards Institute (ANSI), the proposed workplace guides of the American Public Health Association (APHA 1991), and recommendations by local, state or regional governments. In addition, permissible exposure limits (PELs), which are regulations that must be met in the workplace because they are law, have been promulgated by the Department of Labor and are enforced by the Occupational Safety and Health Administration (OSHA).
Source: Cook 1986.
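Several entries in table 1 rescale the ACGIH TLVs for a longer national workweek. The sketch below illustrates the arithmetic only: a simple hours ratio of 40/48 ≈ 0.83, which Chile's published factor of 0.8 approximates. The Brazilian formula itself is not reproduced in the table, so the function here is an assumption for illustration, not the official method of any country.

# Illustrative only: rescaling an 8-h/40-h TLV for a 48-hour
# workweek. Chile's published factor is 0.8; the simple hours
# ratio gives 40/48 = 0.83. This is not the official Brazilian
# formula, which is not reproduced in the table above.

def workweek_factor(hours_per_week: float) -> float:
    """Simple proportional correction relative to a 40-hour week."""
    return 40.0 / hours_per_week

if __name__ == "__main__":
    tlv_40h = 50.0  # hypothetical TLV, ppm
    print(f"Hours-ratio limit:  {tlv_40h * workweek_factor(48):.1f} ppm")  # 41.7
    print(f"Chile (factor 0.8): {tlv_40h * 0.8:.1f} ppm")                  # 40.0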
This limitation, although perhaps less than ideal, has been considered a practical one since airborne concentrations so low as to protect hypersusceptibles have traditionally been judged infeasible due to either engineering or economic limitations. Until about 1990, this shortcoming in the TLVs was not considered a serious one. In light of the dramatic improvements since the mid-1980s in our analytical capabilities, personal monitoring/sampling devices, biological monitoring techniques and the use of robots as a plausible engineering control, we are now technologically able to consider more stringent occupational exposure limits.
The background information and rationale for each TLV are published periodically in the Documentation of the Threshold Limit Values (ACGIH 1995). Some type of documentation is occasionally available for OELs set in other countries. The rationale or documentation for a particular OEL should always be consulted before interpreting or adjusting an exposure limit, as well as the specific data that were considered in establishing it (ACGIH 1994).
TLVs are based on the best available information from industrial experience and human and animal experimental studies—when possible, from a combination of these sources (Smith and Olishifski 1988; ACGIH 1994). The rationale for choosing limiting values differs from substance to substance. For example, protection against impairment of health may be a guiding factor for some, whereas reasonable freedom from irritation, narcosis, nuisance or other forms of stress may form the basis for others. The age and completeness of the information available for establishing occupational exposure limits also varies from substance to substance; consequently, the precision of each TLV is different. The most recent TLV and its documentation (or its equivalent) should always be consulted in order to evaluate the quality of the data upon which that value was set.
Even though all of the publications which contain OELs emphasize that they were intended for use only in establishing safe levels of exposure for persons in the workplace, they have been used at times in other situations. It is for this reason that all exposure limits should be interpreted and applied only by someone knowledgeable in industrial hygiene and toxicology. The TLV Committee (ACGIH 1994) did not intend that they be used, or modified for use: (1) as a relative index of hazard or toxicity; (2) in the evaluation or control of community air pollution; (3) in estimating the toxic potential of continuous, uninterrupted exposures or other extended work periods; (4) as proof or disproof of an existing disease or physical condition; or (5) for adoption by countries whose working conditions differ from those in the United States.
The TLV Committee and other groups which set OELs warn that these values should not be “directly used” or extrapolated to predict safe levels of exposure for other exposure settings. However, if one understands the scientific rationale for the guideline and the appropriate approaches for extrapolating data, the values can be used to predict acceptable levels of exposure for many different kinds of exposure scenarios and work schedules (ACGIH 1994; Hickey and Reist 1979), as sketched below.
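By way of illustration only: the Hickey and Reist approach cited above equates the peak body burden produced by a novel schedule with that produced by the conventional 8-hour-day, 40-hour-week pattern. The following minimal one-compartment sketch illustrates that idea; the half-life value and the schedules are hypothetical, and the code is not a reproduction of their published model.

import math

def peak_body_burden(conc, week_schedule, half_life_h, weeks=8):
    """Peak body burden (arbitrary units) under a repeating weekly
    schedule, using a one-compartment uptake/clearance model.
    week_schedule: list of (exposure_hours, rest_hours) per day."""
    k = math.log(2) / half_life_h
    burden, peak = 0.0, 0.0
    for _ in range(weeks):  # iterate toward a periodic steady state
        for work_h, rest_h in week_schedule:
            # first-order uptake during exposure at constant conc
            burden = conc / k + (burden - conc / k) * math.exp(-k * work_h)
            peak = max(peak, burden)
            # first-order clearance for the rest of the day
            burden *= math.exp(-k * rest_h)
    return peak

if __name__ == "__main__":
    half_life = 12.0                           # hypothetical, hours
    standard = [(8, 16)] * 5 + [(0, 24)] * 2   # five 8-h days
    novel = [(12, 12)] * 3 + [(0, 24)] * 4     # three 12-h days
    factor = (peak_body_burden(1.0, standard, half_life)
              / peak_body_burden(1.0, novel, half_life))
    print(f"Adjustment factor for the 3 x 12-h week: {factor:.2f}")

A factor below 1.0 means the limit should be lowered for the longer shifts, since the same airborne concentration would otherwise produce a higher peak body burden than the conventional schedule.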
Philosophy and approaches in setting exposure limits
TLVs were originally prepared to serve only for the use of industrial hygienists, who could exercise their own judgement in applying these values. They were not to be used for legal purposes (Baetjer 1980). However, in 1968 the United States Walsh-Healey Public Contracts Act incorporated the 1968 TLV list, which covered about 400 chemicals. In the United States, when the Occupational Safety and Health Act of 1970 was passed, it required all standards to be national consensus standards or established federal standards.
Exposure limits for workplace air contaminants are based on the premise that, although all chemical substances are toxic at some concentration when experienced for a period of time, a concentration (e.g., dose) does exist for all substances at which no injurious effect should result no matter how often the exposure is repeated. A similar premise applies to substances whose effects are limited to irritation, narcosis, nuisance or other forms of stress (Stokinger 1981; ACGIH 1994).
This philosophy thus differs from that applied to physical agents such as ionizing radiation, and for some chemical carcinogens, since it is possible that there may be no threshold or no dose at which zero risk would be expected (Stokinger 1981). The issue of threshold effects is controversial, with reputable scientists arguing both for and against threshold theories (Seiler 1977; Watanabe et al. 1980, Stott et al. 1981; Butterworth and Slaga 1987; Bailer et al. 1988; Wilkinson 1988; Bus and Gibson 1994). With this in mind, some occupational exposure limits proposed by regulatory agencies in the early 1980s were set at levels which, although not completely without risk, posed risks that were no greater than classic occupational hazards such as electrocution, falls, and so on. Even in those settings which do not use industrial chemicals, the overall workplace risks of fatal injury are about one in one thousand. This is the rationale that has been used to justify selecting this theoretical cancer risk criterion for setting TLVs for chemical carcinogens (Rodricks, Brett and Wrenn 1987; Travis et al. 1987).
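To make the arithmetic of such a risk criterion concrete: under a linear low-dose assumption, a target lifetime risk divided by an assumed potency (unit risk) gives a concentration limit. The sketch below is purely illustrative; the unit-risk value is hypothetical and not drawn from any agency assessment.

# Minimal sketch: concentration whose lifetime excess cancer risk
# equals a target, under a linear low-dose model
# (risk = unit_risk * concentration). The unit risk is hypothetical.

def risk_based_limit(target_risk: float, unit_risk_per_ppm: float) -> float:
    """Concentration (ppm) at which lifetime excess risk hits the target."""
    return target_risk / unit_risk_per_ppm

if __name__ == "__main__":
    # 1-in-1000 target, hypothetical potency of 0.02 per ppm
    print(f"{risk_based_limit(1e-3, 2e-2):.3f} ppm")  # 0.050 ppm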
Occupational exposure limits established both in the United States and elsewhere are derived from a wide variety of sources. The 1968 TLVs (those adopted by OSHA in 1970 as federal regulations) were based largely on human experience. This may come as a surprise to many hygienists who have recently entered the profession, since it indicates that, in most cases, the setting of an exposure limit has come after a substance has been found to have toxic, irritational or otherwise undesirable effects on humans. As might be anticipated, many of the more recent exposure limits for systemic toxins, especially those internal limits set by manufacturers, have been based primarily on toxicology tests conducted on animals, in contrast to waiting for observations of adverse effects in exposed workers (Paustenbach and Langner 1986). However, even as far back as 1945, animal tests were acknowledged by the TLV Committee to be very valuable and they do, in fact, constitute the second most common source of information upon which these guidelines have been based (Stokinger 1970).
Several approaches for deriving OELs from animal data have been proposed and put into use over the past 40 years. The approach used by the TLV Committee and others is not markedly different from that which has been used by the US Food and Drug Administration (FDA) in establishing acceptable daily intakes (ADI) for food additives. An understanding of the FDA approach to setting exposure limits for food additives and contaminants can provide good insight to industrial hygienists who are involved in interpreting OELs (Dourson and Stara 1983).
Discussions of methodological approaches which can be used to establish workplace exposure limits based exclusively on animal data have also been presented (Weil 1972; WHO 1977; Zielhuis and van der Kreek 1979a, 1979b; Calabrese 1983; Dourson and Stara 1983; Leung and Paustenbach 1988a; Finley et al. 1992; Paustenbach 1995). Although these approaches have some degree of uncertainty, they seem to be much better than a qualitative extrapolation of animal test results to humans.
Approximately 50% of the 1968 TLVs were derived from human data, and approximately 30% from animal data. By 1992, almost 50% were derived primarily from animal data. The criteria used to develop the TLVs may be classified into four groups: morphological, functional, biochemical and miscellaneous (nuisance, cosmetic). Of those TLVs based on human data, most are derived from effects observed in workers who were exposed to the substance for many years. Consequently, most of the existing TLVs have been based on the results of workplace monitoring, coupled with qualitative and quantitative observations of the human response (Stokinger 1970; Park and Snee 1983). In recent times, TLVs for new chemicals have been based primarily on the results of animal studies rather than on human experience (Leung and Paustenbach 1988b; Leung et al. 1988).
It is noteworthy that in 1968 only about 50% of the TLVs were intended primarily to prevent systemic toxic effects. Roughly 40% were based on irritation and about two per cent were intended to prevent cancer. By 1993, about 50% were meant to prevent systemic effects, 35% to prevent irritation, and five per cent to prevent cancer. Figure 2 provides a summary of the data often used in developing OELs.
Figure 2. Data often used in developing an occupational exposure limit.
Limits for irritants
Prior to 1975, OELs designed to prevent irritation were largely based on human experiments. Since then, several experimental animal models have been developed (Kane and Alarie 1977; Alarie 1981; Abraham et al. 1990; Nielsen 1991). Another model based on chemical properties has been used to set preliminary OELs for organic acids and bases (Leung and Paustenbach 1988).
Limits for carcinogens
In 1972, the ACGIH Committee began to distinguish between human and animal carcinogens in its TLV list. According to Stokinger (1977), one reason for this distinction was to help stakeholders (union representatives, workers and the public) focus their discussions on those chemicals with more probable workplace exposures.
Do the TLVs Protect Enough Workers?
Beginning in 1988, concerns were raised by numerous persons regarding the adequacy or health protectiveness of TLVs. The key question raised was, what percentage of the working population is truly protected from adverse health effects when exposed to the TLV?
Castleman and Ziem (1988) and Ziem and Castleman (1989) argued both that the scientific basis of the standards was inadequate and that they were formulated by hygienists with vested interests in the industries being regulated.
These papers engendered an enormous amount of discussion, both supportive of and opposed to the work of the ACGIH (Finklea 1988; Paustenbach 1990a, 1990b, 1990c; Tarlau 1990).
A follow-up study by Roach and Rappaport (1990) attempted to quantify the safety margin and scientific validity of the TLVs. They concluded that there were serious inconsistencies between the scientific data available and the interpretation given in the 1976 Documentation by the TLV Committee. They also noted that the TLVs probably reflected what the Committee perceived to be realistic and attainable at the time. The ACGIH has responded to both the Roach and Rappaport and the Castleman and Ziem analyses, insisting that the criticisms are inaccurate.
Although the merits of the Roach and Rappaport analysis, and of that by Ziem and Castleman, will be debated for years to come, it is clear that the process by which TLVs and other OELs are set will probably never again be what it was between 1945 and 1990. It is likely that in coming years, the rationale, as well as the degree of risk inherent in a TLV, will be more explicitly described in the documentation for each TLV. Also, it is certain that the definition of “virtually safe” or “insignificant risk” with respect to workplace exposure will change as the values of society change (Paustenbach 1995, 1997).
The degree of reduction in TLVs or other OELs that will undoubtedly occur in the coming years will vary depending on the type of adverse health effect to be prevented (central nervous system depression, acute toxicity, odour, irritation, developmental effects, or others). It is unclear to what degree the TLV committee will rely on various predictive toxicity models, or what risk criteria they will adopt, as we enter the next century.
Standards and Nontraditional Work Schedules
The degree to which shift work affects a worker’s capabilities, longevity, mortality, and overall well-being is still not well understood. So-called nontraditional work shifts and work schedules have been implemented in a number of industries in an attempt to eliminate, or at least reduce, some of the problems caused by normal shift work, which consists of three eight-hour work shifts per day. One kind of work schedule which is classified as nontraditional is the type involving work periods longer than eight hours and varying (compressing) the number of days worked per week (e.g., a 12-hours-per-day, three-day workweek). Another type of nontraditional work schedule involves a series of brief exposures to a chemical or physical agent during a given work schedule (e.g., a schedule where a person is exposed to a chemical for 30 minutes, five times per day with one hour between exposures). The last category of nontraditional schedule is that involving the “critical case” wherein persons are continuously exposed to an air contaminant (e.g., spacecraft, submarine).
Compressed workweeks are a type of nontraditional work schedule that has been used primarily in non-manufacturing settings. It refers to full-time employment (virtually 40 hours per week) which is accomplished in less than five days per week. Many compressed schedules are currently in use, but the most common are: (a) four-day workweeks with ten-hour days; (b) three-day workweeks with 12-hour days; (c) 4-1/2–day workweeks with four nine-hour days and one four-hour day (usually Friday); and (d) the five/four, nine plan of alternating five-day and four-day workweeks of nine-hour days (Nollen and Martin 1978; Nollen 1981).
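For compressed schedules such as these, the OEL is often adjusted downward so that the daily dose and the recovery time between shifts remain comparable to the conventional 8-hour day. One widely cited published approach, the Brief and Scala model (not detailed in this article), does so with a simple reduction factor, as sketched below with illustrative values.

# Sketch of the Brief and Scala reduction factor for long daily
# shifts: scale an 8-h TLV by the exposure-hours ratio and by the
# reduced recovery time between shifts. Values are illustrative.

def brief_scala_daily_factor(hours_per_day: float) -> float:
    """Reduction factor (8/h) x (24 - h)/16 for an h-hour shift."""
    return (8.0 / hours_per_day) * ((24.0 - hours_per_day) / 16.0)

if __name__ == "__main__":
    tlv_8h = 100.0  # hypothetical 8-h TLV, ppm
    rf = brief_scala_daily_factor(12.0)  # e.g., 12-h day, 3-day week
    print(f"Reduction factor: {rf:.2f}")               # 0.50
    print(f"Adjusted limit:   {tlv_8h * rf:.0f} ppm")  # 50 ppm

For a 12-hour shift the factor works out to (8/12) x (12/16) = 0.5, halving the 8-hour limit.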
Workers on nontraditional schedules represent only about 5% of the working population. Of this number, only about 50,000 to 200,000 Americans who work nontraditional schedules are employed in industries where there is routine exposure to significant levels of airborne chemicals. In Canada, the percentage of chemical workers on nontraditional schedules is thought to be greater (Paustenbach 1994).
One Approach to Setting International OELs
As noted by Lundberg (1994), a challenge facing all national committees is to identify a common scientific approach to setting OELs. Joint international ventures are advantageous to the parties involved, since writing criteria documents is both time-consuming and costly (Paustenbach 1995).
This was the idea when the Nordic Council of Ministers decided in 1977 to establish the Nordic Expert Group (NEG). The task of the NEG was to develop scientifically based criteria documents to be used as a common scientific basis for OELs by the regulatory authorities in the five Nordic countries (Denmark, Finland, Iceland, Norway and Sweden). The criteria documents from the NEG lead to the definition of a critical effect and dose-response/dose-effect relationships. The critical effect is the adverse effect that occurs at the lowest exposure. There is no discussion of safety factors, and no numerical OEL is proposed. Since 1987, the criteria documents have been published by the NEG concurrently in English on a yearly basis.
Lundberg (1994) has suggested a standardized approach that each country would use, building its criteria documents around a common set of characteristics.
There are in practice only minor differences in the way OELs are set in the various countries that develop them. It should, therefore, be relatively easy to agree upon the format of a standardized criteria document containing the key information. From this point, the decision as to the size of the margin of safety that is incorporated in the limit would then be a matter of national policy.
Whereas the principles and methods of risk assessment for non-carcinogenic chemicals are similar in different parts of the world, it is striking that approaches to risk assessment of carcinogenic chemicals vary greatly. There are not only marked differences between countries; even within a country, different approaches are applied or advocated by various regulatory agencies, committees and scientists in the field of risk assessment. Risk assessment for non-carcinogens is fairly consistent and well established, partly because of the long history and better understanding of the nature of toxic effects compared with carcinogens, and partly because of a high degree of consensus and confidence among both scientists and the general public in the methods used and their outcomes.
For non-carcinogenic chemicals, safety factors were introduced to compensate for uncertainties in the toxicology data (which are derived mostly from animal experiments) and in their applicability to large, heterogeneous human populations. In doing so, recommended or required limits on safe human exposures were usually set at a fraction (the safety or uncertainty factor approach) of the exposure levels in animals that could be clearly documented as the no observed adverse effects level (NOAEL) or the lowest observed adverse effects level (LOAEL). It was then assumed that as long as human exposure did not exceed the recommended limits, the hazardous properties of chemical substances would not be manifest. For many types of chemicals, this practice, in somewhat refined form, continues to this day in toxicological risk assessment.
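As a minimal worked illustration of the safety factor arithmetic (the NOAEL and the factor values below are conventional defaults chosen for the example, not figures from this article), the acceptable human exposure is obtained by dividing the animal NOAEL by uncertainty factors for interspecies extrapolation and for human variability:

```latex
\text{Acceptable exposure} \;=\; \frac{\mathrm{NOAEL}}{UF_{\text{interspecies}} \times UF_{\text{intraspecies}}}
\;=\; \frac{50\ \mathrm{mg/kg/day}}{10 \times 10} \;=\; 0.5\ \mathrm{mg/kg/day}.
```

When only a LOAEL is available, an additional factor is commonly applied, making the resulting limit correspondingly lower.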
During the late 1960s and early 1970s regulatory bodies, starting in the United States, were confronted with an increasingly important problem for which many scientists considered the safety factor approach to be inappropriate, and even dangerous. This was the problem with chemicals that under certain conditions had been shown to increase the risk of cancers in humans or experimental animals. These substances were operationally referred to as carcinogens. There is still debate and controversy on the definition of a carcinogen, and there is a wide range of opinion about techniques to identify and classify carcinogens and the process of cancer induction by chemicals as well.
The initial discussion started much earlier, when scientists in the 1940s discovered that chemical carcinogens caused damage by a biological mechanism that was of a totally different kind from those that produced other forms of toxicity. These scientists, using principles from the biology of radiation-induced cancers, put forth what is referred to as the “non-threshold” hypothesis, which was considered applicable to both radiation and carcinogenic chemicals. It was hypothesized that any exposure to a carcinogen that reaches its critical biological target, especially the genetic material, and interacts with it, can increase the probability (the risk) of cancer development.
Parallel to the ongoing scientific discussion on thresholds, there was growing public concern about the adverse role of chemical carcinogens and the urgent need to protect people from a set of diseases collectively called cancer. Cancer, with its insidious character and long latency period, together with data showing that cancer incidence in the general population was increasing, was regarded by the general public and politicians as a matter of concern that warranted optimal protection. Regulators were faced with situations in which large numbers of people, sometimes nearly the entire population, were or could be exposed to relatively low levels of chemical substances (in consumer products and medicines, at the workplace, as well as in air, water, food and soils) that had been identified as carcinogenic in humans or experimental animals under conditions of relatively intense exposure.
Those regulatory officials were confronted with two fundamental questions which, in most cases, could not be fully answered using available scientific methods:
Regulators recognized the need for assumptions, sometimes scientifically based but often also unsupported by experimental evidence. In order to achieve consistency, definitions and specific sets of assumptions were adopted that would be generically applied to all carcinogens.
Carcinogenesis Is a Multistage Process
Several lines of evidence support the conclusion that chemical carcinogenesis is a multistage process driven by genetic damage and epigenetic changes, and this theory is widely accepted in the scientific community all over the world (Barrett 1993). Although the process of chemical carcinogenesis is often separated into three stages—initiation, promotion and progression—the number of relevant genetic changes is not known.
Initiation involves the induction of an irreversibly altered cell and is, for genotoxic carcinogens, always equated with a mutational event. Mutagenesis as a mechanism of carcinogenesis was already hypothesized by Theodor Boveri in 1914, and many of his assumptions and predictions have subsequently been proven true. Because irreversible and self-replicating mutagenic effects can be caused by the smallest amount of a DNA-modifying carcinogen, no threshold is assumed. Promotion is the process by which the initiated cell expands (clonally) by a series of divisions and forms (pre)neoplastic lesions. There is considerable debate as to whether initiated cells undergo additional genetic changes during this promotion phase.
Finally, in the progression stage, “immortality” is attained and fully malignant tumours can develop, influencing angiogenesis and escaping the host’s control systems. Progression is characterized by invasive growth and frequently by metastatic spread of the tumour, and is accompanied by additional genetic changes owing to the instability of proliferating cells and to selection.
Therefore, there are three general mechanisms by which a substance can influence the multistep carcinogenic process. A chemical can induce a relevant genetic alteration, promote or facilitate clonal expansion of an initiated cell or stimulate progression to malignancy by somatic and/or genetic changes.
Risk Assessment Process
Risk can be defined as the predicted or actual frequency of occurrence of an adverse effect on humans or the environment, from a given exposure to a hazard. Risk assessment is a method of systematically organizing the scientific information and its attached uncertainties for description and qualification of the health risks associated with hazardous substances, processes, actions or events. It requires evaluation of relevant information and selection of the models to be used in drawing inferences from that information. Further, it requires explicit recognition of uncertainties and appropriate acknowledgement that alternative interpretation of the available data may be scientifically plausible. The current terminology used in risk assessment was proposed in 1984 by the US National Academy of Sciences. Qualitative risk assessment changed into hazard characterization/identification and quantitative risk assessment was divided into the components dose-response, exposure assessment and risk characterization.
In the following section these components will be briefly discussed in the light of our current knowledge of the process of (chemical) carcinogenesis. It will become clear that the dominant uncertainty in the risk assessment of carcinogens is the dose-response pattern at the low dose levels characteristic of environmental exposure.
Hazard identification
This process identifies which compounds have the potential to cause cancer in humans—in other words it identifies their intrinsic genotoxic properties. Combining information from various sources and on different properties serves as a basis for classification of carcinogenic compounds. In general the following information will be used:
Classification of chemicals into groups based on the assessment of the adequacy of the evidence of carcinogenesis in animals or in man, if epidemiological data are available, is a key process in hazard identification. The best known schemes for categorizing carcinogenic chemicals are those of IARC (1987), EU (1991) and the EPA (1986). An overview of their criteria for classification (e.g., low-dose extrapolation methods) is given in table 1.
Table 1. Comparison of low-dose extrapolation procedures

| | Current US EPA | Denmark | EEC | UK | Netherlands | Norway |
| Genotoxic carcinogen | Linearized multistage procedure using most appropriate low-dose model | MLE from 1- and 2-hit models plus judgement of best outcome | No procedure specified | No model; scientific expertise and judgement from all available data | Linear model using TD50 (Peto method) or “Simple Dutch Method” if no TD50 | No procedure specified |
| Non-genotoxic carcinogen | Same as above | Biologically based model of Thorslund, or multistage or Mantel-Bryan model, based on tumour origin and dose-response | Use NOAEL and safety factors | Use NOEL and safety factors to set ADI | Use NOEL and safety factors to set ADI | — |
One important issue in classifying carcinogens, with sometimes far-reaching consequences for their regulation, is the distinction between genotoxic and non-genotoxic mechanisms of action. The US Environmental Protection Agency (EPA) default assumption for all substances showing carcinogenic activity in animal experiments is that no threshold exists (or at least none can be demonstrated), so there is some risk with any exposure. This is commonly referred to as the non-threshold assumption for genotoxic (DNA-damaging) compounds. The EU and many of its members, such as the United Kingdom, the Netherlands and Denmark, make a distinction between carcinogens that are genotoxic and those believed to produce tumours by non-genotoxic mechanisms. For genotoxic carcinogens quantitative dose-response estimation procedures are followed that assume no threshold, although the procedures might differ from those used by the EPA. For non-genotoxic substances it is assumed that a threshold exists, and dose-response procedures are used that assume a threshold. In the latter case, the risk assessment is generally based on a safety factor approach, similar to the approach for non-carcinogens.
It is important to keep in mind that these different schemes were developed to deal with risk assessments in different contexts and settings. The IARC scheme was not produced for regulatory purposes, although it has been used as a basis for developing regulatory guidelines. The EPA scheme was designed to serve as a decision point for entering quantitative risk assessment, whereas the EU scheme is currently used to assign a hazard (classification) symbol and risk phrases to the chemical's label. A more extended discussion on this subject is presented in a recent review (Moolenaar 1994) covering procedures used by eight governmental agencies and two often-cited independent organizations, the International Agency for Research on Cancer (IARC) and the American Conference of Governmental Industrial Hygienists (ACGIH).
The classification schemes generally do not take into account the extensive negative evidence that may be available. Also, in recent years a greater understanding of the mechanisms of action of carcinogens has emerged. Evidence has accumulated that some mechanisms of carcinogenicity are species-specific and are not relevant for man. The following examples illustrate this important phenomenon. First, studies on the carcinogenicity of diesel particles have demonstrated that rats respond with lung tumours to a heavy loading of the lung with particles. However, lung cancer is not seen in coal miners with very heavy lung burdens of particles. Secondly, there is the assertion of the non-relevance of renal tumours in the male rat, on the basis that the key element in the tumourigenic response is the accumulation in the kidney of α2u-globulin, a protein that does not exist in humans (Borghoff, Short and Swenberg 1990). Disturbances of rodent thyroid function and peroxisome proliferation or mitogenesis in the mouse liver should also be mentioned in this respect.
This knowledge allows a more sophisticated interpretation of the results of a carcinogenicity bioassay. Research towards a better understanding of the mechanisms of action of carcinogenicity is encouraged because it may lead to an altered classification and to the addition of a category in which chemicals are classified as not carcinogenic to humans.
Exposure assessment
Exposure assessment is often thought to be the component of risk assessment with the least inherent uncertainty, because of the ability to monitor exposures in some cases and the availability of relatively well-validated exposure models. This is only partially true, however, because most exposure assessments are not conducted in ways that take full advantage of the range of available information. For that reason there is a great deal of room for improving exposure distribution estimates. This holds for external as well as internal exposure assessments. Especially for carcinogens, the use of target tissue doses rather than external exposure levels in modelling dose-response relationships would lead to more relevant predictions of risk, although many assumptions on default values are involved. Physiologically based pharmacokinetic (PBPK) models to determine the amount of reactive metabolites that reaches the target tissue are potentially of great value in estimating these tissue doses.
Risk characterization
Current approaches
A key consideration in risk characterization is the relation between the dose or exposure level that causes an effect in an animal study and the dose likely to cause a similar effect in humans. This includes both dose-response assessment from high to low dose and interspecies extrapolation. The extrapolation presents a logical problem, namely that data are being extrapolated many orders of magnitude below the experimental exposure levels by empirical models that do not reflect the underlying mechanisms of carcinogenicity. This violates a basic principle in the fitting of empirical models, namely not to extrapolate outside the range of the observable data. This empirical extrapolation therefore results in large uncertainties, from both a statistical and a biological point of view. At present no single mathematical procedure is recognized as the most appropriate for low-dose extrapolation in carcinogenesis. The mathematical models that have been used to describe the relation between the administered external dose, time and tumour incidence are based on tolerance-distribution or mechanistic assumptions, and sometimes on both. A summary of the most frequently cited models (Kramer et al. 1995) is listed in table 2.
Table 2. Frequently cited models in carcinogen risk characterization
| Tolerance distribution models | Mechanistic models: hit models | Mechanistic models: biologically based models |
| Logit | One-hit | Moolgavkar (MVK)1 |
| Probit | Multihit | Cohen and Ellwein |
| Mantel-Bryan | Weibull (Pike)1 | |
| Weibull | Multistage (Armitage-Doll)1 | |
| Gamma Multihit | Linearized multistage | |

1 Time-to-tumour models.
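For orientation, the multistage (Armitage-Doll) family cited in table 2 is commonly written in the following standard form (a textbook formulation, not specific to this chapter); the linearized multistage procedure rests on the fact that at low doses the extra risk over background is dominated by the linear term:

```latex
P(d) \;=\; 1 - \exp\!\left[-\left(q_0 + q_1 d + q_2 d^2 + \dots + q_k d^k\right)\right], \quad q_i \ge 0,
\qquad
\text{extra risk}(d) \;=\; \frac{P(d)-P(0)}{1-P(0)} \;\approx\; q_1^{*}\, d \ \ \text{for small } d,
```

where q1* is taken in regulatory practice as an upper confidence bound on the linear coefficient. The divergence of the models at low doses, discussed below, reflects how differently such functional forms behave far outside the fitted range.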
These dose-response models are usually applied to tumour-incidence data corresponding to only a limited number of experimental doses. This is due to the standard design of the applied bioassay. Instead of determining the complete dose-response curve, a carcinogenicity study is in general limited to three (or two) relatively high doses, using the maximum tolerated dose (MTD) as highest dose. These high doses are used to overcome the inherent low statistical sensitivity (10 to 15% over background) of such bioassays, which is due to the fact that (for practical and other reasons) a relatively small number of animals is used. Because data for the low-dose region are not available (i.e., cannot be determined experimentally), extrapolation outside the range of observation is required. For almost all data sets, most of the above-listed models fit equally well in the observed dose range, due to the limited number of doses and animals. However, in the low-dose region these models diverge several orders of magnitude, thereby introducing large uncertainties to the risk estimated for these low exposure levels.
Because the actual form of the dose-response curve in the low-dose range cannot be generated experimentally, mechanistic insight into the process of carcinogenicity is crucial to be able to discriminate on this aspect between the various models. Comprehensive reviews discussing the various aspects of the different mathematical extrapolation models are presented in Kramer et al. (1995) and Park and Hawkins (1993).
Other approaches
Besides the current practice of mathematical modelling, several alternative approaches have been proposed recently.
Biologically motivated models
Currently, the biologically based models such as the Moolgavkar-Venzon-Knudson (MVK) models are very promising, but at present these are not sufficiently well advanced for routine use and require much more specific information than currently is obtained in bioassays. Large studies (4,000 rats) such as those carried out on N-nitrosoalkylamines indicate the size of the study which is required for the collection of such data, although it is still not possible to extrapolate to low doses. Until these models are further developed they can be used only on a case-by-case basis.
Assessment factor approach
The use of mathematical models for extrapolation below the experimental dose range is in effect equivalent to a safety factor approach with a large and ill-defined uncertainty factor. The simplest alternative would be to apply an assessment factor to the apparent “no effect level”, or the “lowest level tested”. The level used for this assessment factor should be determined on a case-by-case basis considering the nature of the chemical and the population being exposed.
Benchmark dose (BMD)
The basis of this approach is a mathematical model fitted to the experimental data within the observable range to estimate or interpolate a dose corresponding to a defined level of effect, such as one, five or ten per cent increase in tumour incidence (ED01, ED05, ED10). As a ten per cent increase is about the smallest change that statistically can be determined in a standard bioassay, the ED10 is appropriate for cancer data. Using a BMD that is within the observable range of the experiment avoids the problems associated with dose extrapolation. Estimates of the BMD or its lower confidence limit reflect the doses at which changes in tumour incidence occurred, but are quite insensitive to the mathematical model used. A benchmark dose can be used in risk assessment as a measure of tumour potency and combined with appropriate assessment factors to set acceptable levels for human exposure.
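To make the interpolation concrete, the following is a minimal Python sketch under stated assumptions: the incidence figures are hypothetical, and a one-hit form stands in for whichever model would actually be fitted; dedicated benchmark dose software fits several models to all dose groups and reports lower confidence limits as well as point estimates.

```python
import math

# Hypothetical bioassay summary: control incidence and one treated group.
background = 0.02            # tumour incidence in controls
dose, incidence = 50.0, 0.30 # dose (mg/kg/day) and incidence in treated group

# Extra risk over background: ER(d) = (P(d) - P(0)) / (1 - P(0))
extra_risk = (incidence - background) / (1.0 - background)

# One-hit model for extra risk, ER(d) = 1 - exp(-q*d), solved for q:
q = -math.log(1.0 - extra_risk) / dose

# Benchmark dose at 10% extra risk (ED10), an interpolation within the
# observed range rather than an extrapolation to low dose:
ed10 = -math.log(1.0 - 0.10) / q
print(f"q = {q:.5f} per mg/kg/day; ED10 = {ed10:.1f} mg/kg/day")
```

Here the ED10 (about 16 mg/kg/day with these illustrative numbers) would then be divided by assessment factors to arrive at an acceptable human exposure.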
Threshold of regulation
Krewski et al. (1990) have reviewed the concept of a “threshold of regulation” for chemical carcinogens. Based on data obtained from the carcinogen potency database (CPDB) for 585 experiments, the dose corresponding to a 10⁻⁶ risk was roughly log-normally distributed around a median of 70 to 90 ng/kg/day. Exposure to dose levels greater than this range would be considered unacceptable. The dose was estimated by linear extrapolation from the TD50 (the dose inducing tumours in 50% of the animals tested) and was within a factor of five to ten of the figure obtained from the linearized multistage model. Unfortunately, the TD50 values will be related to the MTD, which again casts doubt on the validity of the measurement. However, the TD50 will often be within or very close to the experimental data range.
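The extrapolation described is simple linear proportionality from the TD50 down to the dose at the target risk level. As a hedged illustration (the TD50 value here is hypothetical):

```latex
d_{10^{-6}} \;\approx\; TD_{50} \times \frac{10^{-6}}{0.5} \;=\; 2 \times 10^{-6}\, TD_{50};
\qquad TD_{50} = 40\ \mathrm{mg/kg/day} \;\Rightarrow\; d_{10^{-6}} \approx 80\ \mathrm{ng/kg/day},
```

which falls within the 70 to 90 ng/kg/day median range cited above.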
An approach using a threshold of regulation would require much more consideration of biological, analytical and mathematical issues, and a much wider database, before it could be considered. Further investigation into the potencies of various carcinogens may throw further light on this area.
Objectives and Future of Carcinogen Risk Assessment
Looking back at the original expectations for the regulation of (environmental) carcinogens, namely to achieve a major reduction in cancer, the results at present appear disappointing. Over the years it became apparent that the number of cancer cases estimated to be produced by regulatable carcinogens was disconcertingly small. Considering the high expectations that launched the regulatory efforts in the 1970s, the anticipated major reduction in the cancer death rate attributable to environmental carcinogens has not been achieved, not even with ultraconservative quantitative assessment procedures. The main characteristic of the EPA procedures is that low-dose extrapolations are made in the same way for each chemical regardless of the mechanism of tumour formation in experimental studies. It should be noted, however, that this approach stands in sharp contrast to approaches taken by other governmental agencies. As indicated above, the EU and several European governments—Denmark, France, Germany, Italy, the Netherlands, Sweden, Switzerland, the UK—distinguish between genotoxic and non-genotoxic carcinogens, and approach risk estimation differently for the two categories. In general, non-genotoxic carcinogens are treated as threshold toxicants: no-effect levels are determined, and uncertainty factors are used to provide an ample margin of safety. Determining whether or not a chemical should be regarded as non-genotoxic is a matter of scientific debate and requires clear expert judgement.
The fundamental issue is: What is the cause of cancer in humans and what is the role of environmental carcinogens in that causation? The hereditary aspects of cancer in humans are much more important than previously anticipated. The key to significant advancement in the risk assessment of carcinogens is a better understanding of the causes and mechanisms of cancer. The field of cancer research is entering a very exciting area. Molecular research may radically alter the way we view the impact of environmental carcinogens and the approaches to control and prevent cancer, both for the general public and the workplace. Risk assessment of carcinogens needs to be based on concepts of the mechanisms of action that are, in fact, just emerging. One of the important aspects is the mechanism of heritable cancer and the interaction of carcinogens with this process. This knowledge will have to be incorporated into the systematic and consistent methodology that already exists for the risk assessment of carcinogens.
An Integrated Approach in the Design of Workstations
In ergonomics, the design of workstations is a critical task. There is general agreement that in any work setting, whether blue-collar or white-collar, a well-designed workstation furthers not only the health and well-being of the workers, but also productivity and the quality of the products. Conversely, the poorly designed workstation is likely to cause or contribute to the development of health complaints or chronic occupational diseases, as well as to problems with keeping product quality and productivity at a prescribed level.
To every ergonomist, the above statement may seem trivial. It is also recognized by every ergonomist that working life worldwide is full not only of ergonomic shortcomings but of blatant violations of basic ergonomic principles. It is clearly evident that there is widespread lack of awareness of the importance of workstation design among those responsible: production engineers, supervisors and managers.
It is noteworthy that there is an international trend with respect to industrial work which would seem to underline the importance of ergonomic factors: the increasing demand for improved product quality, flexibility and product delivery precision. These demands are not compatible with a conservative view regarding the design of work and workplaces.
Although in the present context it is the physical factors of workplace design that are of chief concern, it should be borne in mind that the physical design of the workstation cannot in practice be separated from the organization of work. This principle will be made evident in the design process described in what follows. The quality of the end result of the process relies on three supports: ergonomic knowledge, integration with productivity and quality demands, and participation. The process of implementation of a new workstation must cater to this integration, and it is the main focus of this article.
Design considerations
Workstations are meant for work. It must be recognized that the point of departure in the workstation design process is that a certain production goal has to be achieved. The designer—often a production engineer or other person at middle-management level—develops internally a vision of the workplace, and starts to implement that vision through his or her planning media. The process is iterative: from a crude first attempt, the solutions become gradually more and more refined. It is essential that ergonomic aspects be taken into account in each iteration as the work progresses.
It should be noted that ergonomic design of workstations is closely related to ergonomic assessment of workstations. In fact, the structure to be followed here applies equally to the cases where the workstation already exists or when it is in a planning stage.
In the design process, there is a need for a structure which ensures that all relevant aspects be considered. The traditional way to handle this is to use checklists containing a series of those variables which should be taken into account. However, general purpose checklists tend to be voluminous and difficult to use, since in a particular design situation only a fraction of the checklist may be relevant. Furthermore, in a practical design situation, some variables stand out as being more important than others. A methodology to consider these factors jointly in a design situation is required. Such a methodology will be proposed in this article.
Recommendations for workstation design must be based on a relevant set of demands. It should be noted that it is in general not enough to take into account threshold limit values for individual variables. A recognized combined goal of productivity and conservation of health makes it necessary to be more ambitious than in a traditional design situation. In particular, the question of musculoskeletal complaints is a major aspect in many industrial situations, although this category of problems is by no means limited to the industrial environment.
A Workstation Design Process
Steps in the process
In the workstation design and implementation process, there is always an initial need to inform users and to organize the project so as to allow for full user participation and in order to increase the chance of full employee acceptance of the final result. A treatment of this goal is not within the scope of the present treatise, which concentrates on the problem of arriving at an optimal solution for the physical design of the workstation, but the design process nonetheless allows the integration of such a goal. In this process, the following steps should always be considered:
The focus here is on steps one through five. Many times, only a subset of all these steps is actually included in the design of workstations. There may be various reasons for this. If the workstation is a standard design, such as in some VDU working situations, some steps may duly be excluded. However, in most cases the exclusion of some of the steps listed would lead to a workstation of lower quality than what can be considered acceptable. This can be the case when economic or time constraints are too severe, or when there is sheer neglect due to lack of knowledge or insight at management level.
Collection of user-specified demands
It is essential to identify the user of the workplace as any member of the production organization who may be able to contribute qualified views on its design. Users may include, for instance, the workers, the supervisors, the production planners and production engineers, as well as the safety steward. Experience shows clearly that these actors all have their unique knowledge which should be made use of in the process.
The collection of the user-specified demands should meet a number of criteria:
The above set of criteria may be met by using a methodology based on quality function deployment (QFD) according to Sullivan (1986). Here, the user demands may be collected in a session where a mixed group of actors (not more than eight to ten people) is present. All participants are given a pad of removable self-sticking notes. They are asked to write down all workplace demands which they find relevant, each one on a separate slip of paper. Aspects relating to work environment and safety, productivity and quality should be covered. This activity may continue for as long as is found necessary, typically ten to fifteen minutes. After this session, the participants in turn are asked to read out their demands and to stick the notes on a board in the room where everyone in the group can see them. The demands are grouped into natural categories such as lighting, lifting aids, production equipment, reaching requirements and flexibility demands. After the completion of the round, the group is given the opportunity to discuss and comment on the set of demands, one category at a time, with respect to relevance and priority.
The set of user-specified demands collected in a process such as the one described in the above forms one of the bases for the development of the demand specification. Additional information in the process may be produced by other categories of actors, for example, product designers, quality engineers, or economists; however, it is vital to realize the potential contribution that the users can make in this context.
Prioritizing and demand specification
With respect to the specification process, it is essential that the different types of demands be given consideration according to their respective importance; otherwise, all aspects that have been taken into account will have to be considered in parallel, which may tend to make the design situation complex and difficult to handle. This is why checklists, which need to be elaborate if they are to serve the purpose, tend to be difficult to manage in a particular design situation.
It may be difficult to devise a priority scheme which serves all types of workstations equally well. However, on the assumption that manual handling of materials, tools or products is an essential aspect of the work to be carried out in the workstation, there is a high probability that aspects associated with musculoskeletal load will be at the top of the priority list. The validity of this assumption may be checked in the user demand collection stage of the process. Relevant user demands may be, for instance, associated with muscular strain and fatigue, reaching, seeing, or ease of manipulation.
It is essential to realize that it may not be possible to transform all user-specified demands into technical demand specifications. Although such demands may relate to more subtle aspects such as comfort, they may nevertheless be of high relevance and should be considered in the process.
Musculoskeletal load variables
In line with the above reasoning, we shall here apply the view that there is a set of basic ergonomic variables relating to musculoskeletal load which need to be taken into account as a priority in the design process, in order to eliminate the risk of work-related musculoskeletal disorders (WRMDs). This type of disorder is a pain syndrome, localized in the musculoskeletal system, which develops over long periods of time as a result of repeated stresses on a particular body part (Putz-Anderson 1988). The essential variables are muscular force demands, working posture demands and time demands (e.g., Corlett 1988), each discussed in turn below.
With respect to muscular force, criteria setting may be based on a combination of biomechanical, physiological and psychological factors. This is a variable that is operationalized through measurement of output force demands, in terms of handled mass or required force for, say, the operation of handles. Also, peak loads in connection with highly dynamic work may have to be taken into account.
Working posture demands may be evaluated by mapping (a) situations where the joint structures are stretched beyond the natural range of movement, and (b) certain particularly awkward situations, such as kneeling, twisting, or stooped postures, or work with the hand held above shoulder level.
Time demands may be evaluated on the basis of mapping (a) short-cycle, repetitive work, and (b) static work. It should be noted that static work evaluation may not exclusively concern maintaining a working posture or producing a constant output force over lengthy periods of time; from the point of view of the stabilizing muscles, particularly in the shoulder joint, seemingly dynamic work may have a static character. It may thus be necessary to consider lengthy periods of joint mobilization.
The acceptability of a situation is of course based in practice on the demands on the part of the body that is under the highest strain.
It is important to note that these variables should not be considered one at a time but jointly. For instance, high force demands may be acceptable if they occur only occasionally; lifting the arm above shoulder level once in a while is not normally a risk factor. But combinations among such basic variables must be considered. This tends to make criteria setting difficult and involved.
In the Revised NIOSH equation for the design and evaluation of manual handling tasks (Waters et al. 1993), this problem is addressed by devising an equation for recommended weight limits which takes into account the following mediating factors: horizontal distance, vertical lifting height, lifting asymmetry, handle coupling and lifting frequency. In this way, the 23-kilogram acceptable load limit based on biomechanical, physiological and psychological criteria under ideal conditions, may be modified substantially upon taking into account the specifics of the working situation. The NIOSH equation provides a base for evaluation of work and workplaces involving lifting tasks. However, there are severe limitations as to the usability of the NIOSH equation: for instance, only two-handed lifts may be analysed; scientific evidence for analysis of one-handed lifts is still inconclusive. This illustrates the problem of applying scientific evidence exclusively as a basis for work and workplace design: in practice, scientific evidence must be merged with educated views of persons who have direct or indirect experience of the type of work considered.
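As a sketch of how the equation is applied in practice (the multiplier forms are the metric ones published by Waters et al. 1993, while the task values and the table-derived frequency and coupling multipliers below are hypothetical):

```python
def recommended_weight_limit(H, V, D, A, FM, CM):
    """Revised NIOSH lifting equation, metric form (Waters et al. 1993).

    H  horizontal distance of the load from the ankles (cm)
    V  vertical height of the hands at the origin of the lift (cm)
    D  vertical travel distance of the lift (cm)
    A  asymmetry angle (degrees)
    FM frequency multiplier, read from the published table
    CM coupling multiplier, read from the published table
    """
    LC = 23.0                          # load constant (kg), ideal conditions
    HM = 25.0 / max(H, 25.0)           # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)   # vertical multiplier
    DM = 0.82 + 4.5 / max(D, 25.0)     # distance multiplier
    AM = 1.0 - 0.0032 * A              # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Hypothetical lift: load 40 cm in front of the ankles, hands starting at
# 30 cm, raised 50 cm, 30 degrees of trunk twist, FM and CM from the tables.
print(round(recommended_weight_limit(H=40, V=30, D=50, A=30, FM=0.88, CM=0.95), 1))
```

With these values the 23 kg ideal-condition limit falls to roughly 8.6 kg, illustrating how strongly the mediating factors can modify the recommended weight.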
The cube model
Ergonomic evaluation of workplaces, taking into account the complex set of variables which need to be considered, is to a large extent a communications problem. Based on the prioritizing discussion described above, a cube model for ergonomic evaluation of workplaces was developed (Kadefors 1993). Here the prime goal was to develop a didactic tool for communication purposes, based on the assumption that output force, posture and time measures in a great majority of situations constitute interrelated, prioritized basic variables.
For each one of the basic variables, it is recognized that the demands may be grouped with respect to severity. Here, it is proposed that such a grouping may be made in three classes: (1) low demands, (2) medium demands or (3) high demands. The demand levels may be set either by using whatever scientific evidence is available or by taking a consensus approach with a panel of users. These two alternatives are of course not mutually exclusive, and may well entail similar results, but probably with different degrees of generality.
As noted above, combinations of the basic variables determine to a large extent the risk level with respect to the development of musculoskeletal complaints and cumulative trauma disorders. For instance, high time demands may render a working situation unacceptable in cases where there are also at least medium level demands with respect to force and posture. It is essential in the design and assessment of workplaces that the most important variables be considered jointly. Here a cube model for such evaluation purposes is proposed. The basic variables—force, posture and time—constitute the three axes of the cube. For each combination of demands a subcube may be defined; in all, the model incorporates 27 such subcubes (see figure 1).
Figure 1. The "cube model" for ergonomics assessment. Each cube represents a combination of demands relating to force, posture and time. Light: acceptable combination; gray: conditionally acceptable; black: unacceptable
An essential aspect of the model is the degree of acceptability of the demand combinations. In the model, a three-zone classification scheme is proposed for acceptability: (1) the situation is acceptable, (2) the situation is conditionally acceptable or (3) the situation is unacceptable. For didactic purposes, each subcube may be given a certain texture or colour (say, green-yellow-red). Again, the assessment may be user-based or based on scientific evidence. The conditionally acceptable (yellow) zone means that “there exists a risk of disease or injury that cannot be neglected, for the whole or a part of the operator population in question” (CEN 1994).
In order to develop this approach, it is useful to consider a case: the evaluation of load on the shoulder in moderately paced one-handed materials handling. This is a good example, since in this type of situation, it is normally the shoulder structures that are under the heaviest strain.
With respect to the force variable, classification may be based in this case on handled mass. Here, low force demand is identified as levels below 10% of maximal voluntary lifting capacity (MVLC), which amounts to approximately 1.6 kg in an optimal working zone. High force demand requires more than 30% MVLC, approximately 4.8 kg. Medium force demand falls in between these limits. Low postural strain is when the upper arm is close to the thorax. High postural strain is when humeral abduction or flexion exceeds 45°. Medium postural strain is when the abduction/flexion angle is between 15° and 45°. Low time demand is when the handling occupies less than one hour per working day on and off, or continuously for less than 10 minutes per day. High time demand is when the handling takes place for more than four hours per working day, or continuously for more than 30 minutes (sustained or repetitively). Medium time demand is when the exposure falls between these limits.
In figure 1, degrees of acceptability have been assigned to combinations of demands. For instance, it is seen that high time demands may only be combined with combined low force and postural demands. Moving from unacceptable to acceptable may be undertaken by reducing demands in either dimension, but reduction in time demands is the most efficient way in many cases. In other words, in some cases workplace design should be altered, in other cases it may be more efficient to change the organization of work.
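The joint evaluation can be made concrete in a short sketch. The class limits below are those given above for the one-handed shoulder-load example; the mapping from class combinations to acceptability is only an illustrative stand-in, since the actual assignment of the 27 subcubes (figure 1) is made by a consensus panel or from scientific evidence:

```python
def classify_force(mass_kg):
    """Force class from handled mass: <10% MVLC (~1.6 kg) low,
    >30% MVLC (~4.8 kg) high, medium in between."""
    return 1 if mass_kg < 1.6 else (2 if mass_kg <= 4.8 else 3)

def classify_posture(abduction_deg):
    """Posture class from humeral abduction/flexion angle (degrees)."""
    return 1 if abduction_deg < 15 else (2 if abduction_deg <= 45 else 3)

def classify_time(hours_per_day):
    """Time class from intermittent daily handling duration (hours)."""
    return 1 if hours_per_day < 1 else (2 if hours_per_day <= 4 else 3)

def acceptability(force, posture, time):
    """Illustrative stand-in for the figure 1 mapping: the class sum is
    used only to show that the variables are judged jointly, not one at
    a time. High time plus low force and low posture (sum 5) remains
    acceptable, consistent with the discussion in the text."""
    total = force + posture + time
    if total <= 5:
        return "acceptable"
    if total <= 7:
        return "conditionally acceptable"
    return "unacceptable"

# A 3 kg part handled at 50 degrees of abduction for 5 h/day:
print(acceptability(classify_force(3.0), classify_posture(50.0),
                    classify_time(5.0)))   # -> unacceptable
```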
Using a consensus panel with a set of users for definition of demand levels and classification of degree of acceptability may enhance the workstation design process considerably, as considered below.
Additional variables
In addition to the basic variables considered above, a set of variables and factors characterizing the workplace from an ergonomics point of view has to be taken into account, depending upon the particular conditions of the situation to be analysed. They include:
To a large extent these factors may be considered one at a time; hence the checklist approach may be useful. Grandjean (1988) covers in his textbook the essential aspects that usually need to be taken into account in this context. Konz (1990) provides in his guidelines for workstation organization and design a set of leading questions focusing on the worker-machine interface in manufacturing systems.
In the design process followed here, the checklist should be read in conjunction with the user-specified demands.
A Workstation Design Example: Manual Welding
As an illustrative (hypothetical) example, the design process leading to the implementation of a workstation for manual welding (Sundin et al. 1994) is described here. Welding is an activity that frequently combines high demands for muscular force with high demands for manual precision. The work has a static character. The welder often does welding exclusively. The welding work environment is generally hostile, with a combination of exposure to high noise levels, welding smoke and optical radiation.
The task was to devise a workplace for manual MIG (metal inert gas) welding of medium size objects (up to 300 kg) in a workshop environment. The workstation had to be flexible since there was a variety of objects to be manufactured. There were high demands for productivity and quality.
A QFD process was carried out in order to provide a set of workstation demands in user terms. Welders, production engineers and product designers were involved. User demands, which are not listed here, covered a wide range of aspects including ergonomics, safety, productivity and quality.
Using the cube model approach, the panel identified, by consensus, limits between high, moderate and low load:
It was clear from assessment using the cube model (figure 1) that high time demands could not be accepted if there were concurrent high or moderate demands in terms of force and postural strain. In order to reduce these demands, mechanized object handling and tool suspension were deemed a necessity. Consensus developed around this conclusion. Using a simple computer-aided design (CAD) program (ROOMER), an equipment library was created. Various workstation layouts could be developed very easily and modified in close interaction with the users. This design approach has significant advantages compared with merely looking at plans. It gives the user an immediate vision of what the intended workplace may look like.
Figure 2. A CAD version of a workstation for manual welding, arrived at in the design process
Figure 2 shows the welding workstation arrived at using the CAD system. It is a workplace which reduces the force and posture demands, and which meets nearly all the residual user demands put forward.
Figure 3. The welding workstation implemented
On the basis of the results of the first stages of the design process, a welding workplace (figure 3) was implemented. Assets of this workplace include:
In a real design situation, compromises of various kinds may have to be made, owing to economic, space and other constraints. It should be noted, however, that licensed welders are hard to come by for the welding industry around the world, and they represent a considerable investment. Very few welders remain active welders until normal retirement. Keeping the skilled welder on the job is beneficial for all parties involved: welder, company and society. For instance, there are very good reasons why equipment for object handling and positioning should be an integral constituent of many welding workplaces.
Data for Workstation Design
In order to be able to design a workplace properly, extensive sets of basic information may be needed. Such information includes anthropometric data of user categories, lifting strength and other output force capacity data of male and female populations, specifications of what constitutes optimal working zones and so forth. In the present article, references to some key papers are given.
The most complete treatment of virtually all aspects of work and workstation design is probably still the textbook by Grandjean (1988). Information on a wide range of anthropometric aspects relevant to workstation design is presented by Pheasant (1986). Large amounts of biomechanical and anthropometric data are given by Chaffin and Andersson (1984). Konz (1990) has presented a practical guide to workstation design, including many useful rules of thumb. Evaluation criteria for the upper limb, particularly with reference to cumulative trauma disorders, have been presented by Putz-Anderson (1988). An assessment model for work with hand tools was given by Sperling et al. (1993). With respect to manual lifting, Waters and co-workers have developed the revised NIOSH equation, summarizing existing scientific knowledge on the subject (Waters et al. 1993). Specification of functional anthropometry and optimal working zones have been presented by, for example, Rebiffé, Zayana and Tarrière (1969) and Das and Grady (1983a, 1983b). Mital and Karwowski (1991) have edited a useful book reviewing various aspects relating in particular to the design of industrial workplaces.
The large amount of data needed to design workstations properly, taking all relevant aspects into account, will make necessary the use of modern information technology by production engineers and other responsible people. It is likely that various types of decision-support systems will be made available in the near future, for instance in the form of knowledge-based or expert systems. Reports on such developments have been given by, for example, DeGreve and Ayoub (1987), Laurig and Rombach (1989), and Pham and Onder (1992). However, it is an extremely difficult task to devise a system making it possible for the end-user to have easy access to all relevant data needed in a specific design situation.
The entire topic of personal protection must be considered in the context of control methods for preventing occupational injuries and diseases. This article presents a detailed technical discussion of the types of personal protection which are available, the hazards for which their use may be indicated and the criteria for selecting appropriate protective equipment. Where they are applicable, the approvals, certifications and standards which exist for protective devices and equipment are summarized.

In using this information, it is essential to be constantly mindful that personal protection should be considered the method of last resort in reducing the risks found in the workplace. In the hierarchy of methods which may be used to control workplace hazards, personal protection is not the method of first choice. In fact, it is to be used only when the possible engineering controls which reduce the hazard (by methods such as isolation, enclosure, ventilation, substitution, or other process changes), and administrative controls (such as reducing work time at risk for exposure) have been implemented to the extent feasible. There are cases, however, where personal protection is necessary, whether as a short-term or a long-term control, to reduce occupational disease and injury risks. When such use is necessary, personal protective equipment and devices must be used as part of a comprehensive programme which includes full evaluation of the hazards, correct selection and fitting of the equipment, training and education for the people who use the equipment, maintenance and repair to keep the equipment in good working order and overall management and worker commitment to the success of the protection programme.
Elements of a Personal Protection Programme
The apparent simplicity of some personal protective equipment can result in a gross underestimation of the amount of effort and expense required to effectively use this equipment. While some devices are relatively simple, such as gloves and protective footwear, other equipment such as respirators can actually be very complex. The factors which make effective personal protection difficult to achieve are inherent in any method which relies upon modification of human behaviour to reduce risk, rather than on protection which is built into the process at the source of the hazard. Regardless of the particular type of protective equipment being considered, there is a set of elements which must be included in a personal protection programme.
Hazard evaluation
If personal protection is to be an effective answer to a problem of occupational risk, the nature of the risk itself and its relationship to the overall work environment must be fully understood. While this may seem so obvious that it barely needs to be mentioned, the apparent simplicity of many protective devices can present a strong temptation to short-cut this evaluation step. The consequences of providing protective devices and equipment which are not suitable to the hazards and the overall work environment range from reluctance or refusal to wear inappropriate equipment, to impaired job performance, to risk of worker injury and death. In order to achieve a proper match between the risk and the protective measure, it is necessary to know the composition and magnitude (concentration) of the hazards (including chemical, physical or biological agents), the length of time for which the device will be expected to perform at a known level of protection, and the nature of the physical activity which may be performed while the equipment is in use. This preliminary evaluation of the hazards is an essential diagnostic step which must be accomplished before moving on to selecting the appropriate protection.
Selection
The selection step is dictated in part by the information obtained in hazard evaluation, matched with the performance data for the protective measure being considered for use and the level of exposure which will remain after the personal protective measure is in place. In addition to these performance-based factors, there are guidelines and standards of practice in selecting equipment, particularly for respiratory protection. The selection criteria for respiratory protection have been formalized in publications such as Respirator Decision Logic from the National Institute for Occupational Safety and Health (NIOSH) in the United States. The same sort of logic can be applied to selecting other types of protective equipment and devices, based upon the nature and magnitude of the hazard, the degree of protection provided by the device or equipment, and the quantity or concentration of the hazardous agent which will remain and be considered acceptable while the protective devices are in use. In selecting protective devices and equipment, it is important to recognize that they are not intended to reduce risks and exposures to zero. Manufacturers of devices such as respirators and hearing protectors supply data on the performance of their equipment, such as protection and attenuation factors. By combining three essential pieces of information—namely, the nature and magnitude of the hazard, the degree of protection provided, and the acceptable level of exposure and risk while the protection is in use—equipment and devices can be selected to adequately protect workers.
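The core of this selection logic reduces to a simple ratio, sketched below with hypothetical numbers (the concept follows the decision-logic approach described above; actual selection must use the published assigned protection factors and the approval conditions for the specific device):

```python
def required_protection_factor(workplace_conc, exposure_limit):
    """Minimum protection factor so that the concentration reaching the
    wearer does not exceed the acceptable exposure limit."""
    return workplace_conc / exposure_limit

# Hypothetical solvent vapour at 250 ppm against a 25 ppm limit:
rpf = required_protection_factor(250.0, 25.0)
print(f"Required protection factor: {rpf:.0f}")
# A respirator class is adequate only if its assigned protection factor,
# taken from approval or manufacturer data, is at least this value, and
# only if the contaminant and work conditions match its approval.
```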
Fitting
Any protective device must be properly fitted if it is to provide the degree of protection for which it was designed. In addition to the performance of a protective device, proper fit is also an important factor in the acceptance of the equipment and the motivation of people to actually use it. Protection which is ill-fitting or uncomfortable is unlikely to be used as intended. In the worst case, poorly fitted equipment such as clothing and gloves can actually create a hazard when working around machinery. Manufacturers of protective equipment and devices offer a range of sizes and designs of these products, and workers should be provided with protection which fits properly to accomplish its intended purpose.
In the case of respiratory protection, specific requirements for fitting are included in standards such as the United States Occupational Safety and Health Administration’s respiratory protection standards. The principles of assuring proper fit apply over the full range of protective equipment and devices, regardless of whether they are required by a specific standard.
Training and education
Because the nature of protective devices requires modification of human behaviour to isolate the worker from the work environment (rather than to isolate the source of a hazard from the environment), personal protection programmes are unlikely to succeed unless they include comprehensive worker education and training. By comparison, a system (such as local exhaust ventilation) which controls exposure at the source may operate effectively without direct worker involvement. Personal protection, however, requires full participation and commitment by the people who use it and from the management which provides it.
Those responsible for the management and operation of a personal protection programme must be trained in the selection of the proper equipment, in assuring that it is correctly fitted to the people who use it, in the nature of the hazards the equipment is intended to protect against, and the consequences of poor performance or equipment failure. They must also know how to repair, maintain, and clean the equipment, as well as to recognize damage and wear which occurs during its use.
People who use protective equipment and devices must understand the need for the protection, the reasons it is being used in place of (or in addition to) other control methods, and the benefits they will derive from its use. The consequences of unprotected exposure should be clearly explained, as well as the ways users can recognize that the equipment is not functioning properly. Users must be trained in methods of inspecting, fitting, wearing, maintaining, and cleaning protective equipment, and they must also be aware of the limitations of the equipment, particularly in emergency situations.
Maintenance and repair
The costs of equipment maintenance and repair must be fully and realistically assessed in designing any personal protection programme. Protective devices are subject to gradual degradation in performance through normal use, as well as catastrophic failures in extreme conditions such as emergencies. In considering the costs and benefits of using personal protection as a means of hazard control it is very important to recognize that the costs of initiating a programme represent only a fraction of the total expense of operating the programme over time. Equipment maintenance, repair, and replacement must be considered as fixed costs of operating a programme, as they are essential to maintaining the effectiveness of protection. These programme considerations should include such basic decisions as whether single use (disposable) or reusable protective devices should be used, and in the case of reusable devices, the length of service which can be expected before replacement must be reasonably estimated. These decisions may be very clearly defined, as in cases where gloves or respirators are usable only once and are discarded, but in many cases a careful judgement must be made as to the efficacy of reusing protective suits or gloves which have been contaminated by previous use. The decision to discard an expensive protective device rather than risk worker exposure as a result of degraded protection, or contamination of the protective device itself must be made very carefully. Programmes of equipment maintenance and repair must be designed to include mechanisms for making decisions such as these.
Summary
Protective equipment and devices are essential parts of a hazard control strategy. They can be used effectively, provided their appropriate place in the hierarchy of controls is recognized. The use of protective equipment and devices must be supported by a personal protection programme, which assures that the protection actually performs as intended in conditions of use, and that the people who have to wear it can use it effectively in their work activities.
Commonly a tool comprises a head and a handle, sometimes with a shaft or, in the case of the power tool, a body. Since the tool must meet the requirements of multiple users, basic conflicts can arise which may have to be met with compromise. Some of these conflicts derive from limitations in the capacities of the user, and some are intrinsic to the tool itself. It should be remembered, however, that human limitations are inherent and largely immutable, while the form and function of the tool are subject to a certain amount of modification. Thus, in order to effect desirable change, attention must be directed primarily to the form of the tool and, in particular, to the interface between the user and the tool, namely the handle.
The Nature of Grip
The widely accepted characteristics of grip have been defined in terms of a power grip, a precision grip and a hook grip, by which virtually all human manual activities can be accomplished.
In a power grip, such as is used in hammering nails, the tool is held in a clamp formed by the partially flexed fingers and the palm, with counterpressure being applied by the thumb. In a precision grip, such as one uses when adjusting a set screw, the tool is pinched between the flexor aspects of the fingers and the opposing thumb. A modification of the precision grip is the pencil grip, which is self-explanatory and is used for intricate work. A precision grip provides only 20% of the strength of a power grip.
A hook grip is used where there is no requirement for anything other than holding. In the hook grip the object is suspended from the flexed fingers, with or without the support of the thumb. Heavy tools should be designed so that they can be carried in a hook grip.
Grip Thickness
For precision grips, recommended thicknesses have varied from 8 to 16 millimetres (mm) for screwdrivers, and 13 to 30 mm for pens. For power grips applied around a more or less cylindrical object, the fingers should surround more than half the circumference, but the fingers and thumb should not meet. Recommended diameters have ranged from as low as 25 mm to as much as 85 mm. The optimum, varying with hand size, is probably around 55 to 65 mm for males, and 50 to 60 mm for females. Persons with small hands should not perform repetitive actions in power grips of diameter greater than 60 mm.
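Since these recommendations reduce to simple numeric ranges, they can be captured in a short screening routine. The following sketch encodes only the power-grip diameters quoted above; the function name and the rating categories are illustrative, not taken from any standard.

```python
def classify_power_grip_diameter(diameter_mm, female=False):
    """Rate a cylindrical handle against the power-grip diameters
    quoted above: optimum ~55-65 mm (men) or ~50-60 mm (women),
    within an overall reported range of 25-85 mm."""
    lo, hi = (50, 60) if female else (55, 65)
    if lo <= diameter_mm <= hi:
        return "optimal"
    if 25 <= diameter_mm <= 85:
        return "within reported range, but not optimal"
    return "outside reported range"

print(classify_power_grip_diameter(58))               # optimal
print(classify_power_grip_diameter(70, female=True))  # within reported range, but not optimal
```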
Grip Strength and Hand Span
The use of a tool requires strength. Other than for holding, the greatest requirement for hand strength is found in the use of cross-lever action tools such as pliers and crushing tools. The effective force in crushing is a function of the grip strength and the required span of the tool. The maximum functional span between the end of the thumb and the ends of the grasping fingers averages about 145 mm for men and 125 mm for women, with ethnic variations. For an optimal span, which ranges from 45 to 55 mm for both men and women, the grip strength available for a single short-term action ranges from about 450 to 500 newtons for men and 250 to 300 newtons for women, but for repetitive action the recommended requirement is probably closer to 90 to 100 newtons for men and 50 to 60 newtons for women. The operation of many commonly used clamps or pliers is thus beyond the one-handed capacity of many workers, particularly women.
When the handle is that of a screwdriver or similar tool, the available torque is determined by the user’s ability to transmit force to the handle, and thus by both the coefficient of friction between hand and handle and the diameter of the handle. Irregularities in the shape of the handle make little or no difference to the ability to apply torque, although sharp edges can cause discomfort and eventual tissue damage. The diameter of a cylindrical handle that allows the greatest application of torque is 50 to 65 mm, while that for a sphere is 65 to 75 mm.
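The dependence of torque on friction and diameter can be made concrete with a simplified Coulomb-friction model, under which the handle slips once the applied torque exceeds the friction force times the handle radius. This is a sketch under that assumption only; the coefficient of friction and the grip force used below are illustrative values, not figures from the text.

```python
def max_handle_torque_nm(diameter_mm, grip_force_n, mu=0.4):
    """Rough upper bound on torque (N*m) before a cylindrical handle
    slips in the hand, assuming Coulomb friction: T = mu * F * r."""
    radius_m = diameter_mm / 2000.0  # diameter in mm -> radius in m
    return mu * grip_force_n * radius_m

# Doubling the diameter from 30 mm to 60 mm doubles the available torque:
print(f"{max_handle_torque_nm(30, 300):.1f} N*m")  # 1.8 N*m
print(f"{max_handle_torque_nm(60, 300):.1f} N*m")  # 3.6 N*m
```

The same model suggests why surface condition matters: anything that changes the effective coefficient of friction, such as a trace of sweat, changes the torque that can be exerted without slip.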
Handles
Shape of handle
The shape of a handle should maximize contact between skin and handle. It should be generalized and basic, commonly of flattened cylindrical or elliptical section, with long curves and flat planes, or a sector of a sphere, put together in such a manner as to conform to the general contours of the grasping hand. Because of its attachment to the body of a tool, the handle may also take the form of a stirrup, a T-shape or an L-shape, but the portion that contacts the hand will be in the basic form.
The space enclosed by the fingers is, of course, complex. The use of simple curves is a compromise intended to meet the variations represented by different hands and different degrees of flexion. In this regard, it is undesirable to introduce any contour matching of flexed fingers into the handle in the form of ridges and valleys, flutings and indentations, since, in fact, these modifications would not fit a significant number of hands and might indeed, over a prolonged period, cause pressure injury to the soft tissues. In particular, recesses of greater than 3 mm are not recommended.
A modification of the cylindrical section is the hexagonal section, which is of particular value in the design of small calibre tools or instruments. It is easier to maintain a stable grip on a hexagonal section of small calibre than on a cylinder. Triangular and square sections have also been used with varying degrees of success. In these cases, the edges must be rounded to avert pressure injury.
Grip Surface and Texture
It is not by accident that for millennia wood has been the material of choice for tool handles other than those for crushing tools like pliers or clamps. In addition to its aesthetic appeal, wood has been readily available and easily worked by unskilled workers, and has qualities of elasticity, thermal conductivity, frictional resistance and relative lightness in relation to bulk that have made it very acceptable for this and other uses.
In recent years, metal and plastic handles have become more common for many tools, the latter in particular for use with light hammers or screwdrivers. A metal handle, however, transmits more force to the hand, and preferably should be encased in a rubber or plastic sheath. The grip surface should be slightly compressible, where feasible, nonconductive and smooth, and the surface area should be maximized to ensure pressure distribution over as large an area as possible. A foam rubber grip has been used to reduce the perception of hand fatigue and tenderness.
The frictional characteristics of the tool surface vary with the pressure exerted by the hand, with the nature of the surface and contamination by oil or sweat. A small amount of sweat increases the coefficient of friction.
Length of handle
The length of the handle is determined by the critical dimensions of the hand and the nature of the tool. For a hammer to be used by one hand in a power grip, for example, the ideal length ranges from a minimum of about 100 mm to a maximum of about 125 mm. Short handles are unsuitable for a power grip, while a handle shorter than 19 mm cannot be properly grasped between thumb and forefinger and is unsuitable for any tool.
Ideally, for a power tool, or a hand saw other than a coping or fret saw, the handle should accommodate at the 97.5th percentile level the width of the closed hand thrust into it, namely 90 to 100 mm in the long axis and 35 to 40 mm in the short.
Weight and Balance
Weight is not a problem with precision tools. For heavy hammers and power tools a weight between 0.9 kg and 1.5 kg is acceptable, with a maximum of about 2.3 kg. For weights greater than recommended, the tool should be supported by mechanical means.
In the case of a percussion tool such as a hammer, it is desirable to reduce the weight of the handle to the minimum compatible with structural strength and have as much weight as possible in the head. In other tools, the balance should be evenly distributed where possible. In tools with small heads and bulky handles this may not be possible, but the handle should then be made progressively lighter as the bulk increases relative to the size of the head and shaft.
Significance of Gloves
It is sometimes overlooked by tool designers that tools are not always held and operated by bare hands. Gloves are commonly worn for safety and comfort. Safety gloves are seldom bulky, but gloves worn in cold climates may be very heavy, interfering not only with sensory feedback but also with the ability to grasp and hold. The wearing of woollen or leather gloves can add 5 mm to hand thickness and 8 mm to hand breadth at the thumb, while heavy mittens can add as much as 25 mm and 40 mm respectively.
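These allowances translate directly into sizing arithmetic when dimensioning handle apertures. The helper below is a minimal sketch using only the figures quoted above; the dictionary keys and function name are invented for illustration.

```python
# Added hand dimensions from the text: (thickness, breadth at thumb), in mm.
HANDWEAR_ALLOWANCE_MM = {
    "bare": (0, 0),
    "woollen or leather gloves": (5, 8),
    "heavy mittens": (25, 40),
}

def gloved_hand_mm(thickness_mm, breadth_mm, handwear="bare"):
    """Return effective hand dimensions once handwear is allowed for."""
    dt, db = HANDWEAR_ALLOWANCE_MM[handwear]
    return thickness_mm + dt, breadth_mm + db

# A 30 x 90 mm hand in heavy mittens needs a 55 x 130 mm opening:
print(gloved_hand_mm(30, 90, "heavy mittens"))  # (55, 130)
```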
Handedness
The majority of the population in the western hemisphere favours the use of the right hand. A few are functionally ambidextrous, and all persons can learn to operate with greater or lesser efficiency with either hand.
Although the number of left-handed persons is small, wherever feasible the fitting of handles to tools should make the tool workable by either left-handed or right-handed persons (examples include the positioning of the secondary handle on a power tool or the finger loops in scissors or clamps), unless it is clearly inefficient to do so. A case in point is the screw-type fastener, which is designed to take advantage of the powerful supinating muscles of the forearm of a right-handed person while precluding the left-hander from using it with equal effectiveness. This sort of limitation has to be accepted, since the provision of left-hand threads is not an acceptable solution.
Significance of Gender
In general, women tend to have smaller hand dimensions, smaller grasp and some 50 to 70% less strength than men, although of course a few women at the higher percentile end have larger hands and greater strength than some men at the lower percentile end. As a result there exists a significant although undetermined number of persons, mostly female, who have difficulty in manipulating various hand tools which have been designed with male use in mind, including in particular heavy hammers and heavy pliers, as well as metal cutting, crimping and clamping tools and wire strippers. The use of these tools by women may require an undesirable two-handed instead of single-handed function. In a mixed-gender workplace it is therefore essential to ensure that tools of suitable size are available not only to meet the requirements of women, but also to meet those of men who are at the low percentile end of hand dimensions.
Special considerations
The orientation of a tool handle, where feasible, should allow the operating hand to conform to the natural functional position of the arm and hand, namely with the wrist more than half-supinated, abducted about 15° and slightly dorsiflexed, with the little finger in almost full flexion, the others less so and the thumb adducted and slightly flexed, a posture sometimes erroneously called the handshake position. (In a handshake the wrist is not more than half-supinated.) The combination of adduction and dorsiflexion at the wrist with varying flexion of the fingers and thumb generates an angle of grasp comprising about 80° between the long axis of the arm and a line passing through the centre point of the loop created by the thumb and index finger, that is, the transverse axis of the fist.
Forcing the hand into a position of ulnar deviation, that is, with the hand bent towards the little finger, as occurs in using standard pliers, generates pressure on the tendons, nerves and blood vessels within the wrist structure and can give rise to the disabling conditions of tenosynovitis, carpal tunnel syndrome and the like. By bending the handle and keeping the wrist straight (that is, by bending the tool and not the hand), compression of nerves, soft tissues and blood vessels can be avoided. While this principle has long been recognized, it has not been widely accepted by tool manufacturers or the using public. It has particular application in the design of cross-lever action tools such as pliers, as well as knives and hammers.
Pliers and cross-lever tools
Special consideration must be given to the shape of the handles of pliers and similar devices. Traditionally pliers have had curved handles of equal length, the upper curve approximating the curve of the palm of the hand and the lower curve approximating the curve of the flexed fingers. When the tool is held in the hand, the axis between the handles is in line with the axis of the jaws of the pliers. Consequently, in operation, it is necessary to hold the wrist in extreme ulnar deviation, that is, bent towards the little finger, while it is being repeatedly rotated. In this position the use of the hand-wrist-arm segment of the body is extremely inefficient and very stressful on the tendons and joint structures. If the action is repetitive it may give rise to various manifestations of overuse injury.
To counter this problem a new and ergonomically more suitable version of pliers has appeared in recent years. In these pliers the axis of the handles is bent through approximately 45° relative to the axis of the jaws. The handles are thickened to allow a better grasp with less localized pressure on the soft tissues. The upper handle is proportionately longer with a shape that fits into, and around the ulnar side of, the palm. The forward end of the handle incorporates a thumb support. The lower handle is shorter, with a tang, or rounded projection, at the forward end and a curve conforming to the flexed fingers.
While the foregoing is a somewhat radical change, several ergonomically sound improvements can be made in pliers relatively easily. Perhaps the most important, where a power grip is required, is in the thickening and slight flattening of the handles, with a thumb support at the head-end of the handle and a slight flare at the other end. If not integral to the design, this modification can be achieved by encasing the basic metal handle with a fixed or detachable non-conductive sheath made of rubber or an appropriate synthetic material, and perhaps bluntly roughened to improve the tactile quality. Indentation of the handles for fingers is undesirable. For repetitive use it may be desirable to incorporate a light spring into the handle to open it after closing.
The same principles apply to other cross-lever tools, particularly with respect to change in the thickness and flattening of the handles.
Knives
For a general purpose knife, that is, one that is not used in a dagger grasp, it is desirable to include a 15° angle between handle and blade to reduce the stress on joint tissues. The size and shape of handles should conform in general to those for other tools, but to allow for different hand sizes it has been suggested that two sizes of knife handle should be supplied, namely one to fit the 50th to 95th percentile user and one to fit the 5th to 50th percentile. To allow the hand to exert force as close to the blade as possible, the top surface of the handle should incorporate a raised thumb rest.
A knife guard is required to prevent the hand from slipping forward onto the blade. The guard may take several forms, such as a tang, or curved projection, about 10 to 15 mm in length, protruding downwards from the handle, or at right angles to the handle, or a bail guard comprising a heavy metal loop from front to rear of the handle. The thumb rest also acts to prevent slippage.
The handle should conform to general ergonomic guidelines, with a yielding surface resistant to grease.
Hammers
The requirements for hammers have been largely considered above, with the exception of that relating to bending the handle. As noted above, forced and repetitive bending of the wrist may cause tissue damage. By bending the tool instead of the wrist this damage may be reduced. With respect to hammers various angles have been examined, but it would appear that bending the head downward between 10° and 20° may improve comfort, if it does not actually improve performance.
Screwdrivers and scraping tools
The handles of screwdrivers and other tools held in a somewhat similar manner, such as scrapers, files, hand chisels and so on, have some special requirements. Each at one time or another is used with a precision grip or a power grip. Each relies on the functions of the fingers and the palm of the hand for stabilization and the transmission of force.
The general requirements of handles have already been considered. The most common effective shape of a screwdriver handle has been found to be that of a modified cylinder, dome-shaped at the end to receive the palm, and slightly flared where it meets the shaft to provide support to the ends of the fingers. In this manner, torque is applied largely by way of the palm, which is maintained in contact with the handle by way of pressure applied from the arm and the frictional resistance at the skin. The fingers, although transmitting some force, occupy more of a stabilizing role, which is less fatiguing since less power is required. Thus the dome of the head becomes very important in handle design. If there are sharp edges or ridges on the dome or where the dome meets the handle, then either the hand becomes callused and injured, or the transmission of force is transferred towards the less efficient and more readily fatigued fingers and thumb. The shaft is commonly cylindrical, but a triangular shaft has been introduced which provides better support for the fingers, although its use may be more fatiguing.
Where the use of a screwdriver or other fastening tool is so repetitive as to constitute an overuse injury hazard, the manual driver should be replaced with a powered driver slung from an overhead harness in such a manner as to be readily accessible without obstructing the work.
Saws and power tools
Hand saws, with the exception of fret saws and light hacksaws, where a handle like that of a screwdriver is most appropriate, commonly have a handle which takes the form of a closed pistol grip attached to the blade of the saw.
The handle essentially comprises a loop into which the fingers are placed. The loop is effectively a rectangle with curved ends. To allow for gloves it should have internal dimensions of approximately 90 to 100 mm in the long diameter and 35 to 40 mm in the short. The handle in contact with the palm should have the flattened cylindrical shape already mentioned, with compound curves to reasonably fit the palm and the flexed fingers. The width from outer curve to inner curve should be about 35 mm, and the thickness not more than 25 mm.
Curiously, the function of grasping and holding a power tool is very similar to that of holding a saw, and consequently a somewhat similar type of handle is effective. The pistol grip common in power tools is akin to an open saw handle with the sides being curved instead of being flattened.
Most power tools comprise a handle, a body and a head. Placement of the handle is significant. Ideally handle, body and head should be in line so that the handle is attached at the rear of the body and the head protrudes from the front. The line of action is the line of the extended index finger, so that the head is eccentric to the central axis of the body. The centre of mass of the tool, however, is in front of the handle, while the torque is such as to create a turning movement of the body which the hand must overcome. Consequently it would be more appropriate to place the primary handle directly under the centre of mass in such a way that, if necessary, the body juts out behind the handle as well as in front. Alternatively, particularly in a heavy drill, a secondary handle can be placed underneath the drill in such a manner that the drill can be operated with either hand. Power tools are normally operated by a trigger incorporated into the upper front end of the handle and operated by the index finger. The trigger should be designed to be operated by either hand and should incorporate an easily reset latching mechanism to hold the power on when required.
Eye and Face Protection

Eye and face protection includes safety spectacles, goggles, face shields and similar items used to protect against flying particles and foreign bodies, corrosive chemicals, fumes, lasers and radiation. Often, the whole face may need protection against radiation or mechanical, thermal or chemical hazards. Sometimes a face shield may be adequate also for protecting the eyes, but often specific eye protection is necessary, either separately or as a complement to the face protection.
A wide range of occupations require eye and face protectors. Hazards include flying particles and fumes or corrosive solids, liquids or vapours in polishing, grinding, cutting, blasting, crushing, galvanizing and various chemical operations; intense light in laser operations; and ultraviolet or infrared radiation in welding or furnace operations. Of the many types of eye and face protection available, there is a correct type for each hazard. Whole-face protection is preferred for certain severe risks. As needed, hood or helmet type face protectors and face shields are used. Spectacles or goggles may be used for specific eye protection.
The two basic problems in wearing eye and face protectors are (1) how to provide effective protection that is acceptable for wearing over long hours of work without undue discomfort, and (2) how to overcome the unpopularity of eye and face protection arising from restriction of vision. The wearer’s peripheral vision is limited by the side frames; the nose bridge may disturb binocular vision; and misting is a constant problem. Particularly in hot climates or in hot work, additional coverings of the face may become intolerable and may be discarded. Short-term, intermittent operations also create problems, as workers may be forgetful and disinclined to use protection. First consideration should always be given to the improvement of the working environment rather than to the possible need for personal protection. Before or in conjunction with the use of eye and face protection, consideration must be given to the guarding of machines and tools (including interlocking guards), the removal of fumes and dust by exhaust ventilation, the screening of sources of heat or radiation, and the screening of points from which particles may be ejected, such as abrasive grinders or lathes. When the eyes and face can be protected by transparent screens or partitions of appropriate size and quality, these alternatives are to be preferred to the use of personal eye protection.
There are six basic types of eye and face protection:
Figure 1. Common types of spectacles for eye protection with or without sideshield
Figure 2. Examples of goggle-type eye protectors
Figure 3. Face shield type protectors for hot work
Figure 4. Protectors for welders
There are goggles that may be worn over corrective spectacles. It is often better for the hardened lenses of such goggles to be fitted under the guidance of an ophthalmic specialist.
Protection against Specific Hazards
Traumatic and chemical injuries. Face shields or eye protectors are used against flying particles, fumes, dust and chemical hazards. Common types are spectacles (often with side shields), goggles, plastic eye shields and face shields. The helmet type is used when injury risks are expected from various directions. The hood type and the diver’s helmet type are used in sand- and shot-blasting. Transparent plastics of various sorts, hardened glass or a wire screen may be used for protection against certain foreign bodies. Eye cup goggles with plastic or glass lenses or plastic eye shields, as well as diver’s helmet type shields or face shields made of plastic, are used for protection against chemicals.
Materials commonly used include polycarbonates, acrylic resins and fibre-based plastics. Polycarbonates are effective against impacts but may not be suitable against corrosives. Acrylic protectors are weaker against impacts but suitable for protection from chemical hazards. Fibre-based plastics have the advantage of accepting an anti-misting coating, which also prevents electrostatic effects. Thus such plastic protectors may be used not only in physically light work or chemical handling but also in modern clean-room work.
Thermal radiation. Face shields or eye protectors against infrared radiation are used mainly in furnace operations and other hot work involving exposure to high-temperature radiation sources. Protection is usually necessary at the same time against sparks or flying hot objects. Face protectors of the helmet type and the face shield type are mainly used. Various materials are used, including metal wire meshes, punched aluminium plates or similar metal plates, aluminized plastic shields or plastic shields with gold layer coatings. A face shield made of wire mesh can reduce thermal radiation by 30 to 50%. Aluminized plastic shields give good protection from radiant heat. Some examples of face shields against thermal radiation are given in figure 3.
Welding. Goggles, helmets or shields that give maximum eye protection for each welding and cutting process should be worn by operators, welders and their helpers. Effective protection is needed not only against intense light and radiation but also against impacts upon the face, head and neck. Fibreglass-reinforced plastic or nylon protectors are effective but rather expensive. Vulcanized fibre is commonly used as a shield material. As shown in figure 4, both helmet type protectors and hand-held shields are used to protect the eyes and face at the same time. Requirements for the correct filter lenses to be used in various welding and cutting operations are described below.
Wide spectral bands. Welding and cutting processes or furnaces emit radiations in the ultraviolet, visible and infrared bands of the spectrum, which are all able to produce harmful effects upon the eyes. Spectacle type or goggle type protectors similar to those shown in figure 1 and figure 2 as well as welders’ protectors such as those shown in figure 4 can be used. In welding operations, helmet type protection and hand-shield type protectors are generally used, sometimes in conjunction with spectacles or goggles. It should be noted that protection is necessary also for the welder’s assistant.
Transmittance requirements and tolerances for the various shades of filter lenses and filter plates used in eye protection against high-intensity light are shown in table 1. Guides for selecting the correct filter lenses in terms of scales of protection are given in tables 2 through 6.
Table 1. Transmittance requirements (ISO 4850-1979)
| Scale number | Max. transmittance in the ultraviolet, 313 nm (%) | Max. transmittance in the ultraviolet, 365 nm (%) | Luminous transmittance, maximum (%) | Luminous transmittance, minimum (%) | Max. mean transmittance in the infrared, near IR 780 to 1,300 nm (%) | Max. mean transmittance in the infrared, mid IR 1,300 to 2,000 nm (%) |
| --- | --- | --- | --- | --- | --- | --- |
| 1.2 | 0.0003 | 50 | 100 | 74.4 | 37 | 37 |
| 1.4 | 0.0003 | 35 | 74.4 | 58.1 | 33 | 33 |
| 1.7 | 0.0003 | 22 | 58.1 | 43.2 | 26 | 26 |
| 2.0 | 0.0003 | 14 | 43.2 | 29.1 | 21 | 13 |
| 2.5 | 0.0003 | 6.4 | 29.1 | 17.8 | 15 | 9.6 |
| 3 | 0.0003 | 2.8 | 17.8 | 8.5 | 12 | 8.5 |
| 4 | 0.0003 | 0.95 | 8.5 | 3.2 | 6.4 | 5.4 |
| 5 | 0.0003 | 0.30 | 3.2 | 1.2 | 3.2 | 3.2 |
| 6 | 0.0003 | 0.10 | 1.2 | 0.44 | 1.7 | 1.9 |
| 7 | 0.0003 | 0.037 | 0.44 | 0.16 | 0.81 | 1.2 |
| 8 | 0.0003 | 0.013 | 0.16 | 0.061 | 0.43 | 0.68 |
| 9 | 0.0003 | 0.0045 | 0.061 | 0.023 | 0.20 | 0.39 |
| 10 | 0.0003 | 0.0016 | 0.023 | 0.0085 | 0.10 | 0.25 |
| 11 | * | 0.00060 | 0.0085 | 0.0032 | 0.050 | 0.15 |
| 12 | * | 0.00020 | 0.0032 | 0.0012 | 0.027 | 0.096 |
| 13 | * | 0.000076 | 0.0012 | 0.00044 | 0.014 | 0.060 |
| 14 | * | 0.000027 | 0.00044 | 0.00016 | 0.007 | 0.04 |
| 15 | * | 0.0000094 | 0.00016 | 0.000061 | 0.003 | 0.02 |
| 16 | * | 0.0000034 | 0.000061 | 0.000029 | 0.003 | 0.02 |

* For scale numbers 11 to 16, the value at 313 nm shall be less than or equal to the transmittance permitted for 365 nm.
Taken from ISO 4850:1979 and reproduced with the permission of the International Organization for Standardization (ISO). These standards can be obtained from any ISO member or from the ISO Central Secretariat, Case postale 56, 1211 Geneva 20, Switzerland. Copyright remains with ISO.
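Although the article gives only the tabulated values, the scale numbers in table 1 follow the relation used in the related European filter standards (e.g., EN 169), N = 1 + (7/3)·log10(1/τ), where τ is the luminous transmittance expressed as a fraction. The snippet below states this as background, as an assumption rather than as part of the text quoted here.

```python
import math

def scale_number(tau):
    """Shade scale number from luminous transmittance tau (a fraction),
    assuming N = 1 + (7/3) * log10(1/tau), the relation used in the
    European filter standards related to ISO 4850."""
    return 1.0 + (7.0 / 3.0) * math.log10(1.0 / tau)

# Scale 5 in table 1 spans 1.2% to 3.2% luminous transmittance;
# the geometric mid-point of that band maps back to ~5:
tau_mid = math.sqrt(0.012 * 0.032)
print(round(scale_number(tau_mid), 2))  # 4.98, matching scale 5
```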
Table 2. Scales of protection to be used for gas-welding and braze-welding
l = flow rate of acetylene, in litres per hour.

| Work to be carried out1 | l ≤ 70 | 70 < l ≤ 200 | 200 < l ≤ 800 | l > 800 |
| --- | --- | --- | --- | --- |
| Welding and braze-welding | 4 | 5 | 6 | 7 |
| Welding with emittive fluxes | 4a | 5a | 6a | 7a |
1 According to the conditions of use, the next greater or the next smaller scale can be used.
Taken from ISO 4850:1979 and reproduced with the permission of the International Organization for Standardization (ISO). These standards can be obtained from any ISO member or from the ISO Central Secretariat, Case postale 56, 1211 Geneva 20, Switzerland. Copyright remains with ISO.
Table 3. Scales of protection to be used for oxygen cutting
Flow rate of oxygen, in litres per hour.

| Work to be carried out1 | 900 to 2,000 | 2,000 to 4,000 | 4,000 to 8,000 |
| --- | --- | --- | --- |
| Oxygen cutting | 5 | 6 | 7 |
1 According to the conditions of use, the next greater or the next smaller scale can be used.
NOTE: Flow rates of 900 to 2,000 and 2,000 to 8,000 litres of oxygen per hour correspond fairly closely to the use of cutting nozzles of 1 to 1.5 mm and 2 mm diameter respectively.
Taken from ISO 4850:1979 and reproduced with the permission of the International Organization for Standardization (ISO). These standards can be obtained from any ISO member or from the ISO Central Secretariat, Case postale 56, 1211 Geneva 20, Switzerland. Copyright remains with ISO.
Table 4. Scales of protection to be used for plasma arc cutting
l = current, in amperes.

| Work to be carried out1 | l ≤ 150 | 150 < l ≤ 250 | 250 < l ≤ 400 |
| --- | --- | --- | --- |
| Thermal cutting | 11 | 12 | 13 |
1 According to the conditions of use, the next greater or the next smaller scale can be used.
Taken from ISO 4850:1979 and reproduced with the permission of the International Organization for Standardization (ISO). These standards can be obtained from any ISO member or from the ISO Central Secretariat, Case postale 56, 1211 Geneva 20, Switzerland. Copyright remains with ISO.
Table 5. Scales of protection to be used for electric arc welding or gouging
1 According to the conditions of use, the next greater or the next smaller scale can be used.
2 The expression “heavy metals” applies to steels, alloy steels, copper and its alloys, etc.
NOTE: The coloured areas correspond to the ranges in which welding operations are not usually performed in the current practice of manual welding.
Taken from ISO 4850:1979 and reproduced with the permission of the International Organization for Standardization (ISO). These standards can be obtained from any ISO member or from the ISO Central Secretariat, Case postale 56, 1211 Geneva 20, Switzerland. Copyright remains with ISO.
Table 6. Scales of protection to be used for plasma direct arc welding
1 According to the conditions of use, the next greater or the next smaller scale can be used.
NOTE: The coloured areas correspond to the ranges in which welding operations are not usually performed in the current practice of manual welding.
Taken from ISO 4850:1979 and reproduced with the permission of the International Organization for Standardization (ISO). These standards can be obtained from any ISO member or from the ISO Central Secretariat, Case postale 56, 1211 Geneva 20, Switzerland. Copyright remains with ISO.
A new development is the use of filter plates incorporating liquid-crystal surfaces which increase their protective shade as soon as the welding arc starts. The time for this nearly instantaneous shade increase can be as short as 0.1 ms. The good visibility through the plates in non-welding situations can encourage their use.
Laser beams. No one type of filter offers protection from all laser wavelengths. Different kinds of lasers vary in wavelength, and some lasers produce beams of more than one wavelength, or beams whose wavelengths are changed in passing through optical systems. Consequently, laser-using firms should not depend solely on laser protectors to protect an employee’s eyes from laser burns. Nevertheless, laser operators frequently need eye protection. Both spectacles and goggles are available, with shapes similar to those shown in figure 1 and figure 2. Each kind of eyewear has maximum attenuation at a specific laser wavelength, and protection falls off rapidly at other wavelengths. It is therefore essential to select eyewear appropriate to the kind of laser, its wavelength and its optical density. The eyewear must provide protection from reflections and scattered light, and the utmost precautions are necessary to foresee and avoid harmful radiation exposure.
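The role of optical density can be illustrated numerically. As a hedged sketch of standard laser-safety arithmetic (as practised under, e.g., ANSI Z136, and not detailed in this article), a filter of optical density OD transmits a fraction 10^(−OD) of the incident beam, so the minimum OD follows from the ratio of the anticipated exposure to the maximum permissible exposure (MPE). The numbers below are hypothetical.

```python
import math

def required_od(exposure, mpe):
    """Minimum optical density so that exposure * 10**(-OD) <= MPE.
    Both arguments must be in the same units (e.g., W/cm2) and refer
    to the laser's wavelength."""
    return math.log10(exposure / mpe)

# Hypothetical example: 2.5 W/cm2 anticipated against an MPE of 2.5e-3 W/cm2:
print(round(required_od(2.5, 2.5e-3), 2))  # 3.0 -> select eyewear of OD >= 3
```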
With the use of eye and face protectors, due attention must be paid to greater comfort and efficiency. It is important that the protectors be fitted and adjusted by a person who has received some training in this task. Each worker should have the exclusive use of his or her own protector, while communal provision for cleaning and demisting may well be made in larger works. Comfort is particularly important in helmet and hood type protectors as they may become almost intolerably hot during use. Air lines can be fitted to prevent this. Where the risks of the work process allow, some personal choice among different types of protection is psychologically desirable.
The protectors should be examined regularly to ensure that they are in good condition. Care should be taken that they give adequate protection at all times even with the use of corrective vision devices.
Controls, Indicators and Panels

Karl H. E. Kroemer
In what follows, three of the most important concerns of ergonomic design will be examined: first, that of controls, devices to transfer energy or signals from the operator to a piece of machinery; second, indicators or displays, which provide visual information to the operator about the status of the machinery; and third, the combination of controls and displays in a panel or console.
Designing for the Sitting Operator
Sitting is a more stable and less energy-consuming posture than standing, but it restricts the working space, particularly of the feet, more than standing does. However, it is much easier to operate foot controls when sitting than when standing, because little body weight must be transferred by the feet to the ground. Furthermore, if the direction of the force exerted by the foot is partly or largely forward, provision of a seat with a backrest allows the exertion of rather large forces. (A typical example of this arrangement is the pedals of an automobile, which are located in front of the driver, more or less below seat height.) Figure 1 shows schematically the locations in which pedals may be placed for a seated operator. Note that the specific dimensions of that space depend on the anthropometry of the actual operators.
Figure 1. Preferred and regular workspace for feet (in centimetres)
The space for the positioning of hand-operated controls is primarily located in front of the body, within a roughly spherical contour that is centred at either the elbow, at the shoulder, or somewhere between those two body joints. Figure 2 shows schematically that space for the location of controls. Of course, the specific dimensions depend on the anthropometry of the operators.
Figure 2. Preferred and regular workspace for hands (in centimetres)
The space for displays and for controls that must be looked at is bounded by the periphery of a partial sphere in front of the eyes and centred at the eyes. Thus, the reference height for such displays and controls depends on the eye height of the seated operator and on his or her trunk and neck postures. The preferred location for visual targets closer than about one metre is distinctly below the height of the eye, and depends on the closeness of the target and on the posture of the head. The closer the target, the lower it should be located, and it should be in or near the medial (mid-sagittal) plane of the operator.
It is convenient to describe the posture of the head by using the “ear-eye line” (Kroemer 1994a) which, in the side view, runs through the right ear hole and the juncture of the lids of the right eye, while the head is not tilted to either side (the pupils are at the same horizontal level in the frontal view). One usually calls the head position “erect” or “upright” when the pitch angle P (see figure 3) between the ear-eye line and the horizon is about 15°, with the eyes above the height of the ear. The preferred location for visual targets is 25°–65° below the ear-eye line (LOSEE in figure 3), with the lower values preferred by most people for close targets that must be kept in focus. Even though there are large variations in the preferred angles of the line of sight, most subjects, particularly as they become older, prefer to focus on close targets with large LOSEE angles.
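These angles fix the vertical placement of a display once the viewing distance is chosen. The sketch below works through the geometry under two assumptions not spelled out in the text: the ear-eye line is pitched 15° above the horizon, and the distance is measured horizontally from the eyes.

```python
import math

def target_drop_cm(distance_cm, losee_deg, pitch_deg=15.0):
    """Drop of a visual target below eye height, given its horizontal
    distance and a LOSEE angle below the ear-eye line; the line of
    sight then lies (losee_deg - pitch_deg) below the horizontal."""
    return distance_cm * math.tan(math.radians(losee_deg - pitch_deg))

# A close target at 50 cm viewed at LOSEE = 45 degrees sits about 29 cm
# below the eyes; at the shallow end (25 degrees), only about 9 cm below:
print(round(target_drop_cm(50, 45)))  # 29
print(round(target_drop_cm(50, 25)))  # 9
```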
Designing for the Standing Operator
Pedal operation by a standing operator should seldom be required, because otherwise the person must spend too much time standing on one foot while the other foot operates the control. Obviously, simultaneous operation of two pedals by a standing operator is practically impossible. While the operator is standing still, the room for the location of foot controls is limited to a small area below the trunk and slightly in front of it. Walking about would provide more room to place pedals, but that is highly impractical in most cases because of the walking distances involved.
The location for hand-operated controls of a standing operator includes about the same area as for a seated operator, roughly a half sphere in front of the body, with its centre near the shoulders of the operator. For repeated control operations, the preferred part of that half sphere would be its lower section. The area for the location of displays is also similar to the one suited to a seated operator, again roughly a half sphere centred near the operator’s eyes, with the preferred locations in the lower section of that half sphere. The exact locations for displays, and also for controls that must be seen, depend on the posture of the head, as discussed above.
The height of controls is appropriately referenced to the height of the elbow of the operator while the upper arm is hanging from the shoulder. The height of displays and controls that must be looked at is referred to the eye height of the operator. Both depend on the operator’s anthropometry, which may be rather different for short and tall persons, for men and women, and for people of different ethnic origins.
Foot-operated Controls
Two kinds of foot controls should be distinguished. One is used to transfer large energies or forces to a piece of machinery; examples are the pedals on a bicycle or the brake pedal in a heavier vehicle that does not have a power-assist feature. The other, such as an on-off switch, conveys a control signal to the machinery and usually requires only a small force or energy. While it is convenient to consider these two extremes of pedals, there are various intermediate forms, and it is the task of the designer to determine which of the following design recommendations apply best among them.
As mentioned above, repeated or continual pedal operation should be required only from a seated operator. For controls meant to transmit large energies and forces, the following rules apply:
Selection of Controls
Selection among different sorts of controls must be made according to the following needs or conditions:
The functional usefulness of controls also determines selection procedures. The main criteria are as follows:
Table 1. Control movements and expected effects
Functions (table rows): on; off; right; left; raise; lower; retract; extend; increase; decrease; open valve; close valve. Directions of control movement (table columns): up; right; forward; clockwise; press or squeeze; down; left; rearward; back; counter-clockwise; pull (footnote 1); push (footnote 2). In the source table, each function-direction cell is rated + (most preferred), – (less preferred) or left blank (not applicable); for instance, per footnote 3, switching “on” is preferred as an upward movement in the United States and as a downward movement in Europe.
Blank: Not applicable; + Most preferred; – less preferred. 1 With trigger-type control. 2 With push-pull switch. 3 Up in the United States, down in Europe.
Source: Modified from Kroemer 1995.
Table 1 and table 2 help in the selection of proper controls. However, note that there are few “natural” rules for selection and design of controls. Most current recommendations are purely empirical and apply to existing devices and Western stereotypes.
Table 2. Control-effect relations of common hand controls
Effects (table rows): select ON/OFF; select ON/STANDBY/OFF; select OFF/MODE1/MODE2; select one function of several related functions; select one of three or more discrete alternatives; select operating condition; engage or disengage; select one of mutually exclusive functions; set value on scale; select value in discrete steps. Controls (table columns): key-lock; toggle switch; push-button; bar knob; round knob; detent thumbwheel; continuous thumbwheel; crank; rocker switch; lever; joystick; legend switch; slide (footnote 1).
Blank: Not applicable; +: Most preferred; –: Less preferred; = Least preferred. 1 Estimated (no experiments known).
Source: Modified from Kroemer 1995.
Figure 4 presents examples of “detent” controls, characterized by discrete detents or stops in which the control comes to rest. It also depicts typical “continuous” controls where the control operation may take place anywhere within the adjustment range, without the need to be set in any given position.
Figure 4. Some examples of "detent" and "continuous" controls
The sizing of controls is largely a matter of past experiences with various control types, often guided by the desire to minimize the needed space in a control panel, and either to allow simultaneous operations of adjacent controls or to avoid inadvertent concurrent activation. Furthermore, the choice of design characteristics will be influenced by such considerations as whether the controls are to be located outdoors or in sheltered environments, in stationary equipment or moving vehicles, or may involve the use of bare hands or of gloves and mittens. For these conditions, consult readings at the end of the chapter.
Several operational rules govern the arrangement and grouping of controls. These are listed in table 3. For more details, check the references listed at the end of this section and Kroemer, Kroemer and Kroemer-Elbert (1994).
Table 3. Rules for arrangement of controls
| Rule | Description |
| --- | --- |
| Locate for the … | Controls shall be oriented with respect to the operator. If the … |
| Primary controls | The most important controls shall have the most advantageous … |
| Group related … | Controls that are operated in sequence, that are related to a … |
| Arrange for … | If operation of controls follows a given pattern, controls shall … |
| Be consistent | The arrangement of functionally identical or similar controls … |
| Dead-operator … | If the operator becomes incapacitated and either lets go of a … |
| Select codes … | There are numerous ways to help identify controls, to indicate … |
Source: Modified from Kroemer, Kroemer and Kroemer-Elbert 1994.
Reproduced by permission of Prentice-Hall. All rights reserved.
Preventing Accidental Operation
The following are the most important means to guard against inadvertent activation of controls, some of which may be combined:
Note that these designs usually slow the operation of controls, which may be detrimental in case of an emergency.
Data Entry Devices
Nearly all controls can be used to enter data on a computer or other data storage device. However, we are most accustomed to using a keyboard with push-buttons. On the original typewriter keyboard, which has become the standard even for computer keyboards, the keys were arranged in a basically alphabetic sequence, which has been modified for various, often obscure, reasons. In some cases, letters which frequently follow each other in common text were spaced apart so that the original mechanical type bars would not entangle if struck in rapid sequence. “Columns” of keys run in roughly straight lines, as do the “rows” of keys. However, the fingertips are not aligned in this manner, and do not move in this way when the digits of the hand are flexed or extended or moved sideways.
Many attempts have been made over the last hundred years to improve keying performance by changing the keyboard layout. These include relocating keys within the standard layout, or changing the keyboard layout altogether. The keyboard has been divided into separate sections, and sets of keys (such as numerical pads) have been added. Arrangements of adjacent keys may be changed by altering spacing, offset from each other or from reference lines. The keyboard may be divided into sections for the left and right hand, and those sections may be laterally tilted and sloped and slanted.
The dynamics of the operation of push-button keys are important for the user, but are difficult to measure in operation. Thus, the force-displacement characteristics of keys are commonly described by static testing, which is not indicative of actual operation. By current practice, keys on computer keyboards have fairly little displacement (about 2 mm) and display a “snap-back” resistance, that is, a decrease in operating force at the point when actuation of the key has been achieved. Instead of separate single keys, some keyboards consist of a membrane with switches underneath which, when pressed in the correct location, generate the desired input with little or no displacement felt. The major advantage of the membrane is that dust or fluids cannot penetrate it; however, many users dislike it.
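The “snap-back” characteristic can be pictured as a force-displacement profile in which resistance builds to an actuation point and then falls off, so the typist feels the keystroke “make” before bottoming out. The profile below is invented purely for illustration; as noted above, real keys are characterized by static measurement.

```python
def key_force_cn(displacement_mm):
    """Illustrative static force profile (centinewtons) for a key with
    ~2 mm travel and snap-back at ~1.5 mm; all numbers are invented."""
    if displacement_mm < 1.5:
        return 40 + 20 * displacement_mm          # resistance builds
    if displacement_mm < 2.0:
        return 70 - 30 * (displacement_mm - 1.5)  # snap-back after actuation
    return 80                                     # bottoming out

for x in (0.0, 1.0, 1.5, 1.9, 2.0):
    print(f"{x:.1f} mm -> {key_force_cn(x):.0f} cN")
```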
There are alternatives to the “one key-one character” principle; instead, one can generate inputs by various combinatory means. One is “chording”, meaning that two or more controls are operated simultaneously to generate one character. This poses demands on the memory capabilities of the operator, but requires the use of only very few keys. Other developments utilize controls other than the binary tapped push button, replacing it by levers, toggles or special sensors (such as an instrumented glove) which respond to movements of the digits of the hand.
By tradition, typing and computer entry have been done by mechanical interaction between the operator’s fingers and such devices as keyboard, mouse, track ball or light pen. Yet there are many other means of generating inputs. Voice recognition appears to be one promising technique, but other methods can be employed. They might utilize, for example, pointing, gestures, facial expressions, body movements, looking (directing one’s gaze), movements of the tongue, breathing or sign language to transmit information and to generate inputs to a computer. Technical development in this area is very much in flux, and as the many nontraditional input devices used for computer games indicate, acceptance of devices other than the traditional binary tap-down keyboard is entirely feasible within the near future. Discussions of current keyboard devices have been provided, for example, by Kroemer (1994b) and McIntosh (1994).
Displays
Displays provide information about the status of equipment. Displays may apply to the operator’s visual sense (lights, scales, counters, cathode-ray tubes, flat panel electronics, etc.), to the auditory sense (bells, horns, recorded voice messages, electronically generated sounds, etc.) or to the sense of touch (shaped controls, Braille, etc.). Labels, written instructions, warnings or symbols (“icons”) may be considered special kinds of displays.
The four “cardinal rules” for displays are:
The selection of either an auditory or visual display depends on the prevailing conditions and purposes. The objective of the display may be to provide:
A visual display is most appropriate if the environment is noisy, the operator stays in place, the message is long and complex, and especially if it deals with the spatial location of an object. An auditory display is appropriate if the workplace must be kept dark, the operator moves around, and the message is short and simple, requires immediate attention, and deals with events and time.
Visual Displays
There are three basic types of visual displays: (1) The check display indicates whether or not a given condition exists (for example, a green light indicates normal function). (2) The qualitative display indicates the status of a changing variable or its approximate value, or its trend of change (for example, a pointer moves within a “normal” range). (3) The quantitative display shows exact information that must be ascertained (for example, to find a location on a map, to read text or to draw on a computer monitor), or it may indicate an exact numerical value that must be read by the operator (for example, a time or a temperature).
Design guidelines for visual displays are:
Figure 5. Colour coding of indicator lights
For more complex and detailed information, especially quantitative information, one of four different kinds of displays is traditionally used: (1) a moving pointer (with fixed scale), (2) a moving scale (with fixed pointer), (3) counters or (4) “pictorial” displays, especially computer-generated images on a display monitor. Figure 6 lists the major characteristics of these display types.
Figure 6. Characteristics of displays
It is usually preferable to use a moving pointer rather than a moving scale, with the scale either straight (horizontally or vertically arranged), curved or circular. Scales should be simple and uncluttered, with graduation and numbering so designed that correct readings can be taken quickly. Numerals should be located outside the scale markings so that they are not obscured by the pointer. The pointer should end with its tip directly at the marking. The scale should mark divisions only as finely as the operator must read them. All major marks should be numbered. Progressions are best marked with intervals of one, five or ten units between major marks. Numbers should increase left to right, bottom to top or clockwise. For details of the dimensions of scales, refer to standards such as those listed by Cushman and Rosenberg (1991) or Kroemer (1994a).
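The marking rules above lend themselves to a small helper that picks a “nice” interval between numbered major marks, restricted to the progressions of one, five or ten units recommended in the text (scaled by powers of ten). The cap on the number of major marks is an assumption for illustration.

```python
import math

def choose_scale_step(value_range, max_major_marks=10):
    """Smallest step of 1, 5 or 10 units (times a power of ten) that
    keeps the number of numbered major marks within the given cap."""
    raw = value_range / max_major_marks
    magnitude = 10.0 ** math.floor(math.log10(raw))
    for factor in (1, 5, 10):
        step = factor * magnitude
        if value_range / step <= max_major_marks:
            return step
    return 10.0 * magnitude  # unreachable; kept for clarity

print(choose_scale_step(100))  # 10.0 -> major marks at 0, 10, ..., 100
print(choose_scale_step(60))   # 10.0 -> major marks at 0, 10, ..., 60
```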
Starting in the 1980s, mechanical displays with pointers and printed scales were increasingly replaced by “electronic” displays with computer-generated images, or solid-state devices using light-emitting diodes (see Snyder 1985a). The displayed information may be coded by the following means:
Unfortunately, many electronically generated displays have been fuzzy, often overly complex and colourful, hard to read, and have required exact focusing and close attention, which may distract from the main task, for example, driving a car. In these cases the first three of the four “cardinal rules” listed above were often violated. Furthermore, many electronically generated pointers, markings and alphanumerics have not complied with established ergonomic design guidelines, especially when generated by line segments, scan lines or dot matrices. Although some of these defective designs were tolerated by users, rapid innovation and improving display techniques allow many better solutions. However, the same rapid development means that printed statements (even if current and comprehensive when they appear) quickly become obsolete. Therefore, none are given in this text. Compilations have been published by Cushman and Rosenberg (1991), Kinney and Huey (1990), and Woodson, Tillman and Tillman (1991).
The overall quality of electronic displays is often wanting. One measure used to assess image quality is the modulation transfer function (MTF) (Snyder 1985b). It describes the resolution of the display using a special sine-wave test signal; yet readers apply many other criteria in their preferences among displays (Dillon 1992).
Monochrome displays have only one colour, usually either green, yellow, amber, orange or white (achromatic). If several colours appear on the same chromatic display, they should be easily discriminated. It is best to display not more than three or four colours simultaneously (with preference being given to red, green, yellow or orange, and cyan or purple). All should strongly contrast with the background. In fact, a suitable rule is to design first by contrast, that is, in terms of black and white, and then to add colours sparingly.
In spite of the many variables that, singly and interacting with each other, affect the use of complex colour displays, Cushman and Rosenberg (1991) compiled guidelines for the use of colour in displays; these are listed in figure 7.
Figure 7. Guidelines for use of colours in displays
Other suggestions are as follows:
Panels of Controls and Displays
Displays as well as controls should be arranged in panels so they are in front of the operator, that is, close to the person’s medial plane. As discussed earlier, controls should be near elbow height, and displays below or at eye height, whether the operator is sitting or standing. Infrequently operated controls, or less important displays, can be located further to the sides, or higher.
Often, information on the result of control operation is displayed on an instrument. In this case, the display should be located close to the control so that the control setting can be done without error, quickly and conveniently. The assignment is usually clearest when the control is directly below or to the right of the display. Care must be taken that the hand does not cover the display when operating the control.
Popular expectancies of control-display relations exist, but they are often learned, they may depend on the user’s cultural background and experience, and these relationships are often not strong. Expected movement relationships are influenced by the type of control and display. When both are either linear or rotary, the stereotypical expectation is that they move in corresponding directions, such as both up or both clockwise. When the movements are incongruent, in general the following rules apply:
The ratio of control and display displacement (C/D ratio or D/C gain) describes how much a control must be moved to adjust a display. If much control movement produces only a small display motion, one speaks of a high C/D ratio, and of the control as having low sensitivity. Often, two distinct movements are involved in making a setting: first a fast primary (“slewing”) motion to an approximate location, then a fine adjustment to the exact setting. In some cases, one takes as the optimal C/D ratio that which minimizes the sum of these two movements. However, the most suitable ratio depends on the given circumstances; it must be determined for each application.
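The trade-off can be sketched with a toy time model in which slewing time grows with the C/D ratio (an insensitive control needs much movement to travel) while fine-adjustment time shrinks with it. All coefficients below are invented; as the text says, the suitable ratio must be determined for each real application.

```python
import math

def total_time_s(cd_ratio, a=0.5, b=1.2, c=0.3, d=0.8):
    """Toy model: slewing time (a + b*CD) plus adjustment time (c + d/CD)."""
    return (a + b * cd_ratio) + (c + d / cd_ratio)

# For this model, setting the derivative b - d/CD**2 to zero gives the
# minimizing ratio CD = sqrt(d/b):
best = math.sqrt(0.8 / 1.2)
print(round(best, 2), round(total_time_s(best), 2))  # 0.82  2.76
```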
Labels and Warnings
Labels
Ideally, no label should be required on equipment or on a control to explain its use. Often, however, it is necessary to use labels so that one may locate, identify, read or manipulate controls, displays or other equipment items. Labelling must be done so that the information is provided accurately and rapidly. For this, the guidelines in table 4 apply.
Table 4. Guidelines for labels
| Guideline | Description |
| --- | --- |
| Orientation | A label and the information printed on it shall be oriented … |
| Location | A label shall be placed on or very near the item that it … |
| Standardization | Placement of all labels shall be consistent throughout the … |
| Equipment … | A label shall primarily describe the function (“what does it …”) … |
| Abbreviations | Common abbreviations may be used. If a new abbreviation is … |
| Brevity | The label inscription shall be as concise as possible without … |
| Familiarity | Words shall be chosen, if possible, that are familiar to the … |
| Visibility and legibility | The operator shall be able to read the label easily and accurately at … |
| Font and size | Typography determines the legibility of written information; … |
Source: Modified from Kroemer, Kroemer and Kroemer-Elbert 1994
(reproduced by permission of Prentice-Hall; all rights reserved).
Font (typeface) should be simple, bold and vertical, such as Futura, Helvetica, Namel, Tempo and Vega. Note that most electronically generated fonts (formed by LED, LCD or dot matrix) are generally inferior to printed fonts; thus, special attention must be paid to making these as legible as possible.
Suggested character heights increase with viewing distance:
viewing distance 35 cm, suggested height 2.2 mm
viewing distance 70 cm, suggested height 5.0 mm
viewing distance 1 m, suggested height 7.0 mm
viewing distance 1.5 m, suggested height at least 1 cm.
Warnings
Ideally, all devices should be safe to use. In reality, often this cannot be achieved through design. In this case, one must warn users of the dangers associated with product use and provide instructions for safe use to prevent injury or damage.
It is preferable to have an “active” warning, usually consisting of a sensor that notices inappropriate use, combined with an alerting device that warns the human of an impending danger. Yet, in most cases, “passive” warnings are used, usually consisting of a label attached to the product and of instructions for safe use in the user manual. Such passive warnings rely completely on the human user to recognize an existing or potential dangerous situation, to remember the warning, and to behave prudently.
Labels and signs for passive warnings must be carefully designed by following the most recent government laws and regulations, national and international standards, and the best applicable human engineering information. Warning labels and placards may contain text, graphics, and pictures—often graphics with redundant text. Graphics, particularly pictures and pictograms, can be used by persons with different cultural and language backgrounds, if these depictions are selected carefully. However, users with different ages, experiences, and ethnic and educational backgrounds, may have rather different perceptions of dangers and warnings. Therefore, design of a safe product is much preferable to applying warnings to an inferior product.
Foot and Leg Protection

Injuries to the foot and leg are common in many industries. The dropping of a heavy object may injure the foot, particularly the toes, in any workplace, but especially among workers in the heavier industries such as mining, metal manufacture, engineering, and building and construction work. Burns of the lower limbs from molten metal, sparks or corrosive chemicals occur frequently in foundries, iron- and steelworks, chemical plants and so on. Dermatitis or eczema may be caused by a variety of acidic, alkaline and many other agents. The foot may also suffer physical injury caused by striking it against an object or by stepping on sharp protrusions, as can occur in the construction industry.
Improvements in the work environment have made the simple puncturing and laceration of the worker’s foot by protruding floor nails and other sharp hazards less common, but accidents from working on damp or wet floors still occur, particularly when unsuitable footwear is worn.
Types of Protection
The type of foot and leg protection should be related to the risk. In some light industries, it may be sufficient that workers wear well-made ordinary shoes. Many women, for example, will wear footwear that is comfortable to them, such as sandals or old slippers, or footwear with very high or worn-down heels. This practice should be discouraged because such footwear can cause an accident.
Sometimes a protective shoe or clog is adequate, and sometimes a boot or leggings will be required (see figure 1, figure 2 and figure 3). The height to which the footwear covers the ankle, knee or thigh depends on the hazard, although comfort and mobility will also have to be considered. Thus shoes and gaiters may in some circumstances be preferable to high boots.
Figure 1. Safety shoes
Figure 2. Heat protective boots
Protective shoes and boots may be made from leather, rubber, synthetic rubber or plastic and may be fabricated by sewing, vulcanizing or moulding. Since the toes are most vulnerable to impact injuries, a steel toe cap is the essential feature of protective footwear wherever such hazards exist. For comfort the toe cap must be reasonably thin and light, and carbon tool steel is therefore used for this purpose. These safety toe caps may be incorporated into many types of boots and shoes. In some trades where falling objects present a particular risk, metal instep guards may be fitted over protective shoes.
Rubber or synthetic outer soles with various tread patterns are used to minimize or prevent the risk of slipping: this is especially important where floors are likely to be wet or slippery. The material of the sole appears to be of more importance than the tread pattern and should have a high coefficient of friction. Reinforced, puncture-proof soles are necessary in such places as construction sites; metallic insoles can also be inserted into various types of footwear that lack this protection.
Where an electrical hazard exists, shoes should be either entirely stitched or cemented, or directly vulcanized in order to avoid the need for nails or any other electrically conductive fasteners. Where static electricity may be present, protective shoes should have electrically conductive rubber outer soles to allow static electricity to leak from the bottom of the shoes.
Footwear with a dual purpose has now come into common use: shoes or boots that combine the anti-electrostatic properties mentioned above with the ability to protect the wearer from electrical shock on contact with a low-voltage source. In the latter case, the electrical resistance between the insole and the outer sole must be controlled so as to provide this protection within a given voltage range.
In the past, “safety and durability” were the only considerations. Now worker comfort is also taken into account, so that lightness, comfort and even attractiveness are sought-after qualities in protective shoes. The “safety sneaker” is one example of this kind of footwear. Design and colour may come to play a part in the use of footwear as an emblem of corporate identity, a matter that receives special attention in countries such as Japan.
Synthetic rubber boots offer useful protection from chemical injuries: the material should show not more than 10% reduction in tensile strength or elongation after immersion in a 20% solution of hydrochloric acid for 48 hours at room temperature.
Especially in environments where molten metals or chemical burns are a major hazard, it is important that shoes or boots should be without tongues and that the fastenings should be pulled over the top of the boot and not tucked inside.
Rubber or metallic spats, gaiters or leggings may be used to protect the leg above the shoe line, especially from risks of burns. Protective knee pads may be necessary, especially where work involves kneeling, for example in some foundry moulding. Aluminized heat-protective shoes, boots or leggings will be necessary near sources of intense heat.
Use and Maintenance
All protective footwear should be kept clean and dry when not in use and should be replaced as soon as necessary. Where the same rubber boots are used by several people, regular arrangements for disinfection between each use should be made to prevent the spread of foot infections. Boots or shoes that are too tight or too heavy also create a risk of foot mycosis.
The success of any protective footwear depends upon its acceptability, a reality that is now widely recognized in the far greater attention paid to styling. Comfort is a prerequisite, and the shoes should be as light as is consistent with their purpose: shoes weighing more than two kilograms per pair should be avoided.
Foot and leg protection is sometimes required by law to be provided by the employer. Where employers are interested in progressive programmes rather than just meeting legal obligations, they often find it very effective to provide some arrangement for easy purchase at the place of work. If protective wear can be offered at wholesale prices, or convenient extended payment terms are made available, workers may be more willing and able to purchase and use better equipment. In this way, the type of protection obtained and worn can be better controlled. Many conventions and regulations, however, do consider supplying workers with work clothing and protective equipment to be the employer’s obligation.
In designing equipment it is of the utmost importance to take full account of the fact that a human operator has both capabilities and limitations in processing information, which are of a varying nature and which are found on various levels. Performance in actual work conditions strongly depends on the extent to which a design has either attended to or ignored these potentials and their limits. In the following a brief sketch will be offered of some of the chief issues. Reference will be made to other contributions of this volume, where an issue will be discussed in greater detail.
It is common to distinguish three main levels in the analysis of human information processing, namely, the perceptual level, the decision level and the motor level. The perceptual level is subdivided into three further levels, relating to sensory processing, feature extraction and identification of the percept. On the decision level, the operator receives perceptual information and chooses a reaction to it which is finally programmed and actualized on the motor level. This describes only the information flow in the simplest case of a choice reaction. It is evident, though, that perceptual information may accumulate and be combined and diagnosed before eliciting an action. Again, there may arise a need for selecting information in view of perceptual overload. Finally, choosing an appropriate action becomes more of a problem when there are several options some of which may be more appropriate than others. In the present discussion, the emphasis will be on the perceptual and decisional factors of information processing.
Perceptual Capabilities and Limits
Sensory limits
The first category of processing limits is sensory. Their relevance to information processing is obvious since processing becomes less reliable as information approaches threshold limits. This may seem a fairly trivial statement, but nonetheless, sensory problems are not always clearly recognized in designs. For example, alphanumerical characters in sign posting systems should be sufficiently large to be legible at a distance consistent with the need for appropriate action. Legibility, in turn, depends not only on the absolute size of the alphanumericals but also on contrast and—in view of lateral inhibition—also on the total amount of information on the sign. In particular, in conditions of low visibility (e.g., rain or fog during driving or flying) legibility is a considerable problem requiring additional measures. More recently developed traffic signposts and road markers are usually well designed, but signposts near and within buildings are often illegible. Visual display units are another example in which sensory limits of size, contrast and amount of information play an important role. In the auditory domain some main sensory problems are related to understanding speech in noisy environments or in poor quality audio transmission systems.
Feature extraction
Provided there is sufficient sensory information, the next set of information-processing issues relates to extracting features from the information presented. Recent research has provided ample evidence that an analysis of features precedes the perception of meaningful wholes. Feature analysis is particularly useful in locating a special deviant object amidst many others. For instance, an essential value on a display containing many values may be represented by a single deviant colour or size, which feature then draws immediate attention or “pops out”. Theoretically, there is the common assumption of “feature maps” for different colours, sizes, forms and other physical features. The attention value of a feature depends on the difference in activation of the feature maps that belong to the same class, for example, colour. Thus, the activation of a feature map depends on the discriminability of the deviant features. This means that when there are a few instances of many colours on a screen, most colour feature maps are about equally activated, which has the effect that none of the colours pops out.
In the same way a single moving advertisement pops out, but this effect disappears altogether when there are several moving stimuli in the field of view. The principle of the different activation of feature maps is also applied when aligning pointers that indicate ideal parameter values. A deviation of a pointer is indicated by a deviant slope which is rapidly detected. If this is impossible to realize, a dangerous deviation might be indicated by a change in colour. Thus, the general rule for design is to use only a very few deviant features on a screen and to reserve them only for the most essential information. Searching for relevant information becomes cumbersome in the case of conjunctions of features. For example, it is hard to locate a large red object amidst small red objects and large and small green objects. If possible, conjunctions should be avoided when trying to design for efficient search.
Separable versus integral dimensions
Features are separable when they can be changed without affecting the perception of other features of an object. Line lengths of histograms are a case in point. On the other hand, integral features refer to features which, when changed, change the total appearance of the object. For instance, one cannot change features of the mouth in a schematic drawing of a face without altering the total appearance of the picture. Again, colour and brightness are integral in the sense that one cannot change a colour without altering the brightness impression at the same time. The principles of separable and integral features, and of emergent properties evolving from changes of single features of an object, are applied in so-called integrated or diagnostic displays. The rationale of these displays is that, rather than displaying individual parameters, different parameters are integrated into a single display, the total composition of which indicates what may be actually wrong with a system.
Data presentation in control rooms is still often dominated by the philosophy that each individual measure should have its own indicator. Piecemeal presentation of the measures means that the operator has the task of integrating the evidence from the various individual displays so as to diagnose a potential problem. During the incident at the Three Mile Island nuclear power plant in the United States, some forty to fifty displays were registering some form of disorder. Thus, the operator had the task of diagnosing what was actually wrong by integrating the information from that myriad of displays. Integral displays may be helpful in diagnosing the kind of error, since they combine various measures into a single pattern. Different patterns of the integrated display, then, may be diagnostic with regard to specific errors.
A classical example of a diagnostic display, which has been proposed for nuclear control rooms, is shown in figure 1. It displays a number of measures as spokes of equal length so that a regular polygon always represents normal conditions, while different distortions may be connected with different types of problems in the process.
Figure 1. In the normal situation all parameter values are equal, creating a hexagon. In the deviation, some of the values have changed creating a specific distortion.
Not all integral displays are equally discriminable. To illustrate the issue, a positive correlation between the two dimensions of a rectangle creates differences in surface, while maintaining an equal shape. Alternatively, a negative correlation creates differences in shape while maintaining an equal surface. The case in which variation of integral dimensions creates a new shape has been referred to as revealing an emergent property of the patterning, which adds to the operator’s ability to discriminate the patterns. Emergent properties depend upon the identity and arrangement of parts but are not identifiable with any single part.
Object and configural displays are not always beneficial. The very fact that they are integral means that the characteristics of the individual variables are harder to perceive. The point is that, by definition, integral dimensions are mutually dependent, thus clouding their individual constituents. There may be circumstances in which this is unacceptable, while one may still wish to profit from the diagnostic patternlike properties, which are typical for the object display. One compromise might be a traditional bar graph display. On the one hand, bar graphs are quite separable. Yet, when positioned in sufficiently close vicinity, the differential lengths of the bars may together constitute an object-like pattern which may well serve a diagnostic aim.
Some diagnostic displays are better than others. Their quality depends on the extent that the display corresponds to the mental model of the task. For example, fault diagnosis on the basis of distortions of a regular polygon, as in figure 1, may still bear little relationship to the domain semantics or to the concept of the operator of the processes in a power plant. Thus, various types of deviations of the polygon do not obviously refer to a specific problem in the plant. Therefore, the design of the most suitable configural display is one that corresponds to the specific mental model of the task. Thus it should be emphasized that the surface of a rectangle is only a useful object display when the product of length and width is the variable of interest!
Interesting object displays stem from three-dimensional representations. For instance, a three-dimensional representation of air traffic—rather than the traditional two-dimensional radar representation—may provide the pilot with a greater “situational awareness” of other traffic. The three-dimensional display has been shown to be much superior to a two-dimensional one since its symbols indicate whether another aircraft is above or below one’s own.
Degraded conditions
Degraded viewing occurs under a variety of conditions. For some purposes, as with camouflage, objects are intentionally degraded so as to prevent their identification. On other occasions, for example in brightness amplification, features may become too blurred to allow one to identify the object. One research issue has concerned the minimal number of “lines” required on a screen or “the amount of detail” needed in order to avoid degradation. Unfortunately, this approach to image quality has not led to unequivocal results. The problem is that identifying degraded stimuli (e.g., a camouflaged armoured vehicle) depends too much on the presence or absence of minor object-specific details. The consequence is that no general prescription about line density can be formulated, except for the trivial statement that degradation decreases as the density increases.
Features of alphanumeric symbols
A major issue in the process of feature extraction concerns the actual number of features which together define a stimulus. Thus, the legibility of ornate characters like Gothic letters is poor because of the many redundant curves. In order to avoid confusion, the difference between letters with very similar features—like the i and the l, and the c and the e—should be accentuated. For the same reason, it is recommended to make the stroke and tail length of ascenders and descenders at least 40% of the total letter height.
It is evident that discrimination among letters is mainly determined by the number of features which they do not share. These mainly consist of straight line and circular segments which may have horizontal, vertical and oblique orientation and which may differ in size, as in lower- and upper-case letters.
It is obvious that, even when alphanumericals are well discriminable, they may easily lose that property in combination with other items. Thus, the digits 4 and 7 share only a few features, but they do not do well in the context of larger, otherwise identical groups (e.g., 384 versus 387). There is consistent evidence that reading text in lower case is faster than in capitals. This is usually ascribed to the fact that lower-case letters have more distinct features (e.g., dog, cat versus DOG, CAT). The superiority of lower-case letters has been established not only for reading text but also for road signs such as those used for indicating towns at the exits of motorways.
Identification
The final perceptual process is concerned with identification and interpretation of percepts. Human limits arising on this level are usually related to discrimination and finding the appropriate interpretation of the percept. The applications of research on visual discrimination are manifold, relating to alphanumerical patterns as well as to more general stimulus identification. The design of brake lights in cars will serve as an example of the last category. Rear-end accidents account for a considerable proportion of traffic accidents, and are due in part to the fact that the traditional location of the brake light next to the rear lights makes it poorly discriminable and therefore extends the driver’s reaction time. As an alternative, a single light has been developed which appears to reduce the accident rate. It is mounted in the centre of the rear window at approximately eye level. In experimental studies on the road, the effect of the central braking light appears to be less when subjects are aware of the aim of the study, suggesting that stimulus identification in the traditional configuration improves when subjects focus on the task. Despite the positive effect of the isolated brake light, its identification might still be further improved by making the brake light more meaningful, giving it the form of an exclamation mark, “!”, or even an icon.
Absolute judgement
Very strict and often counterintuitive performance limits arise in cases of absolute judgement of physical dimensions. Examples occur in connection with colour coding of objects and the use of tones in auditory call systems. The point is that relative judgement is far superior to absolute judgement. The problem with absolute judgement is that the code has to be translated into another category. Thus a specific colour may be linked with an electrical resistance value, or a specific tone may be reserved for the person for whom an ensuing message is meant. In fact, therefore, the problem is not one of perceptual identification but rather of response choice, which will be discussed later in this article. At this point it suffices to remark that one should not use more than four or five colours or pitches so as to avoid errors. When more alternatives are needed, one may add extra dimensions, like loudness, duration and components of tones.
Word reading
The relevance of reading separate word units in traditional print is demonstrated by common experience: reading is much hampered when spaces are omitted, printing errors often remain undetected, and it is very hard to read words printed in alternating case (e.g., ALTeRnAtInG). Some investigators have emphasized the role of word shape in reading word units and suggested that spatial frequency analysers may be relevant in identifying word shape. In this view, meaning would be derived from total word shape rather than from letter-by-letter analysis. Yet the contribution of word-shape analysis is probably limited to small common words (articles and endings), which is consistent with the finding that printing errors in small words and endings have a relatively low probability of detection.
Text in lower case has an advantage over upper case which is due to a loss of features in the upper case. Yet, the advantage of lower case words is absent or may even be reversed when searching for a single word. It could be that factors of letter size and letter case are confounded in searching: Larger-sized letters are detected more rapidly, which may offset the disadvantage of less distinctive features. Thus, a single word may be about equally legible in upper case as in lower case, while continuous text is read faster in lower case. Detecting a SINGLE capital word amidst many lower case words is very efficient, since it evokes pop-out. An even more efficient fast detection can be achieved by printing a single lower case word in bold, in which case the advantages of pop-out and of more distinctive features are combined.
The role of encoding features in reading is also clear from the impaired legibility of older low-resolution visual display unit screens, which consisted of fairly rough dot matrices and could portray alphanumericals only as straight lines. The common finding was that reading text or searching from a low-resolution monitor was considerably slower than from a paper-printed copy. The problem has largely disappeared with the present-day higher-resolution screens. Besides letter form there are a number of additional differences between reading from paper and reading from a screen. The spacing of the lines, the size of the characters, the type face, the contrast ratio between characters and background, the viewing distance, the amount of flicker and the fact that changing pages on a screen is done by scrolling are some examples. The common finding that reading is slower from computer screens—although comprehension seems about equal—may be due to some combination of these factors. Present-day text processors usually offer a variety of options in font, size, colour, format and style; such choices could give the false impression that personal taste is the major reason.
Icons versus words
In some studies, subjects were found to name a printed word faster than a corresponding icon, while in other studies both were named about equally fast. It has been suggested that words are read faster than icons since they are less ambiguous. Even a fairly simple icon, like a house, may still elicit different responses among subjects, resulting in response conflict and, hence, a decrease in reaction speed. If response conflict is avoided by using really unambiguous icons, the difference in response speed is likely to disappear. It is interesting to note that as traffic signs, icons are usually much superior to words, even where understanding the language is not a problem. This paradox may be due to the fact that the legibility of traffic signs is largely a matter of the distance at which a sign can be identified. If properly designed, this distance is larger for symbols than for words, since pictures can provide considerably larger differences in shape and contain less fine detail than words. The advantage of pictures, then, arises from the fact that discrimination of letters requires some ten to twelve minutes of arc and that feature detection is the initial prerequisite for discrimination. At the same time it is clear that the superiority of symbols is only guaranteed when (1) they do indeed contain little detail, (2) they are sufficiently distinct in shape and (3) they are unambiguous.
Capabilities and Limits for Decision
Once a percept has been identified and interpreted, it may call for an action. In this context the discussion will be limited to deterministic stimulus-response relations, or, in other words, to conditions in which each stimulus has its own fixed response. In that case the major problems for equipment design arise from issues of compatibility, that is, the extent to which the identified stimulus and its related response have a “natural” or well-practised relationship. There are conditions in which an optimal relation is intentionally abandoned, as in the case of abbreviations. Usually a contraction like abrvtin is much worse than a truncation like abbrev. Theoretically, this is due to the increasing redundancy of successive letters in a word, which allows one to “fill out” final letters on the basis of earlier ones; a truncated word can profit from this principle while a contracted one cannot.
Mental models and compatibility
In most compatibility problems there are stereotypical responses derived from generalized mental models. Choosing the null position in a circular display is a case in point. The 12 o’clock and 9 o’clock positions appear to be corrected faster than the 6 o’clock and 3 o’clock positions. The reason may be that a clockwise deviation and a movement in the upper part in the display are experienced as “increases” requiring a response that reduces the value. In the 3 and 6 o’clock positions both principles conflict and they may therefore be handled less efficiently. A similar stereotype is found in locking or opening the rear door of a car. Most people act on the stereotype that locking requires a clockwise movement. If the lock is designed in the opposite way, continuous errors and frustration in trying to lock the door are the most likely result.
With respect to control movements, the well-known principle of Warrick describes the compatibility relation between the location of a control knob and the direction of movement on a display. If the control knob is located to the right of the display, a clockwise movement is supposed to move the scale marker up. Or consider moving-window displays. According to most people’s mental model, the upward direction of a moving display suggests that the values go up, in the same way in which a rising temperature in a thermometer is indicated by a higher mercury column. There are problems in implementing this principle with a “fixed pointer-moving scale” indicator. When the scale in such an indicator moves down, its value is intended to increase, which conflicts with the common stereotype. If the values are inverted, the low values are at the top of the scale, which is also contrary to most stereotypes.
The term proximity compatibility refers to the correspondence of symbolic representations to people’s mental models of functional or even spatial relationships within a system. Issues of proximity compatibility are more pressing as the mental model of a situation is more primitive, global or distorted. Thus, a flow diagram of a complex automated industrial process is often displayed on the basis of a technical model which may not correspond at all with the mental model of the process. In particular, when the mental model of a process is incomplete or distorted, a technical representation of the progress adds little to develop or correct it. A daily-life example of poor proximity compatibility is an architectural map of a building that is intended for viewer orientation or for showing fire escape routes. These maps are usually entirely inadequate—full of irrelevant details—in particular for people who have only a global mental model of the building. Such convergence between map reading and orientation comes close to what has been called “situational awareness”, which is particularly relevant in three-dimensional space during an air flight. There have been interesting recent developments in three-dimensional object displays, representing attempts to achieve optimal proximity compatibility in this domain.
Stimulus-response compatibility
An example of stimulus-response (S-R) compatibility is typically found in the case of most text processing programs, which assume that operators know how commands correspond to specific key combinations. The problem is that a command and its corresponding key combination usually fail to have any pre-existing relation, which means that the S-R relations must be learned by a painstaking process of paired-associate learning. The result is that, even after the skill has been acquired, the task remains error-prone. The internal model of the program remains incomplete since less practised operations are liable to be forgotten, so that the operator can simply not come up with the appropriate response. Also, the text produced on the screen usually does not correspond in all respects to what finally appears on the printed page, which is another example of inferior proximity compatibility. Only a few programs utilize a stereotypical spatial internal model in connection with stimulus-response relations for controlling commands.
It has been correctly argued that there are much better pre-existing relations between spatial stimuli and manual responses—like the relation between a pointing response and a spatial location, or like that between verbal stimuli and vocal responses. There is ample evidence that spatial and verbal representations are relatively separate cognitive categories with little mutual interference but also with little mutual correspondence. Hence, a spatial task, like formatting a text, is most easily performed by spatial mouse-type movement, thus leaving the keyboard for verbal commands.
This does not mean that the keyboard is ideal for carrying out verbal commands. Typing remains a matter of manually operating arbitrary spatial locations which are basically incompatible with processing letters. It is actually another example of a highly incompatible task which is only mastered by extensive practice, and the skill is easily lost without continuous practice. A similar argument can be made for shorthand writing, which also consists of connecting arbitrary written symbols to verbal stimuli. An interesting example of an alternative method of keyboard operation is the chording keyboard.
The operator handles two keyboards (one for the left and one for the right hand), both consisting of six keys. Each letter of the alphabet corresponds to a chording response, that is, a combination of keys. The results of studies on such a keyboard showed striking savings in the time needed for acquiring typing skills. Motor limitations cap the maximal speed of the chording technique but, still, once learned, operator performance approaches the speed of the conventional technique quite closely.
A classical example of a spatial compatibility effect concerns the traditional arrangement of stove burner controls: four burners in a 2 × 2 matrix, with the controls in a horizontal row. In this configuration, the relations between burner and control are not obvious and are poorly learned. However, despite many errors, the problem of lighting the stove, given time, can usually be solved. The situation is worse when one is faced with undefined display-control relations. Other examples of poor S-R compatibility are found in the display-control relations of video cameras, video recorders and television sets. The effect is that many options are never used or must be studied anew at each new trial. The claim that “it is all explained in the manual”, while true, is not useful since, in practice, most manuals are incomprehensible to the average user, in particular when they attempt to describe actions using incompatible verbal terms.
Stimulus-stimulus (S-S) and response-response (R-R) compatibility
Originally S-S and R-R compatibility were distinguished from S-R compatibility. A classical illustration of S-S compatibility concerns attempts in the late forties to support auditory sonar by a visual display in an effort to enhance signal detection. One solution was sought in a horizontal light beam with vertical perturbations travelling from left to right and reflecting a visual translation of the auditory background noise and potential signal. A signal consisted of a slightly larger vertical perturbation. The experiments showed that a combination of the auditory and visual displays did not do better than the single auditory display. The reason was sought in a poor S-S compatibility: the auditory signal is perceived as a loudness change; hence visual support should correspond most when provided in the form of a brightness change, since that is the compatible visual analogue of a loudness change.
It is of interest that the degree of S-S compatibility corresponds directly to how skilled subjects are in cross-modality matching. In a cross-modality match, subjects may be asked to indicate which auditory loudness corresponds to a certain brightness or to a certain weight; this approach has been popular in research on scaling sensory dimensions, since it allows one to avoid mapping sensory stimuli to numerals. R-R compatibility refers to correspondence of simultaneous and also of successive movements. Some movements are more easily coordinated than others, which provides clear constraints for the way a succession of actions—for example, successive operation of controls—is most efficiently done.
The above examples show clearly how compatibility issues pervade all user-machine interfaces. The problem is that the effects of poor compatibility are often softened by extended practice and so may remain unnoticed or underestimated. Yet, even when incompatible display-control relations are well-practised and do not seem to affect performance, there remains the point of a larger error probability. The incorrect compatible response remains a competitor for the correct incompatible one and is likely to come through on occasion, with the obvious risk of an accident. In addition, the amount of practice required for mastering incompatible S-R relations is formidable and a waste of time.
Limits of Motor Programming and Execution
One limit in motor programming was already briefly touched upon in the remarks on R-R compatibility. The human operator has clear problems in carrying out incongruent movement sequences and, in particular, changing from one incongruent sequence to another is hard to accomplish. The results of studies on motor coordination are relevant to the design of controls in which both hands are active. Yet practice can overcome much in this regard, as is clear from the surprising levels of acrobatic skills.
Many common principles in the design of controls derive from motor programming. They include the incorporation of resistance in a control and the provision of feedback indicating that it has been properly operated. A preparatory motor state is a highly relevant determinant of reaction time. Reacting to an unexpected sudden stimulus may take an additional second or so, which is considerable when a fast reaction is needed, as in reacting to a lead car’s brake light. Unprepared reactions are probably a main cause of chain collisions, and early warning signals are beneficial in preventing them. A major application of research on movement execution concerns Fitts’ law, which relates movement time to the distance moved and the size of the target aimed at. This law appears to be quite general, applying equally to an operating lever, a joystick, a mouse or a light pen. Among other things, it has been applied to estimate the time needed to make corrections on computer screens.
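Fitts’ law is usually written MT = a + b·log2(2D/W), where D is the distance to the target, W is the target width, and a and b are constants fitted to data for a given device and task. The following minimal sketch illustrates the relationship; the function name and the default constants are purely illustrative assumptions, not values from the source.

```python
import math

def fitts_movement_time(distance, target_width, a=0.1, b=0.15):
    """Predicted movement time (s) by Fitts' law: MT = a + b * log2(2D / W).
    The constants a and b must be fitted experimentally for each device;
    the defaults here are illustrative only."""
    index_of_difficulty = math.log2(2 * distance / target_width)  # in bits
    return a + b * index_of_difficulty

# Halving the target width (or doubling the distance) adds one bit of
# difficulty and therefore a fixed increment b to the movement time.
print(fitts_movement_time(distance=0.20, target_width=0.01))  # about 0.9 s
```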
There is obviously much more to say than the above sketchy remarks. For instance, the discussion has been almost fully limited to issues of information flow on the level of a simple choice reaction. Issues beyond choice reactions have not been touched upon, nor problems of feedback and feed forward in the ongoing monitoring of information and motor activity. Many of the issues mentioned bear a strong relation to problems of memory and of planning of behaviour, which have not been addressed either. More extensive discussions are found in Wickens (1992), for example.
Head Injuries
Head injuries are fairly common in industry and account for 3 to 6% of all industrial injuries in industrialized countries. They are often severe and result in an average lost time of about three weeks. The injuries sustained are generally the result of blows caused by the impact of angular objects such as tools or bolts falling from a height of several metres; in other cases, workers may strike their heads in a fall to a floor or suffer a collision between some fixed object and their heads.
A number of different types of injury have been recorded.
Understanding the physical parameters that account for these various types of injury is difficult, although of fundamental importance, and there is considerable disagreement in the extensive literature published on this subject. Some specialists consider that the force involved is the principal factor to be considered, while others claim that it is a matter of energy, or of the quantity of movement; further opinions relate the brain injury to acceleration, to acceleration rate, or to a specific shock index such as HIC, GSI, WSTC. In most cases, each one of these factors is likely to be involved to a greater or lesser extent. It may be concluded that our knowledge of the mechanisms of shocks to the head is still only partial and controversial. The shock tolerance of the head is determined by means of experimentation on cadavers or on animals, and it is not easy to extrapolate these values to a living human subject.
On the basis of the results of analyses of accidents sustained by building workers wearing safety helmets, however, it seems that head injuries due to shocks occur when the quantity of energy involved in the shock is in excess of about 100 J.
Other types of injuries are less frequent but should not be overlooked. They include burns resulting from splashes of hot or corrosive liquids or molten material, or electrical shocks resulting from accidental contact of the head with exposed conductive parts.
Safety Helmets
The chief purpose of a safety helmet is to protect the wearer’s head against mechanical shocks. It may in addition provide protection against other hazards, for example thermal and electrical ones.
A safety helmet should fulfil the following requirements in order to reduce the harmful effects of shocks to the head: it should deflect falling objects by means of a suitably smooth and rounded shell, spread the load of a blow over as large an area of the skull as possible by means of its harness, and absorb part of the impact energy through deformation of the shell and harness.
Figure 1. Example of essential elements of safety helmet construction
Other requirements may apply to helmets used for particular tasks. These include protection against splashes of molten metal in the iron and steel industry and protection against electrical shock by direct contact in the case of helmets used by electrical technicians.
Materials used in the manufacture of helmets and harnesses should retain their protective qualities over a long period of time and under all foreseeable climatic conditions, including sun, rain, heat, below-freezing temperatures and so on. Helmets should also have fairly good resistance to flame and should not break if dropped onto a hard surface from a height of a few metres.
Performance Tests
ISO International Standard No. 3873-1977 was published in 1977 as a result of the work of the subcommittee dealing especially with “industrial safety helmets”. This standard, approved by practically all the member states of the ISO, sets out the essential features required of a safety helmet together with the related testing methods. These tests may be divided into two groups (see table 1), namely, obligatory tests and optional tests.
Table 1. Safety helmets: testing requirements of ISO Standard 3873-1977
Characteristic | Description | Criteria
Obligatory tests | |
Absorption of shocks | A hemispherical mass of 5 kg is allowed to fall from a height of 1 m onto the top of the helmet. The test is repeated on helmets conditioned at –10 °C, at +50 °C and under wet conditions. | The maximum force transmitted should not exceed 500 daN.
Resistance to penetration | The helmet is struck within a zone of 100 mm in diameter centred on its uppermost point, using a conical punch weighing 3 kg with a tip angle of 60°. The test is performed under the conditions which gave the worst results in the shock test. | The tip of the punch must not come into contact with the false (dummy) head.
Resistance to flame | The helmet is exposed for 10 s to a propane Bunsen-burner flame 10 mm in diameter. | The outer shell should not continue to burn for more than 5 s after it has been withdrawn from the flame.
Optional tests | |
Dielectric strength | The helmet is filled with a solution of NaCl and is itself immersed in a bath of the same solution. The electric leakage under an applied voltage of 1,200 V, 50 Hz is measured. | The leakage current should not be greater than 1.2 mA.
Lateral rigidity | The helmet is placed sideways between two parallel plates and subjected to a compressive force of 430 N. | The deformation under load should not exceed 40 mm, and the permanent deformation should not be more than 15 mm.
Low-temperature test | The helmet is subjected to the shock and penetration tests at a temperature of –20 °C. | The helmet must fulfil the foregoing requirements for these two tests.
The resistance to ageing of the plastic materials used in the manufacture of helmets is not specified in ISO No. 3873-1977. Such a specification should be required for helmets made out of plastic materials. A simple test consists in exposing the helmets to a high-pressure, quartz-envelope 450 watt xenon lamp over a period of 400 hours at a distance of 15 cm, followed by a check to ensure that the helmet can still withstand the appropriate penetration test.
It is recommended that helmets intended for use in the iron and steel industry be subjected to a test for resistance to splashes of molten metal. A quick way of carrying out this test is to allow 300 grams of molten metal at 1,300°C to drop onto the top of a helmet and to check that none has passed through to the interior.
The European Standard EN 397 adopted in 1995 specifies requirements and test methods for these two important characteristics.
Selection of a Safety Helmet
The ideal helmet providing protection and perfect comfort in every situation has yet to be designed. Protection and comfort are indeed often conflicting requirements. As regards protection, in selecting a helmet, the hazards against which protection is required and the conditions under which the helmet will be used must be considered with specific attention to the characteristics of the available safety products.
General considerations
It is advisable to choose helmets complying with the recommendations of ISO Standard No. 3873 (or its equivalent). The European Standard EN 397-1993 is used as a reference for the certification of helmets in application of the 89/686/EEC directive: equipment undergoing such certification, as is the case with almost all personal protective equipment, is subject to mandatory third-party certification before being placed on the European market. In any case, helmets should meet the requirements of these standards.
Special considerations
Helmets made of light alloys or having a brim along the sides should not be used in workplaces where there is a hazard of molten metal splashes. In such cases, the use of polyester–glass fibre, phenol textile, polycarbonate–glass fibre or polycarbonate helmets is recommended.
Where there is a hazard of contact with exposed conductive parts, only helmets made of thermoplastic material should be used. They should not have ventilation holes and no metal parts such as rivets should appear on the outside of the shell.
Helmets for persons working overhead, particularly steel framework erectors, should be provided with chin straps. The straps should be about 20 mm in width and should be such that the helmet is held firmly in place at all times.
Helmets made largely of polyethene are not recommended for use at high temperatures. In such cases, polycarbonate, polycarbonate–glass fibre, phenol textile, or polyester–glass fibre helmets are more suitable. The harness should be made of woven fabric. Where there is no hazard of contact with exposed conductive parts, ventilation holes in the helmet shell may be provided.
Situations where there is a crushing hazard call for helmets made of glass–fibre reinforced polyester or polycarbonate having a rim with a width of not less than 15 mm.
Comfort considerations
In addition to safety, consideration should also be given to the physiological aspects of comfort for the wearer.
The helmet should be as light as possible, certainly not more than 400 grams in weight. Its harness should be flexible and permeable to liquid and should not irritate or injure the wearer; for this reason, harnesses of woven fabric are to be preferred to those made of polyethene. A full or half leather sweatband should be incorporated, not only to provide sweat absorption but also to reduce skin irritation; it should be replaced several times during the life of the helmet for reasons of hygiene. To ensure better thermal comfort, the shell should be of a light colour and have ventilation holes with a total area of 150 to 450 mm². Careful adjustment of the helmet to fit the wearer is necessary in order to ensure its stability and to prevent it from slipping and reducing the field of vision.

Various helmet shapes are available, the most common being the “cap” shape with a peak and a brim around the sides; for work in quarries and on demolition sites, the “hat” type of helmet with a wider brim provides better protection. A “skull-cap” shaped helmet without a peak or a brim is particularly suitable for persons working overhead, as this pattern precludes a possible loss of balance caused by the peak or brim coming into contact with joists or girders among which the worker may have to move.
Accessories and Other Protective Headgear
Helmets may be fitted with eye or face shields made of plastic material, metallic mesh or optical filters; hearing protectors, chin straps and nape straps to keep the helmet firmly in position; and woollen neck protectors or hoods against wind or cold (figure 2). For use in mines and underground quarries, attachments for a headlamp and a cable holder are fitted.
Figure 2. Example of safety helmet with chin strap (a), optical filter (b) and woolen neck protector against wind and cold (c)
Other types of protective headgear include those designed for protection against dirt, dust, scratches and bumps. Sometimes known as “bump caps,” these are made of light plastic material or linen. For persons working near machine tools such as drills, lathes, spooling machines and so forth, where there is a risk of the hair being caught, linen caps with a net, peaked hair nets or even scarves or turbans may be used, provided that they have no exposed loose ends.
Hygiene and Maintenance
All protective headgear should be cleaned and checked regularly. If splits or cracks appear, or if a helmet shows signs of ageing or deterioration of the harness, the helmet should be discarded. Cleaning and disinfection are particularly important if the wearer sweats excessively or if more than one person shares the same headgear.
Substances adhering to a helmet such as chalk, cement, glue or resin may be removed mechanically or by using an appropriate solvent that does not attack the shell material. Warm water with a detergent may be used with a hard brush.
For disinfecting headgear, articles should be dipped into a suitable disinfecting solution such as a 5% formalin solution or a sodium hypochlorite solution.
Hearing Protectors
No one knows when people first discovered that covering the ears with the flats of the hands or plugging up the ear canals with one’s fingers was effective in reducing the level of unwanted sound (noise), but the basic technique has been in use for generations as the last line of defence against loud sound. Unfortunately, this level of technology precludes the use of the hands for most other tasks. Hearing protectors, an obvious solution to the problem, are a form of noise control in that they block the path of the noise from the source to the ear. They come in various forms, as depicted in figure 1.
Figure 1. Examples of different types of hearing protectors
An earplug is a device worn in the external ear canal. Premolded earplugs are available in one or more standard sizes intended to fit into the ear canals of most people. A formable, user-molded earplug is made of a pliable material that is shaped by the wearer to fit into the ear canal to form an acoustic seal. A custom-molded earplug is individually made to fit the particular ear of the wearer. Earplugs can be made from vinyl, silicone, elastomer formulations, cotton and wax, spun glass wool, and slow-recovery closed-cell foam.
A semi-insert earplug, also called an ear-canal cap, is worn against the opening of the external ear canal: the effect is similar to plugging one’s ear canal with a fingertip. Semi-insert devices are manufactured in one size and are designed to fit most ears. This sort of device is held in place by a lightweight headband with mild tension.
An earmuff is a device composed of a headband and two circumaural cups that are usually made of plastic. The headband may be made of metal or plastic. The circumaural ear cup completely encloses the outer ear and seals against the side of the head with a cushion. The cushion may be made of foam or it may be filled with fluid. Most earmuffs have a lining inside the ear cup to absorb the sound that is transmitted through the shell of the ear cup in order to improve the attenuation above approximately 2,000 Hz. Some earmuffs are designed so that the headband may be worn over the head, behind the neck or under the chin, although the amount of protection they afford may be different for each headband position. Other earmuffs are designed to fit on “hard hats.” These may offer less protection because the hard-hat attachment makes it more difficult to adjust the earmuff and they do not fit as wide a range of head sizes as do those with headbands.
In the United States there are 53 manufacturers and distributors of hearing protectors who, as of July 1994, sold 86 models of earplugs, 138 models of earmuffs, and 17 models of semi-insert hearing protectors. In spite of the diversity of hearing protectors, foam earplugs designed for one-time use account for more than half of the hearing protectors in use in the United States.
Last line of defence
The most effective way to avoid noise-induced hearing loss is to stay out of hazardous noise areas. In many work settings it is possible to redesign the manufacturing process so that operators work in enclosed, sound-attenuating control rooms. The noise is reduced in these control rooms to the point where it is not hazardous and where speech communication is not impaired. The next most effective way to avoid noise-induced hearing loss is to reduce the noise at the source so that it is no longer hazardous. This is often done by designing quiet equipment or retrofitting noise control devices to existing equipment.
When it is not possible to avoid the noise or to reduce it at the source, hearing protection becomes the last resort. As the last line of defence, with no backup, its effectiveness is easily compromised.
One of the ways the effectiveness of hearing protectors is diminished is by wearing them less than 100% of the time. Figure 2 shows what happens: no matter how much protection the design affords, effective protection falls as the percentage of wearing time decreases. Wearers who remove an earplug or lift an earmuff to talk with fellow workers in noisy environments can severely reduce the amount of protection they receive.
Figure 2. Decrease in effective protection as time of non-use during an 8-hour day increases (based on 3-dB exchange rate)
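The curve in figure 2 follows from simple energy averaging, which is what a 3-dB exchange rate implies. A minimal sketch of the computation (the function name and example numbers are illustrative assumptions):

```python
import math

def effective_attenuation_db(nominal_db, fraction_worn):
    """Effective attenuation over a work shift when a protector giving
    nominal_db of attenuation is worn only fraction_worn of the time,
    assuming a 3-dB exchange rate (equal-energy averaging)."""
    energy_while_worn = fraction_worn * 10 ** (-nominal_db / 10)
    energy_while_off = 1.0 - fraction_worn
    return -10 * math.log10(energy_while_worn + energy_while_off)

# A protector giving 30 dB, worn for only 7 of 8 hours, delivers about
# 9 dB of effective protection over the whole shift.
print(round(effective_attenuation_db(30.0, 7 / 8), 1))
```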
The Rating Systems and How to Use Them
There are many ways to rate hearing protectors. The most common are the single-number systems, such as the Noise Reduction Rating (NRR) (EPA 1979) used in the United States and the Single Number Rating (SNR) used in Europe (ISO 1994). Another European rating method is the HML (ISO 1994), which uses three numbers to rate protectors. Finally, there are methods based on the attenuation of the hearing protector in each octave band, called the long or octave-band method in the United States and the assumed protection value method in Europe (ISO 1994).
All of these methods use the real-ear attenuation at threshold values of the hearing protectors as determined in laboratories according to relevant standards. In the United States, attenuation testing is done in accordance with ANSI S3.19, Method for the Measurement of Real-Ear Protection of Hearing Protectors and Physical Attenuation of Earmuffs (ANSI 1974). Although this standard has been replaced by a newer one (ANSI 1984), the US Environmental Protection Agency (EPA) controls the NRR on hearing protector labels and requires the older standard to be used. In Europe attenuation testing is done in accordance with ISO 4869-1 (ISO 1990).
In general, the laboratory methods require that sound-field hearing thresholds be determined both with the protectors fitted and with the ears open. In the United States the hearing protector must be fitted by the experimenter, while in Europe the subject, assisted by the experimenter, performs this task. The difference between the protectors-fitted and ears-open sound field thresholds is the real-ear attenuation at threshold. Data are collected for a group of subjects, presently ten in the United States with three trials each and 16 in Europe with one trial each. The average attenuation and associated standard deviations are calculated for each octave band tested.
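The band-by-band statistics behind the tables below are straightforward to compute. A minimal sketch with invented threshold data (the numbers are hypothetical; a real test uses ten subjects with three trials each in the United States, or 16 subjects with one trial each in Europe):

```python
from statistics import mean, stdev

# Hypothetical sound-field thresholds (dB) for one octave band:
# each pair is (ears-open threshold, protector-fitted threshold).
trials = [(5.0, 33.0), (7.0, 34.5), (4.0, 31.0), (6.0, 35.5)]

# Real-ear attenuation at threshold (REAT) for each trial.
reat = [fitted - open_ears for open_ears, fitted in trials]

print(f"mean attenuation = {mean(reat):.1f} dB")     # per-band attenuation entry
print(f"standard deviation = {stdev(reat):.1f} dB")  # doubled in the NRR calculation
```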
For purposes of discussion, the NRR method and the long method are described and illustrated in table 1.
Table 1. Example calculation of the Noise Reduction Rating (NRR) of a hearing protector
Procedure:
Steps | 125 Hz | 250 Hz | 500 Hz | 1000 Hz | 2000 Hz | 4000 Hz | 8000 Hz | dBX
1. Assumed octave-band level of noise | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
2. C-weighting correction | –0.2 | 0.0 | 0.0 | 0.0 | –0.2 | –0.8 | –3.0 |
3. C-weighted octave-band levels | 99.8 | 100.0 | 100.0 | 100.0 | 99.8 | 99.2 | 97.0 | 107.9 dBC
4. A-weighting correction | –16.1 | –8.6 | –3.2 | 0.0 | +1.2 | +1.0 | –1.1 |
5. A-weighted octave-band levels | 83.9 | 91.4 | 96.8 | 100.0 | 101.2 | 101.0 | 98.9 |
6. Attenuation of hearing protector | 27.4 | 26.6 | 27.5 | 27.0 | 32.0 | 46.0¹ | 44.2² |
7. Standard deviation × 2 | 7.8 | 8.4 | 9.4 | 6.8 | 8.8 | 7.3³ | 12.8⁴ |
8. Estimated protected A-weighted octave-band levels (step 5 – step 6 + step 7) | 64.3 | 73.2 | 78.7 | 79.8 | 78.0 | 62.3 | 67.5 | 84.2 dBA
9. NRR = 107.9 – 84.2 – 3 = 20.7 (step 3 – step 8 – 3 dB⁵)
¹ Mean attenuation at 3000 and 4000 Hz.
² Mean attenuation at 6000 and 8000 Hz.
³ Sum of standard deviations at 3000 and 4000 Hz.
⁴ Sum of standard deviations at 6000 and 8000 Hz.
⁵ The 3-dB correction factor is intended to account for spectrum uncertainty, in that the noise in which the hearing protector is to be worn may deviate from the pink-noise spectrum used to calculate the NRR.
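The nine steps of table 1 reduce to a few lines of arithmetic. The following sketch reproduces the table’s result; the helper name db_sum is an assumption of this illustration, while the weighting, attenuation and standard-deviation data are those of the table.

```python
import math

def db_sum(levels):
    """Logarithmic (energy) sum of band levels in dB."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels))

# Octave bands: 125, 250, 500, 1000, 2000, 4000, 8000 Hz
c_weight = [-0.2, 0.0, 0.0, 0.0, -0.2, -0.8, -3.0]
a_weight = [-16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]
attenuation = [27.4, 26.6, 27.5, 27.0, 32.0, 46.0, 44.2]
two_sd = [7.8, 8.4, 9.4, 6.8, 8.8, 7.3, 12.8]

assumed = [100.0] * 7                                 # steps 1-2: pink noise, 100 dB per band
unprotected_c = db_sum(n + c for n, c in zip(assumed, c_weight))      # step 3: 107.9 dBC

# Steps 4-8: A-weight, subtract attenuation, add back two standard deviations.
protected_a = db_sum(n + a - att + sd for n, a, att, sd
                     in zip(assumed, a_weight, attenuation, two_sd))  # 84.2 dBA

nrr = unprotected_c - protected_a - 3.0               # step 9: 3-dB spectral safety factor
print(f"NRR = {nrr:.1f}")                             # 20.7
```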
The NRR may be used to determine the protected noise level, that is, the effective A-weighted sound pressure level at the ear, by subtracting it from the C-weighted environmental noise level. Thus, if the C-weighted environmental noise level were 100 dBC and the NRR of the protector were 21 dB, the protected noise level would be 79 dBA (100 – 21 = 79). If only the A-weighted environmental noise level is known, a 7-dB correction is used (Franks, Themann and Sherris 1995). So, if the A-weighted noise level were 103 dBA, the protected noise level would be 89 dBA (103 – [21 – 7] = 89).
The long method requires that the octave-band environmental noise levels be known; there is no shortcut. Many modern sound level meters can simultaneously measure octave-band, C-weighted and A-weighted environmental noise levels. However, no dosimeters currently provide octave-band data. Calculation by the long method is described below and shown in table 2.
Table 2. Example of the long method for computing the A-weighted noise reduction for a hearing protector in a known environmental noise

| Steps | 125 Hz | 250 Hz | 500 Hz | 1000 Hz | 2000 Hz | 4000 Hz | 8000 Hz | dBA |
|-------|--------|--------|--------|---------|---------|---------|---------|-----|
| 1. Measured octave-band levels of noise | 85.0 | 87.0 | 90.0 | 90.0 | 85.0 | 82.0 | 80.0 | |
| 2. A-weighting correction | –16.1 | –8.6 | –3.2 | 0.0 | +1.2 | +1.0 | –1.1 | |
| 3. A-weighted octave-band levels | 68.9 | 78.4 | 86.8 | 90.0 | 86.2 | 83.0 | 78.9 | 93.5 |
| 4. Attenuation of hearing protector | 27.4 | 26.6 | 27.5 | 27.0 | 32.0 | 46.0¹ | 44.2² | |
| 5. Standard deviation × 2 | 7.8 | 8.4 | 9.4 | 6.8 | 8.8 | 7.3³ | 12.8⁴ | |
| 6. Estimated protected A-weighted octave-band levels | 49.3 | 60.2 | 68.7 | 69.8 | 63.0 | 44.3 | 47.5 | 73.0 |

¹ Mean attenuation at 3000 and 4000 Hz.
² Mean attenuation at 6000 and 8000 Hz.
³ Sum of standard deviations at 3000 and 4000 Hz.
⁴ Sum of standard deviations at 6000 and 8000 Hz.
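The same worksheet logic can be sketched in Python (again, function names are ours; the decibel-summing helper repeats the one used for the NRR above):

```python
import math

A_WEIGHT = [-16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]   # 125 Hz ... 8000 Hz

def db_sum(levels):
    """Logarithmically sum a list of decibel levels."""
    return 10.0 * math.log10(sum(10.0 ** (lv / 10.0) for lv in levels))

def long_method(octave_levels, attenuation, two_sd):
    """Protected A-weighted level at the ear from measured octave-band
    noise levels (steps 1-6 of table 2)."""
    a_levels = [n + a for n, a in zip(octave_levels, A_WEIGHT)]   # steps 2-3
    protected = [l - att + sd                                     # step 6
                 for l, att, sd in zip(a_levels, attenuation, two_sd)]
    return db_sum(protected)

noise = [85.0, 87.0, 90.0, 90.0, 85.0, 82.0, 80.0]    # step 1 of table 2
attenuation = [27.4, 26.6, 27.5, 27.0, 32.0, 46.0, 44.2]
two_sd = [7.8, 8.4, 9.4, 6.8, 8.8, 7.3, 12.8]
print(round(long_method(noise, attenuation, two_sd), 1))   # approx. 73.0 dBA
```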
The subtractive standard-deviation corrections in the long method and in the NRR computation use the laboratory variability measurements to adjust the protection estimates so that they correspond to the values expected for most users who wear the hearing protector under conditions identical to those of the test (98% of users with a 2-standard-deviation correction, or 84% with a 1-standard-deviation correction). The appropriateness of this adjustment is, of course, heavily dependent upon the validity of the laboratory-estimated standard deviations.
Comparison of the long method and the NRR
The long method and the NRR computations may be compared by subtracting the NRR (20.7) from the C-weighted sound pressure level for the spectrum in table 2 (95.2 dBC) to predict the effective level when the hearing protector is worn, namely 74.5 dBA. This compares favourably to the value of 73.0 dBA derived from the long method in table 2. Part of the disparity between the two estimates is due to the approximate 3-dB spectral safety factor incorporated in step 9 of table 1. The spectral safety factor is intended to account for errors arising from the use of an assumed noise spectrum instead of the actual one. Depending upon the slope of the spectrum and the shape of the attenuation curve of the hearing protector, the differences between the two methods may be greater than shown in this example.
Reliability of test data
It is unfortunate that the attenuation values and their standard deviations as obtained in laboratories in the United States, and to a lesser extent in Europe, are not representative of those obtained by everyday wearers. Berger, Franks and Lindgren (1996) reviewed 22 real-world studies of hearing protectors and found that the US laboratory values reported on the EPA-required label overestimated protection by 140% to almost 2,000%. The overestimation was greatest for earplugs and least for earmuffs. Since 1987, the US Occupational Safety and Health Administration (OSHA) has recommended that the NRR be derated by 50% before noise levels under the hearing protector are calculated. In 1995, the US National Institute for Occupational Safety and Health (NIOSH) recommended that the NRR be derated by 25% for earmuffs, by 50% for formable earplugs and by 70% for premolded earplugs and semi-inserts before such calculations are made (Rosenstock 1995).
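A sketch of the two derating schemes just described (the percentages are those cited above; the dictionary keys and function names are our own labels):

```python
# NIOSH-recommended derating fractions by protector type (Rosenstock 1995);
# the dictionary keys are illustrative labels, not standard terminology.
NIOSH_DERATING = {
    "earmuff": 0.25,
    "formable_earplug": 0.50,
    "premolded_earplug": 0.70,
    "semi_insert": 0.70,
}

def derate_nrr_osha(nrr):
    """OSHA practice since 1987: derate the labelled NRR by 50%."""
    return nrr * 0.5

def derate_nrr_niosh(nrr, protector_type):
    """NIOSH 1995 recommendation: derate the labelled NRR by a
    type-specific percentage before estimating protected levels."""
    return nrr * (1.0 - NIOSH_DERATING[protector_type])

print(derate_nrr_niosh(20.7, "earmuff"))   # 20.7 * 0.75 = 15.525
```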
Intra- and inter-laboratory variability
Another consideration, though of less impact than the real-world issues noted above, is within-laboratory validity and variability, together with differences between facilities. Inter-laboratory variability can be substantial (Berger, Kerivan and Mintz 1982), affecting both the octave-band values and the computed NRRs, in absolute terms as well as in rank ordering. Therefore, even rank ordering of hearing protectors on the basis of attenuation values is, at present, best done only with data from a single laboratory.
Important Points for Selecting Protection
When a hearing protector is selected, there are several important points to be considered (Berger 1988). Foremost is that the protector will be adequate for the environmental noise in which it will be worn. The Hearing Conservation Amendment to the OSHA Noise Standard (1983) recommends that the noise level under the hearing protector be 85 dB or less. NIOSH has recommended that the noise level under the hearing protector be no higher than 82 dBA, so that risk of noise-induced hearing loss is minimal (Rosenstock 1995).
Second, the protector should not be overprotective. If the protected exposure level is more than 15 dB below the desired level, the hearing protector has too much attenuation and the wearer is considered overprotected, with the result that the wearer feels isolated from the environment (BSI 1994). It may then be difficult to hear speech and warning signals, and wearers will either temporarily remove the protector when they need to communicate (as mentioned above) or to verify warning signals, or they will modify the protector to reduce its attenuation. In either case, the protection will usually be reduced to the point that hearing loss is no longer being prevented.
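These two selection criteria can be combined into a simple screening check. The sketch below assumes NIOSH's 82-dBA ceiling as the target and the 15-dB overprotection margin from BSI 1994; the function and its messages are our illustration only:

```python
def check_protection(protected_dba, target_dba=82.0, margin_db=15.0):
    """Screen a protected exposure level against an exposure ceiling
    (NIOSH's 82 dBA here) and the BSI 1994 overprotection margin."""
    if protected_dba > target_dba:
        return "underprotected: more attenuation needed"
    if protected_dba < target_dba - margin_db:
        return "overprotected: wearer may feel isolated from the environment"
    return "acceptable"

print(check_protection(79.0))   # acceptable
print(check_protection(60.0))   # overprotected: wearer may feel isolated ...
```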
At present, accurate determination of protected noise levels is difficult since reported attenuations and standard deviations, along with their resultant NRRs, are inflated. However, using the derating factors recommended by NIOSH should improve the accuracy of such determinations in the short run.
Comfort is a critical issue. No hearing protector can be as comfortable as wearing none at all. Covering or occluding the ears produces many unnatural sensations, ranging from a change in the sound of one’s own voice due to the “occlusion effect” (see below) to a feeling of fullness in the ears or pressure on the head. Earmuffs and earplugs may also be uncomfortable in hot environments because of increased perspiration. Wearers need time to get used to the sensations caused by hearing protectors and to some of the discomfort. However, when wearers experience discomfort such as headache from headband pressure or pain in the ear canals from earplug insertion, they should be fitted with alternative devices.
If earmuffs or reusable earplugs are used, a means of keeping them clean should be provided. For earmuffs, wearers should have easy access to replaceable components such as ear cushions and ear-cup liners. Wearers of disposable earplugs should have ready access to a fresh supply. If earplugs are to be reused, wearers should have access to earplug-cleaning facilities. Wearers of custom-molded earplugs should have facilities to keep the earplugs clean, and access to new earplugs when theirs become damaged or worn out.
The average American worker is exposed to 2.7 occupational hazards on any given day (Luz et al. 1991). These hazards may require the use of other protective equipment such as “hard hats,” eye protection and respirators. It is important that any hearing protector selected be compatible with the other safety equipment that is required. The NIOSH Compendium of Hearing Protective Devices (Franks, Themann and Sherris 1995) includes tables that, among other things, list the compatibility of each hearing protector with other safety equipment.
The Occlusion Effect
The occlusion effect describes the increase in the efficiency with which bone-conducted sound is transmitted to the ear at frequencies below 2,000 Hz when the ear canal is sealed with a finger or an earplug, or is covered by an earmuff. The magnitude of the occlusion effect depends upon how the ear is occluded. The maximum occlusion effect occurs when the entrance to the ear canal is blocked. Earmuffs with large ear cups and earplugs that are deeply inserted cause less of an occlusion effect (Berger 1988). The occlusion effect often causes hearing protector wearers to object to wearing protection because they dislike the sound of their voices—louder, booming and muffled.
Communication Effects
Because of the occlusion effect that most hearing protectors cause, a wearer’s own voice sounds louder to the wearer; and since the protector also reduces the level of environmental noise, the voice stands out even more than when the ears are open. To compensate for this increased loudness of their own speech, most wearers lower their voices substantially and speak more softly. Lowering the voice in a noisy environment where the listener is also wearing hearing protection adds to the difficulty of communicating. Furthermore, even without an occlusion effect, most speakers raise their voices by only 5 to 6 dB for every 10-dB increase in environmental noise level (the Lombard effect). Thus, the lowered voice level caused by hearing protection, combined with an already inadequate rise in voice level relative to the environmental noise, seriously impairs the ability of hearing-protector wearers to hear and understand each other in noise.
The Operation of Hearing Protectors
Earmuffs
The basic function of earmuffs is to cover the outer ear with a cup that forms a noise-attenuating acoustic seal. The styles of the ear cup and the earmuff’s cushions, as well as the tension provided by the headband, determine, for the most part, how well the earmuff attenuates environmental noise. Figure 3 displays an example of a well-fitted earmuff with a good seal all around the outer ear, as well as an example of an earmuff with a leak underneath the cushion. The chart in figure 3 shows that while the tight-fitting earmuff has good attenuation at all frequencies, the one with a leak provides practically no low-frequency attenuation. Most earmuffs will provide attenuation approaching that of bone conduction, approximately 40 dB, for frequencies of 2,000 Hz and above. The low-frequency attenuation of a tightly fitting earmuff is determined by design features and materials, including ear-cup volume, the area of the ear-cup opening, headband force and mass.
Figure 3. Well-fitted and poorly fitted earmuffs and their attenuation consequences
Earplugs
Figure 4 displays an example of a well-fitted, fully inserted foam earplug (about 60% of it extends into the ear canal) and an example of a poorly fitted, shallowly inserted foam earplug that just caps the ear canal entrance. The well-fitted earplug has good attenuation at all frequencies. The poorly fitted foam earplug has substantially less attenuation. The foam earplug, when fitted properly, can provide attenuation approaching bone conduction at many frequencies. In high-level noise, the differences in attenuation between a well-fitted and a poorly fitted foam earplug can be sufficient to either prevent or permit noise-induced hearing loss.
Figure 4. A well-fitted and a poorly fitted foam earplug and the attenuation consequences
Figure 5 displays a well-fitted and a poorly fitted premolded earplug. In general, premolded earplugs do not provide the same degree of attenuation as properly fitted foam earplugs or earmuffs. However, the well-fitted premolded earplug provides adequate attenuation for most industrial noises. The poorly fitted premolded earplug provides substantially less attenuation, and none at all at 250 and 500 Hz. For some wearers there is actually gain at these frequencies, meaning that the protected noise level is higher than the environmental noise level, putting the wearer at greater risk of developing noise-induced hearing loss than if the protector were not worn at all.
Figure 5. A well-fitted and a poorly fitted premolded earplug
Dual hearing protection
For some environmental noises, especially when daily equivalent exposures exceed about 105 dBA, a single hearing protector may be insufficient. In such situations wearers can use earmuffs and earplugs in combination to achieve about 3 to 10 dB of extra protection, limited primarily by bone conduction through the wearer’s skull. Attenuation changes very little when different earmuffs are used with the same earplug, but changes greatly when different earplugs are used with the same earmuff. For dual protection, the choice of earplug is critical for attenuation below 2,000 Hz; at and above 2,000 Hz, essentially all earmuff/earplug combinations provide attenuation approximately equal to that of the skull’s bone-conduction pathways.
Interference from glasses and head-worn personal protective equipment
Safety glasses, or other devices such as respirators that interfere with the earmuff’s circumaural seal, can degrade earmuff attenuation. For example, eyewear can reduce attenuation in individual octave bands by 3 to 7 dB.
Flat-response devices
A flat-attenuation earmuff or earplug is one that provides approximately equal attenuation at all frequencies from 100 to 8,000 Hz. Such devices preserve the frequency balance of the unoccluded ear, providing undistorted audition of signals (Berger 1991). With a conventional earmuff or earplug, a signal may sound as if the treble has been turned down in addition to the overall lowering of the sound level; with a flat-attenuation device it will sound as if only the volume has been reduced, because the attenuation characteristics are “tuned” by the use of resonators, dampers and diaphragms. Flat-attenuation characteristics can be important for wearers with high-frequency hearing loss, for those who must understand speech while being protected, and for those for whom high-quality sound is important, such as musicians. Flat-attenuation devices are available as both earmuffs and earplugs. One drawback is that they do not provide as much attenuation as conventional earmuffs and earplugs.
Passive amplitude-sensitive devices
A passive amplitude-sensitive hearing protector has no electronics; it is designed to allow voice communication during quiet periods, providing little attenuation at low noise levels and increasing protection as the noise level rises. These devices contain orifices, valves or diaphragms intended to produce this nonlinear attenuation, which typically begins once sound levels exceed 120 dB sound pressure level (SPL). At sound levels below 120 dB SPL, orifice- and valve-type devices act as vented earmolds, providing as much as 25 dB of attenuation at the higher frequencies but very little at and below 1,000 Hz. Few occupational or recreational activities, other than shooting competitions (especially outdoors), produce noise for which this type of hearing protector can be expected to be truly effective in preventing noise-induced hearing loss.
Active amplitude-sensitive devices
An active amplitude-sensitive hearing protector has electronics, but design goals similar to those of the passive amplitude-sensitive protector. These systems employ a microphone placed on the exterior of the ear cup or ported to the lateral surface of the earplug. The electronic circuit provides less and less amplification, or in some cases shuts down completely, as the environmental noise level increases. At the levels of normal conversational speech, these devices provide unity gain (the loudness of speech is the same as if the protector were not worn) or even a small amount of amplification. The goal is to keep the sound level under the earmuff or earplug below an 85-dBA diffuse-field equivalent. Some earmuff units have a separate channel for each ear, allowing some degree of localization to be maintained; others have only one microphone. The fidelity (naturalness) of these systems varies among manufacturers. Because of the electronics package built into the ear cup, which is necessary for an active level-dependent system, these devices provide about 4 to 6 dB less attenuation in their passive state (electronics turned off) than similar earmuffs without the electronics.
Active noise reduction
Active noise reduction, while an old concept, is a relatively new development for hearing protectors. Some units work by capturing the sound inside the ear cup, inverting its phase, and retransmitting the inverted noise into the ear cup to cancel the incoming sound. Other units work by capturing sound outside the ear cup, modifying its spectrum to account for the attenuation of the ear cup, and inserting the inverted noise into the ear cup, effectively using the electronics as a timing device so that the electrically inverted sound arrives in the ear cup at the same time as the noise transmitted through the ear cup. Active noise reduction is limited to the reduction of low-frequency noises below 1,000 Hz, with a maximum attenuation of 20 to 25 dB occurring at or below 300 Hz.
However, a portion of the attenuation provided by the active noise reduction system simply offsets the reduction in the earmuff’s passive attenuation caused by housing in the ear cup the very electronics required to effect the active noise reduction. At present these devices cost 10 to 50 times as much as passive earmuffs or earplugs. If the electronics fail, the wearer may be inadequately protected and could experience more noise under the ear cup than if the electronics were simply shut off. As active noise cancellation devices become more popular, costs should diminish and their applicability may become more widespread.
The Best Hearing Protector
The best hearing protector is the one that the wearer will use willingly, 100% of the time. It is estimated that approximately 90% of noise-exposed workers in the manufacturing sector in the United States are exposed to noise levels below 95 dBA (Franks 1988). Such workers need between 13 and 15 dB of attenuation for adequate protection, and a wide array of hearing protectors can provide it. Finding the one that each worker will wear willingly 100% of the time is the challenge.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."