27. Biological Monitoring
Chapter Editor: Robert Lauwerys
Table of Contents
General Principles
Vito Foà and Lorenzo Alessio
Quality Assurance
D. Gompertz
Metals and Organometallic Compounds
P. Hoet and Robert Lauwerys
Organic Solvents
Masayuki Ikeda
Genotoxic Chemicals
Marja Sorsa
Pesticides
Marco Maroni and Adalberto Ferioli
Tables
1. ACGIH, DFG & other limit values for metals
2. Examples of chemicals & biological monitoring
3. Biological monitoring for organic solvents
4. Genotoxicity of chemicals evaluated by IARC
5. Biomarkers & some cell/tissue samples & genotoxicity
6. Human carcinogens, occupational exposure & cytogenetic end points
8. Exposure from production & use of pesticides
9. Acute OP toxicity at different levels of ACHE inhibition
10. Variations of ACHE & PCHE & selected health conditions
11. Cholinesterase activities of unexposed healthy people
12. Urinary alkyl phosphates & OP pesticides
13. Urinary alkyl phosphates measurements & OP
14. Urinary carbamate metabolites
15. Urinary dithiocarbamate metabolites
16. Proposed indices for biological monitoring of pesticides
17. Recommended biological limit values (as of 1996)
28. Epidemiology and Statistics
Chapter Editors: Franco Merletti, Colin L. Soskolne and Paolo Vineis
Epidemiological Method Applied to Occupational Health and Safety
Franco Merletti, Colin L. Soskolne and Paolo Vineis
Exposure Assessment
M. Gerald Ott
Summary Worklife Exposure Measures
Colin L. Soskolne
Measuring Effects of Exposures
Shelia Hoar Zahm
Case Study: Measures
Franco Merletti, Colin L. Soskolne and Paolo Vineis
Options in Study Design
Sven Hernberg
Validity Issues in Study Design
Annie J. Sasco
Impact of Random Measurement Error
Paolo Vineis and Colin L. Soskolne
Statistical Methods
Annibale Biggeri and Mario Braga
Causality Assessment and Ethics in Epidemiological Research
Paolo Vineis
Case Studies Illustrating Methodological Issues in the Surveillance of Occupational Diseases
Jung-Der Wang
Questionnaires in Epidemiological Research
Steven D. Stellman and Colin L. Soskolne
Asbestos Historical Perspective
Lawrence Garfinkel
Tables
1. Five selected summary measures of worklife exposure
2. Measures of disease occurrence
3. Measures of association for a cohort study
4. Measures of association for case-control studies
5. General frequency table layout for cohort data
6. Sample layout of case-control data
7. Layout case-control data - one control per case
8. Hypothetical cohort of 1950 individuals to T2
9. Indices of central tendency & dispersion
10. A binomial experiment & probabilities
11. Possible outcomes of a binomial experiment
12. Binomial distribution, 15 successes/30 trials
13. Binomial distribution, p = 0.25; 30 trials
14. Type II error & power; x = 12, n = 30, a = 0.05
15. Type II error & power; x = 12, n = 40, a = 0.05
16. 632 workers exposed to asbestos 20 years or longer
17. O/E number of deaths among 632 asbestos workers
29. Ergonomics
Chapter Editors: Wolfgang Laurig and Joachim Vedder
Table of Contents
Overview
Wolfgang Laurig and Joachim Vedder
The Nature and Aims of Ergonomics
William T. Singleton
Analysis of Activities, Tasks and Work Systems
Véronique De Keyser
Ergonomics and Standardization
Friedhelm Nachreiner
Checklists
Pranab Kumar Nag
Anthropometry
Melchiorre Masali
Muscular Work
Juhani Smolander and Veikko Louhevaara
Postures at Work
Ilkka Kuorinka
Biomechanics
Frank Darby
General Fatigue
Étienne Grandjean
Fatigue and Recovery
Rolf Helbig and Walter Rohmert
Mental Workload
Winfried Hacker
Vigilance
Herbert Heuer
Mental Fatigue
Peter Richter
Work Organization
Eberhard Ulich and Gudela Grote
Sleep Deprivation
Kazutaka Kogi
Workstations
Roland Kadefors
Tools
T.M. Fraser
Controls, Indicators and Panels
Karl H. E. Kroemer
Information Processing and Design
Andries F. Sanders
Designing for Specific Groups
Joke H. Grady-van den Nieuwboer
Case Study: The International Classification of Functional Limitation in People
Cultural Differences
Houshang Shahnavaz
Elderly Workers
Antoine Laville and Serge Volkoff
Workers with Special Needs
Joke H. Grady-van den Nieuwboer
System Design in Diamond Manufacturing
Issachar Gilad
Disregarding Ergonomic Design Principles: Chernobyl
Vladimir M. Munipov
Tables
1. Basic anthropometric core list
2. Fatigue & recovery dependent on activity levels
3. Rules of combination effects of two stress factors on strain
4. Differentiating among several negative consequences of mental strain
5. Work-oriented principles for production structuring
6. Participation in organizational context
7. User participation in the technology process
8. Irregular working hours & sleep deprivation
9. Aspects of advance, anchor & retard sleeps
10. Control movements & expected effects
11. Control-effect relations of common hand controls
12. Rules for arrangement of controls
30. Occupational Hygiene
Chapter Editor: Robert F. Herrick
Table of Contents
Goals, Definitions and General Information
Berenice I. Ferrari Goelzer
Recognition of Hazards
Linnéa Lillienberg
Evaluation of the Work Environment
Lori A. Todd
Occupational Hygiene: Control of Exposures Through Intervention
James Stewart
The Biological Basis for Exposure Assessment
Dick Heederik
Occupational Exposure Limits
Dennis J. Paustenbach
1. Hazards of chemical, biological & physical agents
2. Occupational exposure limits (OELs) - various countries
31. Personal Protection
Chapter Editor: Robert F. Herrick
Table of Contents
Overview and Philosophy of Personal Protection
Robert F. Herrick
Eye and Face Protectors
Kikuzi Kimura
Foot and Leg Protection
Toyohiko Miura
Head Protection
Isabelle Balty and Alain Mayer
Hearing Protection
John R. Franks and Elliott H. Berger
Protective Clothing
S. Zack Mansdorf
Respiratory Protection
Thomas J. Nelson
Tables
1. Transmittance requirements (ISO 4850-1979)
2. Scales of protection - gas-welding & braze-welding
3. Scales of protection - oxygen cutting
4. Scales of protection - plasma arc cutting
5. Scales of protection - electric arc welding or gouging
6. Scales of protection - plasma direct arc welding
7. Safety helmet: ISO Standard 3873-1977
8. Noise Reduction Rating of a hearing protector
9. Computing the A-weighted noise reduction
10. Examples of dermal hazard categories
11. Physical, chemical & biological performance requirements
12. Material hazards associated with particular activities
13. Assigned protection factors from ANSI Z88.2 (1992)
32. Record Systems and Surveillance
Chapter Editor: Steven D. Stellman
Table of Contents
Occupational Disease Surveillance and Reporting Systems
Steven B. Markowitz
Occupational Hazard Surveillance
David H. Wegman and Steven D. Stellman
Surveillance in Developing Countries
David Koh and Kee-Seng Chia
Development and Application of an Occupational Injury and Illness Classification System
Elyce Biddle
Risk Analysis of Nonfatal Workplace Injuries and Illnesses
John W. Ruser
Case Study: Worker Protection and Statistics on Accidents and Occupational Diseases - HVBG, Germany
Martin Butz and Burkhard Hoffmann
Case Study: Wismut - A Uranium Exposure Revisited
Heinz Otten and Horst Schulz
Measurement Strategies and Techniques for Occupational Exposure Assessment in Epidemiology
Frank Bochmann and Helmut Blome
Case Study: Occupational Health Surveys in China
Tables
1. Angiosarcoma of the liver - world register
2. Occupational illness, US, 1986 versus 1992
3. US Deaths from pneumoconiosis & pleural mesothelioma
4. Sample list of notifiable occupational diseases
5. Illness & injury reporting code structure, US
6. Nonfatal occupational injuries & illnesses, US 1993
7. Risk of occupational injuries & illnesses
8. Relative risk for repetitive motion conditions
9. Workplace accidents, Germany, 1981-93
10. Grinders in metalworking accidents, Germany, 1984-93
11. Occupational disease, Germany, 1980-93
12. Infectious diseases, Germany, 1980-93
13. Radiation exposure in the Wismut mines
14. Occupational diseases in Wismut uranium mines 1952-90
33. Toxicology
Chapter Editor: Ellen K. Silbergeld
Introduction
Ellen K. Silbergeld, Chapter Editor
Definitions and Concepts
Bo Holmberg, Johan Hogberg and Gunnar Johanson
Toxicokinetics
Dušan Djuríc
Target Organ and Critical Effects
Marek Jakubowski
Effects of Age, Sex and Other Factors
Spomenka Telišman
Genetic Determinants of Toxic Response
Daniel W. Nebert and Ross A. McKinnon
Introduction and Concepts
Philip G. Watanabe
Cellular Injury and Cellular Death
Benjamin F. Trump and Irene K. Berezesky
Genetic Toxicology
R. Rita Misra and Michael P. Waalkes
Immunotoxicology
Joseph G. Vos and Henk van Loveren
Target Organ Toxicology
Ellen K. Silbergeld
Biomarkers
Philippe Grandjean
Genetic Toxicity Assessment
David M. DeMarini and James Huff
In Vitro Toxicity Testing
Joanne Zurlo
Structure-Activity Relationships
Ellen K. Silbergeld
Toxicology in Health and Safety Regulation
Ellen K. Silbergeld
Principles of Hazard Identification - The Japanese Approach
Masayuki Ikeda
The United States Approach to Risk Assessment of Reproductive Toxicants and Neurotoxic Agents
Ellen K. Silbergeld
Approaches to Hazard Identification - IARC
Harri Vainio and Julian Wilbourn
Appendix - Overall Evaluations of Carcinogenicity to Humans: IARC Monographs Volumes 1-69 (836)
Carcinogen Risk Assessment: Other Approaches
Cees A. van der Heijden
Basic Concepts and Definitions
At the worksite, industrial hygiene methodologies can measure and control only airborne chemicals, while other aspects of the problem of possible harmful agents in the environment of workers, such as skin absorption, ingestion, and non-work-related exposure, remain undetected and therefore uncontrolled. Biological monitoring helps fill this gap.
Biological monitoring was defined at a 1980 seminar held in Luxembourg, jointly sponsored by the European Economic Community (EEC), the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) (Berlin, Yodaiken and Henman 1984), as “the measurement and assessment of agents or their metabolites either in tissues, secreta, excreta, expired air or any combination of these to evaluate exposure and health risk compared to an appropriate reference”. Monitoring is a repetitive, regular and preventive activity designed to lead, if necessary, to corrective actions; it should not be confused with diagnostic procedures.
Biological monitoring is one of the three important tools in the prevention of diseases due to toxic agents in the general or occupational environment, the other two being environmental monitoring and health surveillance.
The sequence in the possible development of such disease may be schematically represented as follows: source → exposure to the chemical agent → internal dose → biochemical or cellular effect (reversible) → health effect → disease. The relationships among environmental, biological and exposure monitoring, and health surveillance, are shown in figure 1.
Figure 1. The relationship between environmental, biological and exposure monitoring, and health surveillance
When a toxic substance (an industrial chemical, for example) is present in the environment, it contaminates air, water, food, or surfaces in contact with the skin; the amount of toxic agent in these media is evaluated via environmental monitoring.
As a result of absorption, distribution, metabolism, and excretion, a certain internal dose of the toxic agent (the net amount of a pollutant absorbed in or passed through the organism over a specific time interval) is effectively delivered to the body, and becomes detectable in body fluids. As a result of its interaction with a receptor in the critical organ (the organ which, under specific conditions of exposure, exhibits the first or the most important adverse effect), biochemical and cellular events occur. Both the internal dose and the elicited biochemical and cellular effects may be measured through biological monitoring.
Health surveillance was defined at the above-mentioned 1980 EEC/NIOSH/OSHA seminar as “the periodic medico-physiological examination of exposed workers with the objective of protecting health and preventing disease”.
Biological monitoring and health surveillance are parts of a continuum that can range from the measurement of agents or their metabolites in the body via evaluation of biochemical and cellular effects, to the detection of signs of early reversible impairment of the critical organ. The detection of established disease is outside the scope of these evaluations.
Goals of Biological Monitoring
Biological monitoring can be divided into (a) monitoring of exposure, and (b) monitoring of effect, for which indicators of internal dose and of effect are used respectively.
The purpose of biological monitoring of exposure is to assess health risk through the evaluation of internal dose, achieving an estimate of the biologically active body burden of the chemical in question. Its rationale is to ensure that worker exposure does not reach levels capable of eliciting adverse effects. An effect is termed “adverse” if there is an impairment of functional capacity, a decreased ability to compensate for additional stress, a decreased ability to maintain homeostasis (a stable state of equilibrium), or an enhanced susceptibility to other environmental influences.
Depending on the chemical and the analysed biological parameter, the term internal dose may have different meanings (Bernard and Lauwerys 1987). First, it may mean the amount of a chemical recently absorbed, for example, during a single workshift. A determination of the pollutant’s concentration in alveolar air or in the blood may be made during the workshift itself, or as late as the next day (samples of blood or alveolar air may be taken up to 16 hours after the end of the exposure period). Second, in the case that the chemical has a long biological half-life—for example, metals in the bloodstream—the internal dose could reflect the amount absorbed over a period of a few months.
Third, the term may also mean the amount of chemical stored. In this case it represents an indicator of accumulation which can provide an estimate of the concentration of the chemical in organs and/or tissues from which, once deposited, it is only slowly released. For example, measurements of DDT or PCB in blood could provide such an estimate.
Finally, an internal dose value may indicate the quantity of the chemical at the site where it exerts its effects, thus providing information about the biologically effective dose. One of the most promising and important uses of this capability, for example, is the determination of adducts formed by toxic chemicals with protein in haemoglobin or with DNA.
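To illustrate the second of the meanings listed above (a chemical with a long biological half-life), the following Python sketch simulates a one-compartment model with first-order elimination. The daily uptake and half-life are hypothetical values chosen only to show why, for such substances, a single measurement reflects absorption over the preceding months rather than a single workshift.

```python
# Minimal sketch (hypothetical values): a one-compartment, first-order model
# illustrating why a biomarker with a long biological half-life integrates
# exposure over weeks or months rather than over a single workshift.
import math

def body_burden(daily_uptake_ug, half_life_days, n_days):
    """Body burden after n_days of constant daily uptake with first-order elimination."""
    k = math.log(2) / half_life_days              # elimination rate constant (1/day)
    burden = 0.0
    for _ in range(n_days):
        burden = burden * math.exp(-k) + daily_uptake_ug   # eliminate, then add today's uptake
    return burden

if __name__ == "__main__":
    # Hypothetical exposure: 10 µg absorbed per day, biological half-life 30 days
    for days in (1, 7, 30, 90, 180):
        print(days, "days:", round(body_burden(10, 30, days), 1), "µg")
    # The burden keeps rising for several half-lives before approaching a plateau,
    # so the measured level reflects the cumulative uptake of the past months.
```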
Biological monitoring of effects is aimed at identifying early and reversible alterations which develop in the critical organ, and which, at the same time, can identify individuals with signs of adverse health effects. In this sense, biological monitoring of effects represents the principal tool for the health surveillance of workers.
Principal Monitoring Methods
Biological monitoring of exposure is based on the determination of indicators of internal dose, measured as the concentration of the chemical or its metabolites in biological media (most commonly blood and urine), as its concentration in alveolar air, or as the adducts it forms with macromolecules such as haemoglobin or DNA.
Factors affecting the concentration of the chemical and its metabolites in blood or urine will be discussed below.
As far as the concentration in alveolar air is concerned, besides the level of environmental exposure, the most important factors involved are solubility and metabolism of the inhaled substance, alveolar ventilation, cardiac output, and length of exposure (Brugnone et al. 1980).
The use of DNA and haemoglobin adducts in monitoring human exposure to substances with carcinogenic potential is a very promising technique for the measurement of low-level exposures. (It should be noted, however, that not all chemicals that bind to macromolecules in the human organism are genotoxic, i.e., potentially carcinogenic.) Adduct formation is only one step in the complex process of carcinogenesis. Other cellular events, such as DNA repair, promotion and progression, undoubtedly modify the risk of developing a disease such as cancer. Thus, at the present time, the measurement of adducts should be seen as confined to monitoring exposure to chemicals. This is discussed more fully in the article “Genotoxic chemicals” later in this chapter.
Biological monitoring of effects is performed through the determination of indicators of effect, that is, those that can identify early and reversible alterations. This approach may provide an indirect estimate of the amount of chemical bound to the sites of action and offers the possibility of assessing functional alterations in the critical organ in an early phase.
Unfortunately, we can list only a few examples of the application of this approach, namely, (1) the inhibition of pseudocholinesterase by organophosphate insecticides, (2) the inhibition of δ-aminolaevulinic acid dehydratase (ALA-D) by inorganic lead, and (3) the increased urinary excretion of D-glucaric acid and porphyrins in subjects exposed to chemicals inducing microsomal enzymes and/or to porphyrogenic agents (e.g., chlorinated hydrocarbons).
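Effect indicators based on enzyme inhibition, such as the cholinesterase and ALA-D examples above, are usually expressed relative to a baseline. The short Python sketch below, with hypothetical activity values, shows one plausible way of computing percent inhibition against an individual's pre-exposure baseline; it is an illustration, not a prescribed procedure.

```python
# Minimal sketch: expressing an effect indicator as percent inhibition relative
# to the worker's own pre-exposure baseline (values are hypothetical).
def percent_inhibition(baseline_activity, measured_activity):
    """Percent loss of enzyme activity relative to the individual baseline."""
    return 100.0 * (baseline_activity - measured_activity) / baseline_activity

# Example: pre-exposure plasma cholinesterase 4.8 U/ml, current value 3.1 U/ml
print(round(percent_inhibition(4.8, 3.1), 1), "% inhibition")
```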
Advantages and Limitations of Biological Monitoring
For substances that exert their toxicity after entering the human organism, biological monitoring provides a more focused and targeted assessment of health risk than does environmental monitoring. A biological parameter reflecting the internal dose brings us one step closer to understanding systemic adverse effects than does any environmental measurement.
Biological monitoring offers numerous advantages over environmental monitoring and in particular permits assessment of:
In spite of these advantages, biological monitoring still suffers today from considerable limitations, the most significant of which are the following:
Information Required for the Development of Methods and Criteria for Selecting Biological Tests
Programming biological monitoring requires the following basic conditions:
In this context, the validity of a test is the degree to which the parameter under consideration predicts the situation as it really is (i.e., as more accurate measuring instruments would show it to be). Validity is determined by the combination of two properties: sensitivity and specificity. If a test possesses a high sensitivity, this means that it will give few false negatives; if it possesses high specificity, it will give few false positives (CEC 1985-1989).
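The sensitivity and specificity just defined can be estimated from validation data in which the test result is compared with the true status of each subject. The Python sketch below simply encodes the two definitions, using hypothetical counts of true and false positives and negatives.

```python
# Minimal sketch: sensitivity and specificity of a biological test computed
# from hypothetical counts of true/false positives and negatives.
def sensitivity(true_pos, false_neg):
    """Proportion of truly affected subjects detected (few false negatives = high sensitivity)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of unaffected subjects correctly classified (few false positives = high specificity)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation data for a candidate biological test
print("sensitivity:", round(sensitivity(true_pos=45, false_neg=5), 2))
print("specificity:", round(specificity(true_neg=90, false_pos=10), 2))
```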
Relationship between exposure, internal dose and effects
The study of the concentration of a substance in the working environment and the simultaneous determination of the indicators of dose and effect in exposed subjects allows information to be obtained on the relationship between occupational exposure and the concentration of the substance in biological samples, and between the latter and the early effects of exposure.
Knowledge of the relationships between the dose of a substance and the effect it produces is an essential requirement if a programme of biological monitoring is to be put into effect. The evaluation of this dose-effect relationship is based on the analysis of the degree of association existing between the indicator of dose and the indicator of effect and on the study of the quantitative variations of the indicator of effect with every variation of the indicator of dose. (See also the chapter Toxicology for further discussion of dose-effect and dose-response relationships.)
With the study of the dose-effect relationship it is possible to identify the concentration of the toxic substance at which the indicator of effect exceeds the values currently considered not harmful. Furthermore, in this way it may also be possible to examine what the no-effect level might be.
Since not all the individuals of a group react in the same manner, it is necessary to examine the dose-response relationship, in other words, to study how the group responds to exposure by evaluating the appearance of the effect compared to the internal dose. The term response denotes the percentage of subjects in the group who show a specific quantitative variation of an effect indicator at each dose level.
Practical Applications of Biological Monitoring
The practical application of a biological monitoring programme requires information on (1) the behaviour of the indicators used in relation to exposure, especially those relating to degree, continuity and duration of exposure, (2) the time interval between end of exposure and measurement of the indicators, and (3) all physiological and pathological factors other than exposure that can alter the indicator levels.
In the following articles the behaviour of a number of biological indicators of dose and effect that are used for monitoring occupational exposure to substances widely used in industry will be presented. The practical usefulness and limits will be assessed for each substance, with particular emphasis on time of sampling and interfering factors. Such considerations will be helpful in establishing criteria for selecting a biological test.
Time of sampling
In selecting the time of sampling, the different kinetic aspects of the chemical must be kept in mind; in particular it is essential to know how the substance is absorbed via the lung, the gastrointestinal tract and the skin, subsequently distributed to the different compartments of the body, biotransformed, and finally eliminated. It is also important to know whether the chemical may accumulate in the body.
With respect to exposure to organic substances, the collection time of biological samples becomes all the more important in view of the different velocity of the metabolic processes involved and consequently the more or less rapid excretion of the absorbed dose.
Interfering Factors
Correct use of biological indicators requires a thorough knowledge of those factors which, although independent of exposure, may nevertheless affect the biological indicator levels. The following are the most important types of interfering factors (Alessio, Berlin and Foà 1987).
Physiological factors such as diet, sex and age can affect results. Consumption of fish and crustaceans may increase the levels of urinary arsenic and blood mercury. At the same blood lead levels, female subjects show significantly higher erythrocyte protoporphyrin values than male subjects. The levels of urinary cadmium increase with age.
Among the personal habits that can distort indicator levels, smoking and alcohol consumption are particularly important. Smoking may cause direct absorption of substances naturally present in tobacco leaves (e.g., cadmium), or of pollutants present in the working environment that have been deposited on the cigarettes (e.g., lead), or of combustion products (e.g., carbon monoxide).
Alcohol consumption may influence biological indicator levels, since substances such as lead are naturally present in alcoholic beverages. Heavy drinkers, for example, show higher blood lead levels than control subjects. Ingestion of alcohol can interfere with the biotransformation and elimination of toxic industrial compounds: a single dose of alcohol can inhibit the metabolism of many solvents, for example trichloroethylene, xylene, styrene and toluene, because these solvents compete with ethanol for the enzymes essential for the breakdown of both. Regular alcohol ingestion can also affect the metabolism of solvents in a quite different manner, accelerating solvent metabolism, presumably through induction of the microsomal oxidizing system. Since ethanol is the most important substance capable of inducing metabolic interference, it is advisable to determine indicators of exposure for solvents only on days when alcohol has not been consumed.
Less information is available on the possible effects of drugs on the levels of biological indicators. It has been demonstrated that aspirin can interfere with the biological transformation of xylene to methylhippuric acid, and phenylsalicylate, a drug widely used as an analgesic, can significantly increase the levels of urinary phenols. The consumption of aluminium-based antacid preparations can give rise to increased levels of aluminium in plasma and urine.
Marked differences have been observed in different ethnic groups in the metabolism of widely used solvents such as toluene, xylene, trichloroethylene, tetrachloroethylene, and methylchloroform.
Acquired pathological states can influence the levels of biological indicators. The critical organ can behave anomalously with respect to biological monitoring tests because of the specific action of the toxic agent as well as for other reasons. An example of situations of the first type is the behaviour of urinary cadmium levels: when tubular disease due to cadmium sets in, urinary excretion increases markedly and the levels of the test no longer reflect the degree of exposure. An example of the second type of situation is the increase in erythrocyte protoporphyrin levels observed in iron-deficient subjects who show no abnormal lead absorption.
Physiological changes in the biological media—urine, for example—on which determinations of the biological indicators are based, can influence the test values. For practical purposes, only spot urinary samples can be obtained from individuals during work, and the varying density of these samples means that the levels of the indicator can fluctuate widely in the course of a single day.
In order to overcome this difficulty, it is advisable to eliminate over-diluted or over-concentrated samples according to selected specific gravity or creatinine values. In particular, urine with a specific gravity below 1.010 or above 1.030, or with a creatinine concentration below 0.5 g/l or above 3.0 g/l, should be discarded. Several authors also suggest adjusting the values of the indicators according to specific gravity or expressing the values per gram of urinary creatinine.
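A minimal sketch of these screening and adjustment rules is given below in Python. The acceptance limits follow the text; the sample values and the choice to express the result per gram of creatinine are illustrative assumptions.

```python
# Minimal sketch of the screening and adjustment rules described above
# (sample data are hypothetical).
def urine_sample_acceptable(specific_gravity, creatinine_g_l):
    """Reject over-diluted or over-concentrated spot samples."""
    return 1.010 <= specific_gravity <= 1.030 and 0.5 <= creatinine_g_l <= 3.0

def creatinine_adjusted(analyte_mg_l, creatinine_g_l):
    """Express the indicator per gram of creatinine to reduce dilution effects."""
    return analyte_mg_l / creatinine_g_l          # mg analyte per g creatinine

sample = {"specific_gravity": 1.018, "creatinine_g_l": 1.4, "metabolite_mg_l": 0.9}
if urine_sample_acceptable(sample["specific_gravity"], sample["creatinine_g_l"]):
    print(round(creatinine_adjusted(sample["metabolite_mg_l"], sample["creatinine_g_l"]), 2),
          "mg/g creatinine")
else:
    print("sample discarded")
```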
Pathological changes in the biological media can also considerably influence the values of the biological indicators. For example, in anaemic subjects exposed to metals (mercury, cadmium, lead, etc.) the blood levels of the metal may be lower than would be expected on the basis of exposure; this is due to the low level of red blood cells that transport the toxic metal in the blood circulation.
Therefore, when determinations of toxic substances or metabolites bound to red blood cells are made on whole blood, it is always advisable to determine the haematocrit, which gives a measure of the percentage of blood cells in whole blood.
Multiple exposure to toxic substances present in the workplace
In the case of combined exposure to more than one toxic substance present at the workplace, metabolic interferences may occur that can alter the behaviour of the biological indicators and thus create serious problems in interpretation. In human studies, interferences have been demonstrated, for example, in combined exposure to toluene and xylene, xylene and ethylbenzene, toluene and benzene, hexane and methyl ethyl ketone, tetrachloroethylene and trichloroethylene.
In particular, it should be noted that when biotransformation of a solvent is inhibited, the urinary excretion of its metabolite is reduced (possible underestimation of risk) whereas the levels of the solvent in blood and expired air increase (possible overestimation of risk).
Thus, in situations in which it is possible to measure simultaneously the substances and their metabolites in order to interpret the degree of inhibitory interference, it would be useful to check whether the levels of the urinary metabolites are lower than expected and at the same time whether the concentration of the solvents in blood and/or expired air is higher.
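One plausible way to automate such a cross-check is sketched below in Python: each indicator is compared with the value that would be expected from the measured airborne exposure, and a possible inhibitory interference is flagged when the urinary metabolite is clearly low while the unchanged solvent in blood or expired air is clearly high. The expected values and the 25% tolerance are hypothetical choices, not established criteria.

```python
# Minimal sketch of the cross-check suggested above: compare each indicator with
# the value expected for the measured airborne exposure (all reference ratios and
# sample values are hypothetical).
def interference_suspected(urinary_metabolite, expected_metabolite,
                           blood_solvent, expected_blood_solvent,
                           tolerance=0.25):
    """Flag possible inhibition of biotransformation: metabolite clearly lower than
    expected while the unchanged solvent in blood (or expired air) is clearly higher."""
    metabolite_low = urinary_metabolite < (1 - tolerance) * expected_metabolite
    solvent_high = blood_solvent > (1 + tolerance) * expected_blood_solvent
    return metabolite_low and solvent_high

print(interference_suspected(urinary_metabolite=0.6, expected_metabolite=1.0,
                             blood_solvent=1.5, expected_blood_solvent=1.0))
```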
Metabolic interferences have been described for exposures where the single substances are present in levels close to and sometimes below the currently accepted limit values. Interferences, however, do not usually occur when exposure to each substance present in the workplace is low.
Practical Use of Biological Indicators
Biological indicators can be used for various purposes in occupational health practice, in particular for (1) periodic control of individual workers, (2) analysis of the exposure of a group of workers, and (3) epidemiological assessments. The tests used should possess the features of precision, accuracy, good sensitivity, and specificity in order to minimize the possible number of false classifications.
Reference values and reference groups
A reference value is the level of a biological indicator in the general population not occupationally exposed to the toxic substance under study. It is necessary to refer to these values in order to compare the data obtained through biological monitoring programmes in a population which is presumed to be exposed. Reference values should not be confused with limit values, which generally are the legal limits or guidelines for occupational and environmental exposure (Alessio et al. 1992).
When it is necessary to compare the results of group analyses, the distribution of the values in the reference group and in the group under study must be known, because only then can a statistical comparison be made. In these cases, it is essential to attempt to match the general population (reference group) with the exposed group for characteristics such as sex, age, lifestyle and eating habits.
To obtain reliable reference values one must make sure that the subjects making up the reference group have never been exposed to the toxic substances, either occupationally or due to particular conditions of environmental pollution.
In assessing exposure to toxic substances, one must be careful not to include subjects who, although not directly exposed to the toxic substance in question, work in the same workplace, since if these subjects are in fact indirectly exposed, the exposure of the group may as a result be underestimated.
Another practice to avoid, although it is still widespread, is the use for reference purposes of values reported in the literature that are derived from case lists from other countries and may often have been collected in regions where different environmental pollution situations exist.
Periodic monitoring of individual workers
Periodic monitoring of individual workers is mandatory when the levels of the toxic substance in the atmosphere of the working environment approach the limit value. Where possible, it is advisable to simultaneously check an indicator of exposure and an indicator of effect. The data thus obtained should be compared with the reference values and the limit values suggested for the substance under study (ACGIH 1993).
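As a simple illustration of such a comparison, the Python sketch below classifies one worker's result against an upper reference value and a biological limit value. The numerical values are placeholders, not actual reference values or ACGIH BEIs, and the wording of the categories is only indicative.

```python
# Minimal sketch: classifying an individual result against a reference value and
# a biological limit value (the numbers are placeholders, not published limits).
def classify_result(measured, reference_upper, biological_limit):
    """Interpret one worker's indicator of exposure."""
    if measured <= reference_upper:
        return "within the range of the non-occupationally exposed population"
    if measured <= biological_limit:
        return "occupational uptake present, below the suggested limit value"
    return "above the suggested limit value - review exposure and controls"

print(classify_result(measured=28.0, reference_upper=10.0, biological_limit=35.0))
```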
Analysis of a group of workers
Analysis of a group becomes mandatory when the results of the biological indicators used can be markedly influenced by factors independent of exposure (diet, concentration or dilution of urine, etc.) and for which a wide range of “normal” values exists.
In order to ensure that the group study will furnish useful results, the group must be sufficiently numerous and homogeneous as regards exposure, sex, and, in the case of some toxic agents, work seniority. The more the exposure levels are constant over time, the more reliable the data will be. An investigation carried out in a workplace where the workers frequently change department or job will have little value. For a correct assessment of a group study it is not sufficient to express the data only as mean values and range. The frequency distribution of the values of the biological indicator in question must also be taken into account.
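The Python sketch below illustrates, with hypothetical data, the kind of group summary suggested here: the distribution of the indicator is described by its median, quartiles and range rather than by the mean and range alone, for both the exposed group and a reference group.

```python
# Minimal sketch: summarizing a group's biological indicator values by more than
# the mean and range, as recommended above (all values are hypothetical).
import statistics

exposed   = [12, 15, 9, 22, 18, 14, 30, 11, 16, 25]    # e.g. µg/l in urine
reference = [4, 6, 3, 7, 5, 8, 4, 6, 5, 7]

def summarize(values):
    return {
        "n": len(values),
        "mean": round(statistics.mean(values), 1),
        "median": statistics.median(values),
        "range": (min(values), max(values)),
        "quartiles": statistics.quantiles(values, n=4),   # describes the distribution
    }

print("exposed:  ", summarize(exposed))
print("reference:", summarize(reference))
# A formal statistical comparison (e.g., a rank-based test) should then be made
# between the two distributions, not just between their means.
```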
Epidemiological assessments
Data obtained from biological monitoring of groups of workers can also be used in cross-sectional or prospective epidemiological studies.
Cross-sectional studies can be used to compare the situations existing in different departments of the factory or in different industries in order to set up risk maps for manufacturing processes. A difficulty that may be encountered in this type of application depends on the fact that inter-laboratory quality controls are not yet sufficiently widespread; thus it cannot be guaranteed that different laboratories will produce comparable results.
Prospective studies serve to assess the behaviour over time of the exposure levels so as to check, for example, the efficacy of environmental improvements or to correlate the behaviour of biological indicators over the years with the health status of the subjects being monitored. The results of such long-term studies are very useful in solving problems involving changes over time. At present, biological monitoring is mainly used as a suitable procedure for assessing whether current exposure is judged to be “safe,” but it is as yet not valid for assessing situations over time. A given level of exposure considered safe today may no longer be regarded as such at some point in the future.
Ethical Aspects
Some ethical considerations arise in connection with the use of biological monitoring as a tool to assess potential toxicity. One goal of such monitoring is to assemble enough information to decide what level of any given effect constitutes an undesirable effect; in the absence of sufficient data, any perturbation will be considered undesirable. The regulatory and legal implications of this type of information need to be evaluated. Therefore, we should seek societal discussion and consensus as to the ways in which biological indicators should best be used. In other words, education is required of workers, employers, communities and regulatory authorities as to the meaning of the results obtained by biological monitoring so that no one is either unduly alarmed or complacent.
There must be appropriate communication with the individual upon whom the test has been performed concerning the results and their interpretation. Further, whether or not the use of some indicators is experimental should be clearly conveyed to all participants.
The International Code of Ethics for Occupational Health Professionals, issued by the International Commission on Occupational Health in 1992, stated that “biological tests and other investigations must be chosen from the point of view of their validity for protection of the health of the worker concerned, with due regard to their sensitivity, their specificity and their predictive value”. Use must not be made of tests “which are not reliable or which do not have a sufficient predictive value in relation to the requirements of the work assignment”. (See the chapter Ethical Issues for further discussion and the text of the Code.)
Trends in Regulation and Application
Biological monitoring can be carried out for only a limited number of environmental pollutants on account of the limited availability of appropriate reference data. This imposes important limitations on the use of biological monitoring in evaluating exposure.
The World Health Organization (WHO), for example, has proposed health-based reference values for lead, mercury and cadmium only. These values are defined as levels in blood and urine not linked to any detectable adverse effect. The American Conference of Governmental Industrial Hygienists (ACGIH) has established biological exposure indices (BEIs) for about 26 compounds; BEIs are defined as “values for determinants which are indicators of the degree of integrated exposure to industrial chemicals” (ACGIH 1995).
Definition and Scope
Ergonomics means literally the study or measurement of work. In this context, the term work signifies purposeful human function; it extends beyond the more restricted concept of work as labour for monetary gain to incorporate all activities whereby a rational human operator systematically pursues an objective. Thus it includes sports and other leisure activities, domestic work such as child care and home maintenance, education and training, health and social service, and either controlling engineered systems or adapting to them, for example, as a passenger in a vehicle.
The human operator, the focus of study, may be a skilled professional operating a complex machine in an artificial environment, a customer who has casually purchased a new piece of equipment for personal use, a child sitting in a classroom or a disabled person in a wheelchair. The human being is highly adaptable but not infinitely so. There are ranges of optimum conditions for any activity. One of the tasks of ergonomics is to define what these ranges are and to explore the undesirable effects which occur if the limits are transgressed—for example if a person is expected to work in conditions of excessive heat, noise or vibration, or if the physical or mental workload is too high or too low.
Ergonomics examines not only the passive ambient situation but also the unique advantages of the human operator and the contributions that can be made if a work situation is designed to permit and encourage the person to make the best use of his or her abilities. Human abilities may be characterized not only with reference to the generic human operator but also with respect to those more particular abilities that are called upon in specific situations where high performance is essential. For example, an automobile manufacturer will consider the range of physical size and strength of the population of drivers who are expected to use a particular model to ensure that the seats are comfortable, that the controls are readily identifiable and within reach, that there is clear visibility to the front and the rear, and that the internal instruments are easy to read. Ease of entry and egress will also be taken into account. By contrast, the designer of a racing car will assume that the driver is athletic so that ease of getting in and out, for example, is not important and, in fact, design features as a whole as they relate to the driver may well be tailored to the dimensions and preferences of a particular driver to ensure that he or she can exercise his or her full potential and skill as a driver.
In all situations, activities and tasks the focus is the person or persons involved. It is assumed that the structure, the engineering and any other technology is there to serve the operator, not the other way round.
History and Status
About a century ago it was recognized that working hours and conditions in some mines and factories were not tolerable in terms of safety and health, and the need was evident to pass laws to set permissible limits in these respects. The determination and statement of those limits can be regarded as the beginning of ergonomics. They were, incidentally, the beginning of all the activities which now find expression through the work of the International Labour Organization (ILO).
Research, development and application proceeded slowly until the Second World War. This triggered greatly accelerated development of machines and instrumentation such as vehicles, aircraft, tanks, guns and vastly improved sensing and navigation devices. As technology advanced, greater flexibility was available to allow adaptation to the operator, an adaptation that became the more necessary because human performance was limiting the performance of the system. If a powered vehicle can travel at a speed of only a few kilometres per hour there is no need to worry about the performance of the driver, but when the vehicle’s maximum speed is increased by a factor of ten or a hundred, then the driver has to react more quickly and there is no time to correct mistakes to avert disaster. Similarly, as technology is improved there is less need to worry about mechanical or electrical failure (for instance) and attention is freed to think about the needs of the driver.
Thus ergonomics, in the sense of adapting engineering technology to the needs of the operator, becomes simultaneously both more necessary and more feasible as engineering advances.
The term ergonomics came into use about 1950 when the priorities of developing industry were taking over from the priorities of the military. The development of research and application for the following thirty years is described in detail in Singleton (1982). The United Nations agencies, particularly the ILO and the World Health Organization (WHO), became active in this field in the 1960s.
In immediate postwar industry the overriding objective, shared by ergonomics, was greater productivity. This was a feasible objective for ergonomics because so much industrial productivity was determined directly by the physical effort of the workers involved—speed of assembly and rate of lifting and movement determined the extent of output. Gradually, mechanical power replaced human muscle power. More power, however, leads to more accidents on the simple principle that an accident is the consequence of power in the wrong place at the wrong time. When things are happening faster, the potential for accidents is further increased. Thus the concern of industry and the aim of ergonomics gradually shifted from productivity to safety. This occurred in the 1960s and early 1970s. About and after this time, much of manufacturing industry shifted from batch production to flow and process production. The role of the operator shifted correspondingly from direct participation to monitoring and inspection. This resulted in a lower frequency of accidents because the operator was more remote from the scene of action but sometimes in a greater severity of accidents because of the speed and power inherent in the process.
When output is determined by the speed at which machines function then productivity becomes a matter of keeping the system running: in other words, reliability is the objective. Thus the operator becomes a monitor, a trouble-shooter and a maintainer rather than a direct manipulator.
This historical sketch of the postwar changes in manufacturing industry might suggest that the ergonomist has regularly dropped one set of problems and taken up another set but this is not the case for several reasons. As explained earlier, the concerns of ergonomics are much wider than those of manufacturing industry. In addition to production ergonomics, there is product or design ergonomics, that is, adapting the machine or product to the user. In the car industry, for example, ergonomics is important not only to component manufacturing and the production lines but also to the eventual driver, passenger and maintainer. It is now routine in the marketing of cars and in their critical appraisal by others to review the quality of the ergonomics, considering ride, seat comfort, handling, noise and vibration levels, ease of use of controls, visibility inside and outside, and so on.
It was suggested above that human performance is usually optimized within a tolerance range of a relevant variable. Much of the early ergonomics attempted to reduce both muscle power output and the extent and variety of movement by way of ensuring that such tolerances were not exceeded. The greatest change in the work situation, the advent of computers, has created the opposite problem. Unless it is well designed ergonomically, a computer workspace can induce too fixed a posture, too little bodily movement and too much repetition of particular combinations of joint movements.
This brief historical review is intended to indicate that, although there has been continuous development of ergonomics, it has taken the form of adding more and more problems rather than changing the problems. However, the corpus of knowledge grows and becomes more reliable and valid: energy expenditure norms do not depend on how or why the energy is expended, postural issues are the same in aircraft seats and in front of computer screens, and much human activity now involves video screens, for which there are well-established principles based on a mix of laboratory evidence and field studies.
Ergonomics and Related Disciplines
The development of a science-based application which is intermediate between the well-established technologies of engineering and medicine inevitably overlaps into many related disciplines. In terms of its scientific basis, much of ergonomic knowledge derives from the human sciences: anatomy, physiology and psychology. The physical sciences also make a contribution, for example, to solving problems of lighting, heating, noise and vibration.
Most of the European pioneers in ergonomics came from the human sciences, and it is for this reason that ergonomics is well balanced between physiology and psychology. A physiological orientation is required as a background to problems such as energy expenditure, posture and the application of forces, including lifting. A psychological orientation is required to study problems such as information presentation and job satisfaction. There are of course many problems, such as stress, fatigue and shift work, which require a mixed human sciences approach.
Most of the American pioneers in this field were involved in either experimental psychology or engineering and it is for this reason that their typical occupational titles—human engineering and human factors—reflect a difference in emphasis (but not in core interests) from European ergonomics. This also explains why occupational hygiene, from its close relationship to medicine, particularly occupational medicine, is regarded in the United States as quite different from human factors or ergonomics. The difference in other parts of the world is less marked. Ergonomics concentrates on the human operator in action, occupational hygiene concentrates on the hazards to the human operator present in the ambient environment. Thus the central interest of the occupational hygienist is toxic hazards, which are outside the scope of the ergonomist. The occupational hygienist is concerned about effects on health, either long-term or short-term; the ergonomist is, of course, concerned about health but he or she is also concerned about other consequences, such as productivity, work design and workspace design. Safety and health are the generic issues which run through ergonomics, occupational hygiene, occupational health and occupational medicine. It is, therefore, not surprising to find that in a large institution of a research, design or production kind, these subjects are often grouped together. This makes possible an approach based on a team of experts in these separate subjects, each making a specialist contribution to the general problem of health, not only of the workers in the institution but also of those affected by its activities and products. By contrast, in institutions concerned with design or provision of services, the ergonomist might be closer to the engineers and other technologists.
It will be clear from this discussion that because ergonomics is interdisciplinary and still quite new there is an important problem of how it should best be fitted into an existing organization. It overlaps onto so many other fields because it is concerned with people and people are the basic and all-pervading resource of every organization. There are many ways in which it can be fitted in, depending on the history and objectives of the particular organization. The main criteria are that ergonomics objectives are understood and appreciated and that mechanisms for implementation of recommendations are built into the organization.
Aims of Ergonomics
It will be clear already that the benefits of ergonomics can appear in many different forms, in productivity and quality, in safety and health, in reliability, in job satisfaction and in personal development.
The reason for this breadth of scope is that its basic aim is efficiency in purposeful activity—efficiency in the widest sense of achieving the desired result without wasteful input, without error and without damage to the person involved or to others. It is not efficient to expend unnecessary energy or time because insufficient thought has been given to the design of the work, the workspace, the working environment and the working conditions. It is not efficient to achieve the desired result in spite of the situation design rather than with support from it.
The aim of ergonomics is to ensure that the working situation is in harmony with the activities of the worker. This aim is self-evidently valid but attaining it is far from easy for a variety of reasons. The human operator is flexible and adaptable and there is continuous learning, but there are quite large individual differences. Some differences, such as physical size and strength, are obvious, but others, such as cultural differences and differences in style and in level of skill, are less easy to identify.
In view of these complexities it might seem that the solution is to provide a flexible situation where the human operator can optimize a specifically appropriate way of doing things. Unfortunately such an approach is sometimes impracticable because the more efficient way is often not obvious, with the result that a worker can go on doing something the wrong way or in the wrong conditions for years.
Thus it is necessary to adopt a systematic approach: to start from a sound theory, to set measurable objectives and to check success against these objectives. The various possible objectives are considered below.
Safety and health
There can be no disagreement about the desirability of safety and health objectives. The difficulty stems from the fact that neither is directly measurable: their achievement is assessed by their absence rather than their presence. The data in question always pertain to departures from safety and health.
In the case of health, much of the evidence is long-term as it is based on populations rather than individuals. It is, therefore, necessary to maintain careful records over long periods and to adopt an epidemiological approach through which risk factors can be identified and measured. For example, what should be the maximum hours per day or per year required of a worker at a computer workstation? It depends on the design of the workstation, the kind of work and the kind of person (age, vision, abilities and so on). The effects on health can be diverse, from wrist problems to mental apathy, so it is necessary to carry out comprehensive studies covering quite large populations while simultaneously keeping track of differences within the populations.
Safety is more directly measurable in a negative sense in terms of kinds and frequencies of accidents and damage. There are problems in defining different kinds of accidents and identifying the often multiple causal factors and there is often a distant relationship between the kind of accident and the degree of harm, from none to fatality.
Nevertheless, an enormous body of evidence concerning safety and health has been accumulated over the past fifty years and consistencies have been discovered which can be related back to theory, to laws and standards and to principles operative in particular kinds of situations.
Productivity and efficiency
Productivity is usually defined in terms of output per unit of time, whereas efficiency incorporates other variables, particularly the ratio of output to input. Efficiency incorporates the cost of what is done in relation to achievement, and in human terms this requires the consideration of the penalties to the human operator.
In industrial situations, productivity is relatively easy to measure: the amount produced can be counted and the time taken to produce it is simple to record. Productivity data are often used in before/after comparisons of working methods, situations or conditions. It involves assumptions about equivalence of effort and other costs because it is based on the principle that the human operator will perform as well as is feasible in the circumstances. If the productivity is higher then the circumstances must be better. There is much to recommend this simple approach provided that it is used with due regard to the many possible complicating factors which can disguise what is really happening. The best safeguard is to try to make sure that nothing has changed between the before and after situations except the aspects being studied.
Efficiency is a more comprehensive but always a more difficult measure. It usually has to be specifically defined for a particular situation and in assessing the results of any studies the definition should be checked for its relevance and validity in terms of the conclusions being drawn. For example, is bicycling more efficient than walking? Bicycling is much more productive in terms of the distance that can be covered on a road in a given time, and it is more efficient in terms of energy expenditure per unit of distance or, for indoor exercise, because the apparatus required is cheaper and simpler. On the other hand, the purpose of the exercise might be energy expenditure for health reasons or to climb a mountain over difficult terrain; in these circumstances walking will be more efficient. Thus, an efficiency measure has meaning only in a well-defined context.
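The distinction can be made concrete with a small worked example. The Python sketch below uses purely illustrative figures for a hypothetical 10 km trip; it is not based on measured data, and its only purpose is to show that productivity (output per unit time) and efficiency (output per unit of input, here energy) are different measures whose interpretation depends on the purpose chosen.

```python
# Minimal sketch with illustrative (not measured) figures: productivity is output
# per unit time, efficiency relates output to the input it costs.
def productivity(distance_km, time_h):
    return distance_km / time_h                  # km covered per hour

def efficiency(distance_km, energy_kcal):
    return distance_km / energy_kcal             # km covered per kcal expended

# Hypothetical trip of 10 km
walking = {"time_h": 2.0, "energy_kcal": 600}
cycling = {"time_h": 0.5, "energy_kcal": 300}

for mode, d in (("walking", walking), ("cycling", cycling)):
    print(mode,
          "productivity:", productivity(10, d["time_h"]), "km/h,",
          "efficiency:", round(efficiency(10, d["energy_kcal"]), 3), "km/kcal")
# Whether cycling is "more efficient" depends on the purpose: if the goal is
# energy expenditure for health, the lower kcal cost counts against it.
```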
Reliability and quality
As explained above, reliability rather than productivity becomes the key measure in high technology systems (for instance, transport aircraft, oil refining and power generation). The controllers of such systems monitor performance and make their contribution to productivity and to safety by making tuning adjustments to ensure that the automatic machines stay on line and function within limits. All these systems are in their safest states either when they are quiescent or when they are functioning steadily within the designed performance envelope. They become more dangerous when moving or being moved between equilibrium states, for example, when an aircraft is taking off or a process system is being shut down. High reliability is the key characteristic not only for safety reasons but also because unplanned shut-down or stoppage is extremely expensive. Reliability is straightforward to measure after performance but is extremely difficult to predict except by reference to the past performance of similar systems. When or if something goes wrong human error is invariably a contributing cause, but it is not necessarily an error on the part of the controller: human errors can originate at the design stage and during setting up and maintenance. It is now accepted that such complex high-technology systems require a considerable and continuous ergonomics input from design to the assessment of any failures that occur.
Quality is related to reliability but is very difficult if not impossible to measure. Traditionally, in batch and flow production systems, quality has been checked by inspection after output, but the current established principle is to combine production and quality maintenance. Thus each operator has parallel responsibility as an inspector. This usually proves to be more effective, but it may mean abandoning work incentives based simply on rate of production. In ergonomic terms it makes sense to treat the operator as a responsible person rather than as a kind of robot programmed for repetitive performance.
Job satisfaction and personal development
From the principle that the worker or human operator should be recognized as a person and not a robot it follows that consideration should be given to responsibilities, attitudes, beliefs and values. This is not easy because there are many variables, mostly detectable but not quantifiable, and there are large individual and cultural differences. Nevertheless a great deal of effort now goes into the design and management of work with the aim of ensuring that the situation is as satisfactory as is reasonably practicable from the operator’s viewpoint. Some measurement is possible by using survey techniques and some principles are available based on such working features as autonomy and empowerment.
Even accepting that these efforts take time and cost money, there can still be considerable dividends from listening to the suggestions, opinions and attitudes of the people actually doing the work. Their approach may differ from that of the external work designer and from the assumptions made by the designer or manager. These differences of view are important and can provide a refreshing change in strategy on the part of everyone involved.
It is well established that the human being is a continuous learner or can be, given the appropriate conditions. The key condition is to provide feedback about past and present performance which can be used to improve future performance. Moreover, such feedback itself acts as an incentive to performance. Thus everyone gains, the performer and those responsible in a wider sense for the performance. It follows that there is much to be gained from performance improvement, including self-development. The principle that personal development should be an aspect of the application of ergonomics requires greater designer and manager skills but, if it can be applied successfully, can improve all the aspects of human performance discussed above.
Successful application of ergonomics often follows from doing no more than developing the appropriate attitude or point of view. The people involved are inevitably the central factor in any human effort and the systematic consideration of their advantages, limitations, needs and aspirations is inherently important.
Conclusion
Ergonomics is the systematic study of people at work with the objective of improving the work situation, the working conditions and the tasks performed. The emphasis is on acquiring relevant and reliable evidence on which to base recommendations for changes in specific situations and on developing more general theories, concepts, guidelines and procedures which will contribute to the continually developing expertise available from ergonomics.
Exposure, Dose and Response
Toxicity is the intrinsic capacity of a chemical agent to affect an organism adversely.
Xenobiotics is a term for “foreign substances”, that is, foreign to the organism. Its opposite is endogenous compounds. Xenobiotics include drugs, industrial chemicals, naturally occurring poisons and environmental pollutants.
Hazard is the potential for the toxicity to be realized in a specific setting or situation.
Risk is the probability that a specific adverse effect will occur. It is often expressed as the percentage of cases in a given population during a specific time period. A risk estimate can be based upon actual cases or upon a projection of future cases derived from extrapolations.
Toxicity rating and toxicity classification can be used for regulatory purposes. Toxicity rating is an arbitrary grading of doses or exposure levels causing toxic effects. The grading can be “supertoxic,” “highly toxic,” “moderately toxic” and so on. The most common ratings concern acute toxicity. Toxicity classification concerns the grouping of chemicals into general categories according to their most important toxic effect. Such categories can include allergenic, neurotoxic, carcinogenic and so on. This classification can be of administrative value as a warning and as information.
The dose-effect relationship is the relationship between dose and effect on the individual level. An increase in dose may increase the intensity of an effect, or a more severe effect may result. A dose-effect curve may be obtained at the level of the whole organism, the cell or the target molecule. Some toxic effects, such as death or cancer, are not graded but are “all or none” effects.
The dose-response relationship is the relationship between dose and the percentage of individuals showing a specific effect. With increasing dose a greater number of individuals in the exposed population will usually be affected.
It is essential to toxicology to establish dose-effect and dose-response relationships. In medical (epidemiological) studies a criterion often used for accepting a causal relationship between an agent and a disease is that effect or response is proportional to dose.
Several dose-response curves can be drawn for a chemical—one for each type of effect. The dose-response curve for most toxic effects (when studied in large populations) has a sigmoid shape. There is usually a low-dose range where there is no response detected; as dose increases, the response follows an ascending curve that will usually reach a plateau at a 100% response. The dose-response curve reflects the variations among individuals in a population. The slope of the curve varies from chemical to chemical and between different types of effects. For some chemicals with specific effects (carcinogens, initiators, mutagens) the dose-response curve might be linear from dose zero within a certain dose range. This means that no threshold exists and that even small doses represent a risk. Above that dose range, the risk may increase at greater than a linear rate.
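As an illustration only (not part of the standard terminology above), the sigmoid shape of a dose-response curve can be sketched with a simple Hill-type function; the ED50 and slope used here are hypothetical values chosen to show the shape, not data for any real chemical.

```python
# Hypothetical sketch of a sigmoid dose-response curve (Hill-type function).
# ed50 and hill_slope are illustrative values, not measured data.

def response_fraction(dose, ed50=10.0, hill_slope=2.0):
    """Fraction of the exposed population showing the effect at a given dose."""
    return dose**hill_slope / (ed50**hill_slope + dose**hill_slope)

for d in (0.1, 1.0, 5.0, 10.0, 20.0, 50.0, 100.0):
    print(f"dose {d:6.1f} mg/kg -> {100 * response_fraction(d):5.1f}% responding")
```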
Variation in exposure during the day and the total length of exposure during one’s lifetime may be as important for the outcome (response) as mean or average or even integrated dose level. High peak exposures may be more harmful than a more even exposure level. This is the case for some organic solvents. On the other hand, for some carcinogens, it has been experimentally shown that the fractionation of a single dose into several exposures with the same total dose may be more effective in producing tumours.
A dose is often expressed as the amount of a xenobiotic entering an organism (in units such as mg/kg body weight). The dose may be expressed in different (more or less informative) ways: exposure dose, which is the air concentration of pollutant inhaled during a certain time period (in work hygiene usually eight hours), or the retained or absorbed dose (in industrial hygiene also called the body burden), which is the amount present in the body at a certain time during or after exposure. The tissue dose is the amount of substance in a specific tissue and the target dose is the amount of substance (usually a metabolite) bound to the critical molecule. The target dose can be expressed as mg chemical bound per mg of a specific macromolecule in the tissue. To apply this concept, information on the mechanism of toxic action on the molecular level is needed. The target dose is more exactly associated with the toxic effect. The exposure dose or body burden may be more easily available, but these are less precisely related to the effect.
In the dose concept a time aspect is often included, even if it is not always expressed. The theoretical dose according to Haber’s law is D = ct, where D is dose, c is concentration of the xenobiotic in the air and t the duration of exposure to the chemical. If this concept is used at the target organ or molecular level, the amount per mg tissue or molecule over a certain time may be used. The time aspect is usually more important for understanding repeated exposures and chronic effects than for single exposures and acute effects.
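A minimal numerical sketch of these dose conventions follows; the concentration, ventilation rate, retention fraction and body weight are hypothetical illustration values, not reference figures.

```python
# Haber's law, D = c * t, plus a rough retained (absorbed) dose estimate.
# All numerical inputs below are hypothetical.

def haber_dose(concentration_mg_m3, hours):
    """Theoretical dose according to Haber's law, D = c * t (mg*h/m3)."""
    return concentration_mg_m3 * hours

def retained_dose_mg_per_kg(concentration_mg_m3, ventilation_m3_h, hours,
                            retention_fraction, body_weight_kg):
    """Air concentration x air volume breathed x fraction retained, per kg body weight."""
    retained_mg = concentration_mg_m3 * ventilation_m3_h * hours * retention_fraction
    return retained_mg / body_weight_kg

print(haber_dose(50, 8))                                        # 400 mg*h/m3 for an 8-hour shift
print(round(retained_dose_mg_per_kg(50, 1.25, 8, 0.5, 70), 1))  # about 3.6 mg/kg retained
```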
Additive effects occur as a result of exposure to a combination of chemicals, where the individual toxicities are simply added to each other (1 + 1 = 2). When chemicals act via the same mechanism, additivity of their effects is assumed, although this is not always the case in reality. Interaction between chemicals may result in an inhibition (antagonism), with a smaller effect than that expected from addition of the effects of the individual chemicals (1 + 1 < 2). Alternatively, a combination of chemicals may produce a more pronounced effect than would be expected by addition (an increased response among individuals or an increase in the frequency of response in a population); this is called synergism (1 + 1 > 2).
Latency time is the time between first exposure and the appearance of a detectable effect or response. The term is often used for carcinogenic effects, where tumours may appear a long time after the start of exposure and sometimes long after the cessation of exposure.
A dose threshold is a dose level below which no observable effect occurs. Thresholds are thought to exist for certain effects, like acute toxic effects; but not for others, like carcinogenic effects (by DNA-adduct-forming initiators). The mere absence of a response in a given population should not, however, be taken as evidence for the existence of a threshold. Absence of response could be due to simple statistical phenomena: an adverse effect occurring at low frequency may not be detectable in a small population.
LD50 (lethal dose 50) is the dose causing 50% lethality in an animal population. The LD50 is often given in older literature as a measure of acute toxicity of chemicals. The higher the LD50, the lower is the acute toxicity. A highly toxic chemical (with a low LD50) is said to be potent. There is no necessary correlation between acute and chronic toxicity. ED50 (effective dose) is the dose causing a specific effect other than lethality in 50% of the animals.
NOEL (NOAEL) means the no observed (adverse) effect level, or the highest dose that does not cause a toxic effect. To establish a NOEL requires multiple doses, a large population and additional information to make sure that absence of a response is not merely a statistical phenomenon. LOEL is the lowest observed effect level on a dose-response curve, or the lowest dose that causes an effect.
A safety factor is a formal, arbitrary number with which one divides the NOEL or LOEL derived from animal experiments to obtain a tentative permissible dose for humans. This is often used in the area of food toxicology, but may be used also in occupational toxicology. A safety factor may also be used for extrapolation of data from small populations to larger populations. Safety factors range from 10⁰ to 10³ (that is, from 1 to 1,000). A safety factor of two may typically be sufficient to protect from a less serious effect (such as irritation) and a factor as large as 1,000 may be used for very serious effects (such as cancer). The term safety factor could be better replaced by the term protection factor or, even, uncertainty factor. The use of the latter term reflects scientific uncertainties, such as whether exact dose-response data can be translated from animals to humans for the particular chemical, toxic effect or exposure situation.
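As a sketch of the arithmetic only, a tentative permissible dose might be derived as follows; the NOAEL and the factor of 100 are hypothetical.

```python
# Hypothetical example: tentative permissible human dose = NOAEL / safety factor.
def tentative_permissible_dose(noael_mg_kg_day, safety_factor):
    return noael_mg_kg_day / safety_factor

print(tentative_permissible_dose(10.0, 100))  # 0.1 mg/kg per day
```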
Extrapolations are theoretical qualitative or quantitative estimates of toxicity (risk extrapolations) derived from translation of data from one species to another or from one set of dose-response data (typically in the high dose range) to regions of dose-response where no data exist. Extrapolations usually must be made to predict toxic responses outside the observation range. Mathematical modelling is used for extrapolations based upon an understanding of the behaviour of the chemical in the organism (toxicokinetic modelling) or based upon the understanding of statistical probabilities that specific biological events will occur (biologically or mechanistically based models). Some national agencies have developed sophisticated extrapolation models as a formalized method to predict risks for regulatory purposes. (See discussion of risk assessment later in the chapter.)
Systemic effects are toxic effects in tissues distant from the route of absorption.
Target organ is the primary or most sensitive organ affected after exposure. The same chemical may affect different target organs depending on the route of exposure, the dose and dose rate, and the sex and species exposed. Interaction between chemicals, or between chemicals and other factors, may affect different target organs as well.
Acute effects occur after limited exposure and shortly (hours, days) after exposure and may be reversible or irreversible.
Chronic effects occur after prolonged exposure (months, years, decades) and/or persist after exposure has ceased.
Acute exposure is an exposure of short duration, while chronic exposure is long-term (sometimes life-long) exposure.
Tolerance to a chemical may develop when repeated exposures result in a lower response than would have been expected without pretreatment.
Uptake and Disposition
Transport processes
Diffusion. In order to enter the organism and reach a site where damage is produced, a foreign substance has to pass several barriers, including cells and their membranes. Most toxic substances pass through membranes passively by diffusion. This may occur for small water-soluble molecules by passage through aqueous channels or, for fat-soluble ones, by dissolution into and diffusion through the lipid part of the membrane. Ethanol, a small molecule that is both water and fat soluble, diffuses rapidly through cell membranes.
Diffusion of weak acids and bases. Weak acids and bases may readily pass membranes in their non-ionized, fat-soluble form while ionized forms are too polar to pass. The degree of ionization of these substances depends on pH. If a pH gradient exists across a membrane they will therefore accumulate on one side. The urinary excretion of weak acids and bases is highly dependent on urinary pH. Foetal or embryonic pH is somewhat higher than maternal pH, causing a slight accumulation of weak acids in the foetus or embryo.
Facilitated diffusion. The passage of a substance may be facilitated by carriers in the membrane. Facilitated diffusion is similar to enzyme processes in that it is protein mediated, highly selective, and saturable. Other substances may inhibit the facilitated transport of xenobiotics.
Active transport. Some substances are actively transported across cell membranes. This transport is mediated by carrier proteins in a process analogous to that of enzymes. Active transport is similar to facilitated diffusion, but it may occur against a concentration gradient. It requires energy input and a metabolic inhibitor can block the process. Most environmental pollutants are not transported actively. One exception is the active tubular secretion and reabsorption of acid metabolites in the kidneys.
Phagocytosis is a process where specialized cells such as macrophages engulf particles for subsequent digestion. This transport process is important, for example, for the removal of particles in the alveoli.
Bulk flow. Substances are also transported in the body along with the movement of air in the respiratory system during breathing, and the movements of blood, lymph or urine.
Filtration. Due to hydrostatic or osmotic pressure water flows in bulk through pores in the endothelium. Any solute that is small enough will be filtered together with the water. Filtration occurs to some extent in the capillary bed in all tissues but is particularly important in the formation of primary urine in the kidney glomeruli.
Absorption
Absorption is the uptake of a substance from the environment into the organism. The term usually includes not only the entrance into the barrier tissue but also the further transport into circulating blood.
Pulmonary absorption. The lungs are the primary route of deposition and absorption of small airborne particles, gases, vapours and aerosols. For highly water-soluble gases and vapours a significant part of the uptake occurs in the nose and the respiratory tree, but for less soluble substances it primarily takes place in the lung alveoli. The alveoli have a very large surface area (about 100 m² in humans). In addition, the diffusion barrier is extremely small, with only two thin cell layers and a distance in the order of micrometers from alveolar air to systemic blood circulation. This makes the lungs very efficient not only in the exchange of oxygen and carbon dioxide but also of other gases and vapours. In general, the diffusion across the alveolar wall is so rapid that it does not limit the uptake. The absorption rate is instead dependent on flow (pulmonary ventilation, cardiac output) and solubility (blood:air partition coefficient). Another important factor is metabolic elimination. The relative importance of these factors for pulmonary absorption varies greatly for different substances. Physical activity results in increased pulmonary ventilation and cardiac output, and decreased liver blood flow (and, hence, biotransformation rate). For many inhaled substances this leads to a marked increase in pulmonary absorption.
Percutaneous absorption. The skin is a very efficient barrier. Apart from its thermoregulatory role, it is designed to protect the organism from micro-organisms, ultraviolet radiation and other deleterious agents, and also against excessive water loss. The diffusion distance in the dermis is on the order of tenths of millimetres. In addition, the keratin layer has a very high resistance to diffusion for most substances. Nevertheless, significant dermal absorption resulting in toxicity may occur for some substances—highly toxic, fat-soluble substances such as organophosphorous insecticides and organic solvents, for example. Significant absorption is likely to occur after exposure to liquid substances. Percutaneous absorption of vapour may be important for solvents with very low vapour pressure and high affinity to water and skin.
Gastrointestinal absorption occurs after accidental or intentional ingestion. Larger particles originally inhaled and deposited in the respiratory tract may be swallowed after mucociliary transport to the pharynx. Practically all soluble substances are efficiently absorbed in the gastrointestinal tract. The low pH of the gut may facilitate absorption, for instance, of metals.
Other routes. In toxicity testing and other experiments, special routes of administration are often used for convenience, although these are rare and usually not relevant in the occupational setting. These routes include intravenous (IV), subcutaneous (sc), intraperitoneal (ip) and intramuscular (im) injections. In general, substances are absorbed at a higher rate and more completely by these routes, especially after IV injection. This leads to short-lasting but high concentration peaks that may increase the toxicity of a dose.
Distribution
The distribution of a substance within the organism is a dynamic process which depends on uptake and elimination rates, as well as the blood flow to the different tissues and their affinities for the substance. Water-soluble, small, uncharged molecules, univalent cations, and most anions diffuse easily and will eventually reach a relatively even distribution in the body.
Volume of distribution is the amount of a substance in the body at a given time, divided by the concentration in blood, plasma or serum at that time. The value has no meaning as a physical volume, as many substances are not uniformly distributed in the organism. A volume of distribution of less than one l/kg body weight indicates preferential distribution in the blood (or serum or plasma), whereas a value above one indicates a preference for peripheral tissues such as adipose tissue for fat soluble substances.
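A minimal numerical sketch of this definition follows; the amounts and concentrations are hypothetical.

```python
# Volume of distribution = amount in the body / concentration in plasma,
# here expressed per kg body weight; all figures are hypothetical.
def volume_of_distribution_l_per_kg(amount_mg, plasma_conc_mg_per_l, body_weight_kg):
    return (amount_mg / plasma_conc_mg_per_l) / body_weight_kg

# A value well above 1 l/kg suggests preferential distribution to peripheral
# tissues (e.g., adipose tissue for fat-soluble substances).
print(volume_of_distribution_l_per_kg(700.0, 2.0, 70.0))  # 5.0 l/kg
```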
Accumulation is the build-up of a substance in a tissue or organ to higher levels than in blood or plasma. It may also refer to a gradual build-up over time in the organism. Many xenobiotics are highly fat soluble and tend to accumulate in adipose tissue, while others have a special affinity for bone. For example, calcium in bone may be exchanged for cations of lead, strontium, barium and radium, and hydroxyl groups in bone may be exchanged for fluoride.
Barriers. The blood vessels in the brain, testes and placenta have special anatomical features that inhibit passage of large molecules like proteins. These features, often referred to as blood-brain, blood-testes, and blood-placenta barriers, may give the false impression that they prevent passage of any substance. These barriers are of little or no importance for xenobiotics that can diffuse through cell membranes.
Blood binding. Substances may be bound to red blood cells or plasma components, or occur unbound in blood. Carbon monoxide, arsenic, organic mercury and hexavalent chromium have a high affinity for red blood cells, while inorganic mercury and trivalent chromium show a preference for plasma proteins. A number of other substances also bind to plasma proteins. Only the unbound fraction is available for filtration or diffusion into eliminating organs. Blood binding may therefore increase the residence time in the organism but decrease uptake by target organs.
Elimination
Elimination is the disappearance of a substance in the body. Elimination may involve excretion from the body or transformation to other substances not captured by a specific method of measurement. The rate of disappearance may be expressed by the elimination rate constant, biological half-time or clearance.
Concentration-time curve. The curve of concentration in blood (or plasma) versus time is a convenient way of describing uptake and disposition of a xenobiotic.
Area under the curve (AUC) is the integral of concentration in blood (plasma) over time. When metabolic saturation and other non-linear processes are absent, AUC is proportional to the absorbed amount of substance.
Biological half-time (or half-life) is the time needed after the end of exposure to reduce the amount in the organism to one-half. As it is often difficult to assess the total amount of a substance, measurements such as the concentration in blood (plasma) are used. The half-time should be used with caution, as it may change, for example, with dose and length of exposure. In addition, many substances have complex decay curves with several half-times.
Bioavailability is the fraction of an administered dose entering the systemic circulation. In the absence of presystemic clearance, or first-pass metabolism, the fraction is one. In oral exposure presystemic clearance may be due to metabolism within the gastrointestinal content, gut wall or liver. First-pass metabolism will reduce the systemic absorption of the substance and instead increase the absorption of metabolites. This may lead to a different toxicity pattern.
Clearance is the volume of blood (plasma) per unit time completely cleared of a substance. To distinguish from renal clearance, for example, the prefix total, metabolic or blood (plasma) is often added.
Intrinsic clearance is the capacity of endogenous enzymes to transform a substance, and is also expressed in volume per unit time. If the intrinsic clearance in an organ is much lower than the blood flow, the metabolism is said to be capacity limited. Conversely, if the intrinsic clearance is much higher than the blood flow, the metabolism is flow limited.
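For first-order, one-compartment kinetics the quantities defined above are linked by a few simple relationships; the sketch below assumes such kinetics and uses hypothetical numbers, whereas many real substances show multi-exponential behaviour.

```python
# First-order, one-compartment relationships between half-time, clearance and AUC.
# Hypothetical values for illustration only.
import math

def elimination_rate_constant(half_time_h):
    """k = ln 2 / biological half-time."""
    return math.log(2) / half_time_h

def total_clearance_l_h(k_per_h, vd_l):
    """Clearance = elimination rate constant x volume of distribution."""
    return k_per_h * vd_l

def auc_mg_h_per_l(absorbed_dose_mg, clearance_l_h):
    """Without saturation or other non-linearities, AUC = absorbed dose / clearance."""
    return absorbed_dose_mg / clearance_l_h

k = elimination_rate_constant(6.0)   # half-time of 6 h
cl = total_clearance_l_h(k, 42.0)    # volume of distribution of 42 l
print(round(k, 3), "per h;", round(cl, 2), "l/h;",
      round(auc_mg_h_per_l(100.0, cl), 1), "mg*h/l")
```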
Excretion
Excretion is the exit of a substance and its biotransformation products from the organism.
Excretion in urine and bile. The kidneys are the most important excretory organs. Some substances, especially acids with high molecular weights, are excreted with bile. A fraction of biliary excreted substances may be reabsorbed in the intestines. This process, enterohepatic circulation, is common for conjugated substances following intestinal hydrolysis of the conjugate.
Other routes of excretion. Some substances, such as organic solvents and breakdown products such as acetone, are volatile enough so that a considerable fraction may be excreted by exhalation after inhalation. Small water-soluble molecules as well as fat-soluble ones are readily secreted to the foetus via the placenta, and into milk in mammals. For the mother, lactation can be a quantitatively important excretory pathway for persistent fat-soluble chemicals. The offspring may be secondarily exposed via the mother during pregnancy as well as during lactation. Water-soluble compounds may to some extent be excreted in sweat and saliva. These routes are generally of minor importance. However, as a large volume of saliva is produced and swallowed, saliva excretion may contribute to reabsorption of the compound. Some metals such as mercury are excreted by binding permanently to the sulphydryl groups of the keratin in the hair.
Toxicokinetic models
Mathematical models are important tools to understand and describe the uptake and disposition of foreign substances. Most models are compartmental, that is, the organism is represented by one or more compartments. A compartment is a chemically and physically theoretical volume in which the substance is assumed to distribute homogeneously and instantaneously. Simple models may be expressed as a sum of exponential terms, while more complicated ones require numerical procedures on a computer for their solution. Models may be subdivided into two categories, descriptive and physiological.
In descriptive models, fitting to measured data is performed by changing the numerical values of the model parameters or even the model structure itself. The model structure normally has little to do with the structure of the organism. Advantages of the descriptive approach are that few assumptions are made and that there is no need for additional data. A disadvantage of descriptive models is their limited usefulness for extrapolations.
Physiological models are constructed from physiological, anatomical and other independent data. The model is then refined and validated by comparison with experimental data. An advantage of physiological models is that they can be used for extrapolation purposes. For example, the influence of physical activity on the uptake and disposition of inhaled substances may be predicted from known physiological adjustments in ventilation and cardiac output. A disadvantage of physiological models is that they require a large amount of independent data.
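As a sketch of the simplest compartmental description mentioned above (one compartment, first-order elimination, expressed as exponential terms), the body burden during and after a constant-rate exposure can be written as follows; the intake rate and rate constant are hypothetical.

```python
# One-compartment model: constant intake during exposure, first-order washout after.
# Parameters below are hypothetical illustration values.
import math

def body_burden_mg(t_h, intake_mg_per_h, k_per_h, exposure_h):
    """Amount in the body at time t (hours after the start of exposure)."""
    if t_h <= exposure_h:
        return (intake_mg_per_h / k_per_h) * (1 - math.exp(-k_per_h * t_h))
    at_end = (intake_mg_per_h / k_per_h) * (1 - math.exp(-k_per_h * exposure_h))
    return at_end * math.exp(-k_per_h * (t_h - exposure_h))

for t in (2, 4, 8, 16, 24):
    print(t, "h:", round(body_burden_mg(t, intake_mg_per_h=5.0, k_per_h=0.1, exposure_h=8), 1), "mg")
```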
Biotransformation
Biotransformation is a process which leads to a metabolic conversion of foreign compounds (xenobiotics) in the body. The process is often referred to as metabolism of xenobiotics. As a general rule metabolism converts lipid-soluble xenobiotics to large, water-soluble metabolites that can be effectively excreted.
The liver is the main site of biotransformation. All xenobiotics taken up from the intestine are transported to the liver by a single blood vessel (vena porta). If taken up in small quantities a foreign substance may be completely metabolized in the liver before reaching the general circulation and other organs (first pass effect). Inhaled xenobiotics are distributed via the general circulation to the liver. In that case only a fraction of the dose is metabolized in the liver before reaching other organs.
Liver cells contain several enzymes that oxidize xenobiotics. This oxidation generally activates the compound—it becomes more reactive than the parent molecule. In most cases the oxidized metabolite is further metabolized by other enzymes in a second phase. These enzymes conjugate the metabolite with an endogenous substrate, so that the molecule becomes larger and more polar. This facilitates excretion.
Enzymes that metabolize xenobiotics are also present in other organs such as the lungs and kidneys. In these organs they may play specific and qualitatively important roles in the metabolism of certain xenobiotics. Metabolites formed in one organ may be further metabolized in a second organ. Bacteria in the intestine may also participate in biotransformation.
Metabolites of xenobiotics can be excreted by the kidneys or via the bile. They can also be exhaled via the lungs, or bound to endogenous molecules in the body.
The relationship between biotransformation and toxicity is complex. Biotransformation can be seen as a necessary process for survival. It protects the organism against toxicity by preventing accumulation of harmful substances in the body. However, reactive intermediary metabolites may be formed in biotransformation, and these are potentially harmful. This is called metabolic activation. Thus, biotransformation may also induce toxicity. Oxidized, intermediary metabolites that are not conjugated can bind to and damage cellular structures. If, for example, a xenobiotic metabolite binds to DNA, a mutation can be induced (see “Genetic toxicology”). If the biotransformation system is overloaded, a massive destruction of essential proteins or lipid membranes may occur. This can result in cell death (see “Cellular injury and cellular death”).
Metabolism is a word often used interchangeably with biotransformation. It denotes chemical breakdown or synthesis reactions catalyzed by enzymes in the body. Nutrients from food, endogenous compounds, and xenobiotics are all metabolized in the body.
Metabolic activation means that a less reactive compound is converted to a more reactive molecule. This usually occurs during Phase 1 reactions.
Metabolic inactivation means that an active or toxic molecule is converted to a less active metabolite. This usually occurs during Phase 2 reactions. In certain cases an inactivated metabolite might be reactivated, for example by enzymatic cleavage.
Phase 1 reaction refers to the first step in xenobiotic metabolism. It usually means that the compound is oxidized. Oxidation usually makes the compound more water soluble and facilitates further reactions.
Cytochrome P450 enzymes are a group of enzymes that preferentially oxidize xenobiotics in Phase 1 reactions. The different enzymes are specialized for handling specific groups of xenobiotics with certain characteristics. Endogenous molecules are also substrates. Cytochrome P450 enzymes are induced by xenobiotics in a specific fashion. Obtaining induction data on cytochrome P450 can be informative about the nature of previous exposures (see “Genetic determinants of toxic response”).
Phase 2 reaction refers to the second step in xenobiotic metabolism. It usually means that the oxidized compound is conjugated with (coupled to) an endogenous molecule. This reaction increases the water solubility further. Many conjugated metabolites are actively excreted via the kidneys.
Transferases are a group of enzymes that catalyze Phase 2 reactions. They conjugate xenobiotics with endogenous compounds such as glutathione, amino acids, glucuronic acid or sulphate.
Glutathione is an endogenous molecule, a tripeptide, that is conjugated with xenobiotics in Phase 2 reactions. It is present in all cells (and in liver cells in high concentrations), and usually protects from activated xenobiotics. When glutathione is depleted, toxic reactions between activated xenobiotic metabolites and proteins, lipids or DNA may occur.
Induction means that enzymes involved in biotransformation are increased (in activity or amount) as a response to xenobiotic exposure. In some cases within a few days enzyme activity can be increased several fold. Induction is often balanced so that both Phase 1 and Phase 2 reactions are increased simultaneously. This may lead to a more rapid biotransformation and can explain tolerance. In contrast, unbalanced induction may increase toxicity.
Inhibition of biotransformation can occur if two xenobiotics are metabolized by the same enzyme. The two substrates have to compete, and usually one of the substrates is preferred. In that case the second substrate is not metabolized, or only slowly metabolized. As with induction, inhibition may increase as well as decrease toxicity.
Oxygen activation can be triggered by metabolites of certain xenobiotics. They may auto-oxidize, producing activated oxygen species. These oxygen-derived species, which include superoxide, hydrogen peroxide and the hydroxyl radical, may damage DNA, lipids and proteins in cells. Oxygen activation is also involved in inflammatory processes.
Genetic variability between individuals is seen in many genes coding for Phase 1 and Phase 2 enzymes. Genetic variability may explain why certain individuals are more susceptible to toxic effects of xenobiotics than others.
Decisions affecting the health, well-being, and employability of individual workers, or an employer’s approach to health and safety issues, must be based on data of good quality. This is especially so in the case of biological monitoring data, and it is therefore the responsibility of any laboratory undertaking analytical work on biological specimens from working populations to ensure the reliability, accuracy and precision of its results. This responsibility extends from providing suitable methods and guidance for specimen collection to ensuring that the results are returned to the health professional responsible for the care of the individual worker in a suitable form. All these activities are covered by the expression quality assurance.
The central activity in a quality assurance programme is the control and maintenance of analytical accuracy and precision. Biological monitoring laboratories have often developed in a clinical environment and have taken quality assurance techniques and philosophies from the discipline of clinical chemistry. Indeed, measurements of toxic chemicals and biological effect indicators in blood and urine are essentially no different from those made in clinical chemistry and in clinical pharmacology service laboratories found in any major hospital.
A quality assurance programme for an individual analyst starts with the selection and establishment of a suitable method. The next stage is the development of an internal quality control procedure to maintain precision; the laboratory needs then to satisfy itself of the accuracy of the analysis, and this may well involve external quality assessment (see below). It is important to recognize however, that quality assurance includes more than these aspects of analytical quality control.
Method Selection
There are several texts presenting analytical methods in biological monitoring. Although these give useful guidance, much needs to be done by the individual analyst before data of suitable quality can be produced. Central to any quality assurance programme is the production of a laboratory protocol that must specify in detail those parts of the method which have the most bearing on its reliability, accuracy, and precision. Indeed, national accreditation of laboratories in clinical chemistry, toxicology, and forensic science is usually dependent on the quality of the laboratory’s protocols. Development of a suitable protocol is usually a time-consuming process. If a laboratory wishes to establish a new method, it is often most cost-effective to obtain from an existing laboratory a protocol that has proved its performance, for example, through validation in an established international quality assurance programme. Should the new laboratory be committed to a specific analytical technique, for example gas chromatography rather than high-performance liquid chromatography, it is often possible to identify a laboratory that has a good performance record and that uses the same analytical approach. Laboratories can often be identified through journal articles or through organizers of various national quality assessment schemes.
Internal Quality Control
The quality of analytical results depends on the precision of the method achieved in practice, and this in turn depends on close adherence to a defined protocol. Precision is best assessed by the inclusion of “quality control samples” at regular intervals during an analytical run. For example, for control of blood lead analyses, quality control samples are introduced into the run after every six or eight actual worker samples. More stable analytical methods can be monitored with fewer quality control samples per run. The quality control samples for blood lead analysis are prepared from 500 ml of blood (human or bovine) to which inorganic lead is added; individual aliquots are stored at low temperature (Bullock, Smith and Whitehead 1986). Before each new batch is put into use, 20 aliquots are analysed in separate runs on different occasions to establish the mean result for this batch of quality control samples, as well as its standard deviation (Whitehead 1977). These two figures are used to set up a Shewhart control chart (figure 27.2). The results from the analysis of the quality control samples included in subsequent runs are plotted on the chart. The analyst then uses rules for acceptance or rejection of an analytical run depending on whether the results of these samples fall within two or three standard deviations (SD) of the mean. A sequence of rules, validated by computer modelling, has been suggested by Westgard et al. (1981) for application to control samples. This approach to quality control is described in textbooks of clinical chemistry and a simple approach to the introduction of quality assurance is set forth in Whitehead (1977). It must be emphasized that these techniques of quality control depend on the preparation and analysis of quality control samples separately from the calibration samples that are used on each analytical occasion.
Figure 27.2 Shewhart control chart for quality control samples
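The acceptance logic described above can be sketched in a few lines; this is an illustration of the general 2 SD/3 SD principle rather than the published Westgard rule set, and the batch mean and standard deviation shown are hypothetical.

```python
# Simple Shewhart-chart check for a quality control result: warn outside
# mean +/- 2 SD, reject the run outside mean +/- 3 SD. Hypothetical figures.

def check_qc_result(result, batch_mean, batch_sd):
    deviation = abs(result - batch_mean)
    if deviation > 3 * batch_sd:
        return "reject run (outside 3 SD)"
    if deviation > 2 * batch_sd:
        return "warning (outside 2 SD)"
    return "accept"

batch_mean, batch_sd = 2.0, 0.1   # e.g., a blood lead QC batch, arbitrary units
for qc in (2.05, 2.22, 2.35):
    print(qc, "->", check_qc_result(qc, batch_mean, batch_sd))
```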
This approach can be adapted to a range of biological monitoring or biological effect monitoring assays. Batches of blood or urine samples can be prepared by addition of either the toxic material or the metabolite that is to be measured. Similarly, blood, serum, plasma, or urine can be aliquotted and stored deep-frozen or freeze-dried for measurement of enzymes or proteins. However, care has to be taken to avoid infective risk to the analyst from samples based on human blood.
Careful adherence to a well-defined protocol and to rules for acceptability is an essential first stage in a quality assurance programme. Any laboratory must be prepared to discuss its quality control and quality assessment performance with the health professionals using it and to investigate surprising or unusual findings.
External Quality Assessment
Once a laboratory has established that it can produce results with adequate precision, the next stage is to confirm the accuracy (“trueness”) of the measured values, that is, the relationship of the measurements made to the actual amount present. This is a difficult exercise for a laboratory to do on its own but can be achieved by taking part in a regular external quality assessment scheme. These have been an essential part of clinical chemistry practice for some time but have not been widely available for biological monitoring. The exception is blood lead analysis, where schemes have been available since the 1970s (e.g., Bullock, Smith and Whitehead 1986). Comparison of analytical results with those reported from other laboratories analysing samples from the same batch allows assessment of a laboratory’s performance compared with others, as well as a measure of its accuracy. Several national and international quality assessment schemes are available. Many of these schemes welcome new laboratories, as the validity of the mean of the results of an analyte from all the participating laboratories (taken as a measure of the actual concentration) increases with the number of participants. Schemes with many participants are also more able to analyse laboratory performance according to analytical method and thus advise on alternatives to methods with poor performance characteristics. In some countries, participation in such a scheme is an essential part of laboratory accreditation. Guidelines for external quality assessment scheme design and operation have been published by the WHO (1981).
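One common way of expressing a laboratory's result against the consensus of participating laboratories is a z-score; the sketch below is illustrative and does not reproduce the scoring rule of any particular scheme, and all figures are hypothetical.

```python
# z-score against the consensus of participating laboratories (hypothetical values).
def eqa_z_score(lab_result, consensus_mean, consensus_sd):
    return (lab_result - consensus_mean) / consensus_sd

z = eqa_z_score(lab_result=2.3, consensus_mean=2.0, consensus_sd=0.12)
print(round(z, 1))  # 2.5; an absolute z-score of 2 or less is often regarded as satisfactory
```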
In the absence of established external quality assessment schemes, accuracy may be checked using certified reference materials which are available on a commercial basis for a limited range of analytes. The advantages of samples circulated by external quality assessment schemes are that (1) the analyst does not have fore-knowledge of the result, (2) a range of concentrations is presented, and (3) as definitive analytical methods do not have to be employed, the materials involved are cheaper.
Pre-analytical Quality Control
Effort spent in attaining good laboratory accuracy and precision is wasted if the samples presented to the laboratory have not been taken at the correct time, if they have suffered contamination, have deteriorated during transport, or have been inadequately or incorrectly labelled. It is also bad professional practice to submit individuals to invasive sampling without taking adequate care of the sampled materials. Although sampling is often not under the direct control of the laboratory analyst, a full quality programme of biological monitoring must take these factors into account, and the laboratory should ensure that the syringes and sample containers it provides are free from contamination and are accompanied by clear instructions on sampling technique, sample storage and transport. The importance of the correct sampling time within the shift or working week, and its dependence on the toxicokinetics of the sampled material, are now recognized (ACGIH 1993; HSE 1992), and this information should be made available to the health professionals responsible for collecting the samples.
Post-analytical Quality Control
High-quality analytical results may be of little use to the individual or health professional if they are not communicated to the professional in an interpretable form and at the right time. Each biological monitoring laboratory should develop reporting procedures for alerting the health care professional submitting the samples to abnormal, unexpected, or puzzling results in time to allow appropriate action to be taken. Interpretation of laboratory results, especially changes in concentration between successive samples, often depends on knowledge of the precision of the assay. As part of total quality management from sample collection to return of results, health professionals should be given information concerning the biological monitoring laboratory’s precision and accuracy, as well as reference ranges and advisory and statutory limits, in order to help them in interpreting the results.
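One way of using knowledge of assay precision when interpreting successive results is the "critical difference" (reference change value); the rule sketched below considers analytical variation only and is offered as an illustration, not as a requirement of the text.

```python
# Is the change between two successive results larger than assay imprecision
# alone could explain? Critical difference = z * sqrt(2) * analytical SD.
import math

def change_exceeds_imprecision(result_1, result_2, analytical_sd, z=1.96):
    critical_difference = z * math.sqrt(2) * analytical_sd
    return abs(result_2 - result_1) > critical_difference

print(change_exceeds_imprecision(1.5, 1.9, analytical_sd=0.1))  # True: 0.4 > ~0.28
```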
It is difficult to speak of work analysis without putting it in the perspective of recent changes in the industrial world, because the nature of activities and the conditions in which they are carried out have undergone considerable evolution in recent years. The factors giving rise to these changes have been numerous, but there are two whose impact has proved crucial. On the one hand, technological progress with its ever-quickening pace and the upheavals brought about by information technologies have revolutionized jobs (De Keyser 1986). On the other hand, the uncertainty of the economic market has required more flexibility in personnel management and work organization. If the workers have gained a wider view of the production process that is less routine-oriented and undoubtedly more systematic, they have at the same time lost exclusive links with an environment, a team, a production tool. It is difficult to view these changes with serenity, but we have to face the fact that a new industrial landscape has been created, sometimes more enriching for those workers who can find their place in it, but also filled with pitfalls and worries for those who are marginalized or excluded. However, one idea is being taken up in firms and has been confirmed by pilot experiments in many countries: it should be possible to guide changes and soften their adverse effects with the use of relevant analyses and by using all resources for negotiation between the different work actors. It is within this context that we must place work analyses today—as tools allowing us to describe tasks and activities better in order to guide interventions of different kinds, such as training, the setting up of new organizational modes or the design of tools and work systems. We speak of analyses, and not just one analysis, since there exist a large number of them, depending on the theoretical and cultural contexts in which they are developed, the particular goals they pursue, the evidence they collect, or the analyser’s concern for either specificity or generality. In this article, we will limit ourselves to presenting a few characteristics of work analyses and emphasizing the importance of collective work. Our conclusions will highlight other paths that the limits of this text prevent us from pursuing in greater depth.
Some Characteristics of Work Analyses
The context
If the primary goal of any work analysis is to describe what the operator does, or should do, placing it more precisely into its context has often seemed indispensable to researchers. They mention, according to their own views, but in a broadly similar manner, the concepts of context, situation, environment, work domain, work world or work environment. The problem lies less in the nuances between these terms than in the selection of variables that need to be described in order to give them a useful meaning. Indeed, the world is vast and the industry is complex, and the characteristics that could be referred to are innumerable. Two tendencies can be noted among authors in the field. The first one sees the description of the context as a means of capturing the reader’s interest and providing him or her with an adequate semantic framework. The second has a different theoretical perspective: it attempts to embrace both context and activity, describing only those elements of the context that are capable of influencing the behavior of operators.
The semantic framework
Context has evocative power. It is enough, for an informed reader, to read about an operator in a control room engaged in a continuous process to call up a picture of work through commands and surveillance at a distance, where the tasks of detection, diagnosis, and regulation predominate. What variables need to be described in order to create a sufficiently meaningful context? It all depends on the reader. Nonetheless, there is a consensus in the literature on a few key variables. The nature of the economic sector, the type of production or service, the size and the geographical location of the site are useful.
The production processes, the tools or machines and their level of automation allow certain constraints and certain necessary qualifications to be guessed at. The structure of the personnel, together with age and level of qualification and experience are crucial data whenever the analysis concerns aspects of training or of organizational flexibility. The organization of work established depends more on the firm’s philosophy than on technology. Its description includes, notably, work schedules, the degree of centralization of decisions and the types of control exercised over the workers. Other elements may be added in different cases. They are linked to the firm’s history and culture, its economic situation, work conditions, and any restructuring, mergers, and investments. There exist at least as many systems of classification as there are authors, and there are numerous descriptive lists in circulation. In France, a special effort has been made to generalize simple descriptive methods, notably allowing for the ranking of certain factors according to whether or not they are satisfactory for the operator (RNUR 1976; Guelaud et al. 1977).
The description of relevant factors regarding the activity
The taxonomy of complex systems described by Rasmussen, Pejtersen, and Schmidts (1990) represents one of the most ambitious attempts to cover at the same time the context and its influence on the operator. Its main idea is to integrate, in a systematic fashion, the different elements of which it is composed and to bring out the degrees of freedom and the constraints within which individual strategies can be developed. Its exhaustive aim makes it difficult to manipulate, but the use of multiple modes of representation, including graphs, to illustrate the constraints has a heuristic value that is bound to be attractive to many readers. Other approaches are more targeted. What the authors seek is the selection of factors that can influence a precise activity. Hence, with an interest in the control of processes in a changing environment, Brehmer (1990) proposes a series of temporal characteristics of the context which affect the control and anticipation of the operator (see figure 1). This author’s typology has been developed from “micro-worlds”, computerized simulations of dynamic situations, but the author himself, along with many others since, used it for the continuous-process industry (Van Daele 1992). For certain activities, the influence of the environment is well known, and the selection of factors is not too difficult. Thus, if we are interested in heart rate in the work environment, we often limit ourselves to describing the air temperatures, the physical constraints of the task or the age and training of the subject—even though we know that by doing so we perhaps leave out relevant elements. For others, the choice is more difficult. Studies on human error, for example, show that the factors capable of producing them are numerous (Reason 1989). Sometimes, when theoretical knowledge is insufficient, only statistical processing, combining context and activity analysis, allows us to bring out the relevant contextual factors (Fadier 1990).
Figure 1. The criteria and sub-criteria of the taxonomy of micro-worlds proposed by Brehmer (1990)
The Task or the Activity?
The task
The task is defined by its objectives, its constraints and the means it requires for achievement. A function within the firm is generally characterized by a set of tasks. The realized task differs from the prescribed task scheduled by the firm for a large number of reasons: the strategies of operators vary within and among individuals, the environment fluctuates and random events require responses that are often outside the prescribed framework. Finally, the task is not always scheduled with the correct knowledge of its conditions of execution, hence the need for adaptations in real-time. But even if the task is updated during the activity, sometimes to the point of being transformed, it still remains the central reference.
Questionnaires, inventories, and taxonomies of tasks are numerous, especially in the English-language literature—the reader will find excellent reviews in Fleishman and Quaintance (1984) and in Greuter and Algera (1989). Certain of these instruments are merely lists of elements—for example, the action verbs to illustrate tasks—that are checked off according to the function studied. Others have adopted a hierarchical principle, characterizing a task as interlocking elements, ordered from the global to the particular. These methods are standardized and can be applied to a large number of functions; they are simple to use, and the analytical stage is much shortened. But where it is a question of defining specific work, they are too static and too general to be useful.
Next, there are those instruments requiring more skill on the part of the researcher; since the elements of analysis are not predefined, it is up to the researcher to characterize them. The already outdated critical incident technique of Flanagan (1954), where the observer describes a function by reference to its difficulties and identifies the incidents which the individual will have to face, belongs to this group.
It is also the path adopted by cognitive task analysis (Roth and Woods 1988). This technique aims to bring to light the cognitive requirements of a job. One way to do this is to break the job down into goals, constraints and means. Figure 2 shows how the task of an anesthetist, characterized first by a very global goal of patient survival, can be broken down into a series of sub-goals, which can themselves be classified as actions and means to be employed. More than 100 hours of observation in the operating theatre and subsequent interviews with anesthetists were necessary to obtain this synoptic “photograph” of the requirements of the function. This technique, although quite laborious, is nevertheless useful in ergonomics in determining whether all the goals of a task are provided with the means of attaining them. It also allows for an understanding of the complexity of a task (its particular difficulties and conflicting goals, for example) and facilitates the interpretation of certain human errors. But it suffers, as do other methods, from the absence of a descriptive language (Grant and Mayes 1991). Moreover, it does not permit hypotheses to be formulated as to the nature of the cognitive processes brought into play to attain the goals in question.
Figure 2. Cognitive analysis of the task: general anesthesia
Other approaches have analyzed the cognitive processes associated with given tasks by drawing up hypotheses as to the information processing necessary to accomplish them. A frequently employed cognitive model of this kind is Rasmussen’s (1986), which provides, according to the nature of the task and its familiarity for the subject, three possible levels of activity, based either on skill-based habits and reflexes, on acquired rule-based procedures or on knowledge-based procedures. But other models or theories that reached the height of their popularity during the 1970s remain in use. Hence, the theory of optimal control, which considers man as a controller of discrepancies between assigned and observed goals, is sometimes still applied to cognitive processes. And modeling by means of networks of interconnected tasks and flow charts continues to inspire the authors of cognitive task analysis; figure 3 provides a simplified description of the behavioral sequences in an energy-control task, constructing a hypothesis about certain mental operations. All these attempts reflect the concern of researchers to bring together in the same description not only elements of the context but also the task itself and the cognitive processes that underlie it—and to reflect the dynamic character of work as well.
Figure 3. A simplified description of the determinants of a behavior sequence in energy control tasks: a case of unacceptable consumption of energy
Since the arrival of the scientific organization of work, the concept of the prescribed task has been adversely criticized because it has been viewed as involving the imposition on workers of tasks that are not only designed without consulting their needs but are often accompanied by a specific performance time, a restriction not welcomed by many workers. Even if the imposition aspect has become rather more flexible today and even if the workers contribute more often to the design of tasks, an assigned time for tasks remains necessary for schedule planning and remains an essential component of work organization. The quantification of time should not always be perceived in a negative manner. It constitutes a valuable indicator of workload. A simple but common method of measuring the time pressure exerted on a worker consists of determining the quotient of the time necessary for the execution of a task divided by the available time. The closer this quotient is to unity, the greater the pressure (Wickens 1992). Moreover, quantification can be used in flexible but appropriate personnel management. Let us take the case of nurses, where the technique of predictive analysis of tasks has been generalized, for example, in the Canadian regulation Planning of Required Nursing (PRN 80) (Kepenne 1984) or one of its European variants. Thanks to such task lists, accompanied by their mean times of execution, one can, each morning, taking into account the number of patients and their medical conditions, establish a care schedule and a distribution of personnel. Far from being a constraint, PRN 80 has, in a number of hospitals, demonstrated that a shortage of nursing personnel exists, since the technique allows a difference to be established (see figure 4) between the desired and the observed, that is, between the number of staff necessary and the number available, and even between the tasks planned and the tasks carried out. The times calculated are only averages, and the fluctuations in the situation do not always make them applicable, but this negative aspect is minimized by a flexible organization that accepts adjustments and allows the personnel to participate in effecting those adjustments.
Figure 4. Discrepancies between the numbers of personnel present and required on the basis of PRN80
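The two quantifications mentioned above (the time-pressure quotient after Wickens 1992, and the required-versus-available comparison underlying PRN-type scheduling) reduce to simple arithmetic; the task times and staffing figures below are hypothetical.

```python
# Time-pressure quotient and a PRN-style required-vs-available comparison.
# All task times and staffing figures are hypothetical.

def time_pressure(time_required_min, time_available_min):
    """The closer this quotient is to 1, the greater the time pressure."""
    return time_required_min / time_available_min

def staffing_gap_hours(task_times_min, staff_on_duty, shift_hours=8):
    """Care hours required (sum of mean task times) minus hours available."""
    required_h = sum(task_times_min) / 60
    available_h = staff_on_duty * shift_hours
    return required_h - available_h

print(round(time_pressure(50, 60), 2))                  # 0.83
print(staffing_gap_hours([30] * 120, staff_on_duty=7))  # 4.0 h short (60 h needed, 56 h available)
```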
The activity, the evidence, and the performance
An activity is defined as the set of behaviors and resources used by the operator so that work occurs—that is to say, the transformation or production of goods or the rendering of a service. This activity can be understood through observation in different ways. Faverge (1972) has described four forms of analysis. The first is an analysis in terms of gestures and postures, where the observer locates, within the visible activity of the operator, classes of behavior that are recognizable and repeated during work. These activities are often coupled with a precise response: for example, the heart rate, which allows us to assess the physical load associated with each activity. The second form of analysis is in terms of information uptake. What is discovered, through direct observation—or with the aid of cameras or recorders of eye movements—is the set of signals picked up by the operator in the information field surrounding him or her. This analysis is particularly useful in cognitive ergonomics in trying to better understand the information processing carried out by the operator. The third type of analysis is in terms of regulation. The idea is to identify the adjustments of activity carried out by the operator in order to deal with either fluctuation in the environment or changes in his own condition. There we find the direct intervention of context within the analysis. One of the most frequently cited research projects in this area is that of Sperandio (1972). This author studied the activity of air traffic controllers and identified important strategy changes during an increase in air traffic. He interpreted them as an attempt to simplify the activity by aiming to maintain an acceptable load level, while at the same time continuing to meet the requirements of the task. The fourth is an analysis in terms of thought processes. This type of analysis has been widely used in the ergonomics of highly automated posts. Indeed, the design of computerized aids and notably intelligent aids for the operator requires a thorough understanding of the way in which the operator reasons in order to solve certain problems. The reasoning involved in scheduling, anticipation, and diagnosis has been the subject of analyses, an example of which can be found in figure 5. However, evidence of mental activity can only be inferred. Apart from certain observable aspects of behavior, such as eye movements and problem-solving time, most of these analyses resort to the verbal response. Particular emphasis has been placed, in recent years, on the knowledge necessary to accomplish certain activities, with researchers trying not to postulate them at the outset but to make them apparent through the analysis itself.
Figure 5. Analysis of mental activity. Strategies in the control of processes with long response times: the need for computerized support in diagnosis
Such efforts have brought to light the fact that almost identical performances can be obtained with very different levels of knowledge, as long as operators are aware of their limits and apply strategies adapted to their capabilities. Thus, in our study of the start-up of a thermoelectric plant (De Keyser and Housiaux 1989), the start-ups were carried out by both engineers and operators. The theoretical and procedural knowledge of these two groups, elicited by means of interviews and questionnaires, was very different. The operators in particular sometimes had an erroneous understanding of the variables in the functional links of the process. In spite of this, the performances of the two groups were very close. But the operators took into account more variables in order to verify the control of the start-up and undertook more frequent verifications. Such results were also obtained by Amalberti (1991), who mentioned the existence of metaknowledge allowing experts to manage their own resources.
What evidence of activity is appropriate to elicit? Its nature, as we have seen, depends closely on the form of analysis planned. Its form varies according to the degree of methodological care exercised by the observer. Provoked evidence is distinguished from spontaneous evidence and concomitant from subsequent evidence. Generally speaking, when the nature of the work allows, concomitant and spontaneous evidence are to be preferred. They are free of various drawbacks such as the unreliability of memory, observer interference, the effect of rationalizing reconstruction on the part of the subject, and so forth. To illustrate these distinctions, we will take the example of verbalizations. Spontaneous verbalizations are verbal exchanges, or monologues expressed spontaneously without being requested by the observer; provoked verbalizations are those made at the specific request of the observer, such as the request made to the subject to “think aloud”, which is well known in the cognitive literature. Both types can be done in real-time, during work, and are thus concomitant.
They can also be subsequent, as in interviews, or subjects’ verbalizations when they view videotapes of their work. As for the validity of the verbalizations, the reader should not ignore the doubt raised in this regard by the controversy between Nisbett and De Camp Wilson (1977) and White (1988), and the precautions suggested by numerous authors aware of their importance in the study of mental activity in view of the methodological difficulties encountered (Ericsson and Simon 1984; Savoyant and Leplat 1983; Caverni 1988; Bainbridge 1986).
The organization of this evidence, its processing and its formalization require descriptive languages and sometimes analyses that go beyond field observation. Those mental activities which are inferred from the evidence, for example, remain hypothetical. Today they are often described using languages derived from artificial intelligence, making use of representations in terms of schemas, production rules, and connecting networks. Moreover, the use of computerized simulations—of micro-worlds—to pinpoint certain mental activities has become widespread, even though the validity of the results obtained from such computerized simulations, in view of the complexity of the industrial world, is subject to debate. Finally, we must mention the cognitive models of certain mental activities extracted from the field. Among the best known are the diagnosis of the operator of a nuclear power plant, carried out at ISPRA (Decortis and Cacciabue 1990), and the planning of the combat pilot developed at the Centre d’études et de recherches de médecine aérospatiale (CERMA) (Amalberti et al. 1989).
Measurement of the discrepancies between the performance of these models and that of real, living operators is a fruitful field in activity analysis. Performance is the outcome of the activity, the final response given by the subject to the requirements of the task. It is expressed at the level of production: productivity, quality, error, incident, accident—and even, at a more global level, absenteeism or turnover. But it must also be identified at the individual level: the subjective expression of satisfaction, stress, fatigue or workload, and many physiological responses are also performance indicators. Only the entire set of data permits interpretation of the activity—that is to say, judging whether or not it furthers the desired goals while remaining within human limits. There exists a set of norms which, up to a certain point, guide the observer. But these norms are not situated—they do not take into account the context, its fluctuations and the condition of the worker. This is why in design ergonomics, even when rules, norms, and models exist, designers are advised to test the product using prototypes as early as possible and to evaluate the users’ activity and performance.
Individual or Collective Work?
While in the vast majority of cases, work is a collective act, most work analyses focus on tasks or individual activities. Nonetheless, the fact is that technological evolution, just like work organization, today emphasizes distributed work, whether it be between workers and machines or simply within a group. What paths have been explored by authors so as to take this distribution into account (Rasmussen, Pejtersen and Schmidts 1990)? They focus on three aspects: structure, the nature of exchanges and structural lability.
Structure
Whether we view structure as elements of the analysis of people, or of services, or even of different branches of a firm working in a network, the description of the links that unite them remains a problem. We are very familiar with the organigrams within firms that indicate the structure of authority and whose various forms reflect the organizational philosophy of the firm—very hierarchically organized for a Taylor-like structure, or flattened like a rake, even matrix-like, for a more flexible structure. Other descriptions of distributed activities are possible: an example is given in figure 6. More recently, the need for firms to represent their information exchanges at a global level has led to a rethinking of information systems. Thanks to certain descriptive languages—for example, design schemas, or entity-relations-attribute matrixes—the structure of relations at the collective level can today be described in a very abstract manner and can serve as a springboard for the creation of computerized management systems.
Figure 6. Integrated life cycle design
The nature of exchanges
Simply having a description of the links uniting the entities says little about the content of the exchanges themselves; of course the nature of the relation can be specified—movement from place to place, information transfers, hierarchical dependence, and so on—but this is often quite inadequate. The analysis of communications within teams has become a favored means of capturing the very nature of collective work, encompassing subjects mentioned, creation of a common language in a team, modification of communications when circumstances are critical, and so forth (Tardieu, Nanci and Pascot 1985; Rolland 1986; Navarro 1990; Van Daele 1992; Lacoste 1983; Moray, Sanderson and Vicente 1989). Knowledge of these interactions is particularly useful for the creation of computer tools, notably decision-making aids for understanding errors. The different stages and the methodological difficulties linked to the use of this evidence have been well described by Falzon (1991).
Structural lability
It is the work on activities rather than on tasks that has opened up the field of structural lability—that is to say, of the constant reconfigurations of collective work under the influence of contextual factors. Studies such as those of Rogalski (1991), who over a long period analyzed the collective activities dealing with forest fires in France, and Bourdon and Weill Fassina (1994), who studied the organizational structure set up to deal with railway accidents, are both very informative. They clearly show how the context molds the structure of exchanges, the number and type of actors involved, the nature of the communications and the number of parameters essential to the work. The more this context fluctuates, the further the fixed descriptions of the task are removed from reality. Knowledge of this lability, and a better understanding of the phenomena that take place within it, are essential in planning for the unpredictable and in order to provide better training for those involved in collective work in a crisis.
Conclusions
The various phases of the work analysis that have been described are an iterative part of any human factors design cycle (see figure 6). In the design of any technical object in which human factors are a consideration, whether a tool, a workstation or a factory, certain information is needed at the appropriate time. In general, the beginning of the design cycle is characterized by a need for data involving environmental constraints, the types of jobs that are to be carried out and the various characteristics of the users. This initial information allows the specifications of the object to be drawn up so as to take into account work requirements. But this is, in some sense, only a coarse model compared to the real work situation. This explains why models and prototypes are necessary that, from their inception, allow not the jobs themselves, but the activities of the future users, to be evaluated. Consequently, while the design of the images on a monitor in a control room can be based on a thorough cognitive analysis of the job to be done, only a data-based analysis of the activity will allow an accurate determination of whether the prototype will actually be of use in the real work situation (Van Daele 1988). Once the finished technical object is put into operation, greater emphasis is put on the performance of the users and on dysfunctional situations, such as accidents or human error. The gathering of this type of information allows the final corrections to be made that will increase the reliability and usability of the completed object. Both the nuclear industry and the aeronautics industry serve as examples: operational feedback involves reporting every incident that occurs. In this way, the design loop comes full circle.
The human organism represents a complex biological system on various levels of organization, from the molecular-cellular level to the tissues and organs. The organism is an open system, exchanging matter and energy with the environment through numerous biochemical reactions in a dynamic equilibrium. The environment can be polluted, or contaminated with various toxicants.
Penetration of molecules or ions of toxicants from the work or living environment into such a strongly coordinated biological system can reversibly or irreversibly disturb normal cellular biochemical processes, or even injure and destroy the cell (see “Cellular injury and cellular death”).
Penetration of a toxicant from the environment to the sites of its toxic effect inside the organism can be divided into three phases:
1. The exposure phase, covering all the processes that toxicants undergo in the environment before they reach the organism’s portals of entry.
2. The toxicokinetic phase, covering absorption into the organism, transport by body fluids, distribution and accumulation in tissues and organs, biotransformation and elimination.
3. The toxicodynamic phase, covering the interaction of toxicants (molecules, ions, colloids) with their sites of action in cells and the resulting toxic effects.
Here we will focus our attention exclusively on the toxicokinetic processes inside the human organism following exposure to toxicants in the environment.
The molecules or ions of toxicants present in the environment will penetrate into the organism through the skin and mucosa, or the epithelial cells of the respiratory and gastrointestinal tracts, depending on the point of entry. That means molecules and ions of toxicants must penetrate through cellular membranes of these biological systems, as well as through an intricate system of endomembranes inside the cell.
All toxicokinetic and toxicodynamic processes occur on the molecular-cellular level. Numerous factors influence these processes, and they can be divided into two basic groups: the physico-chemical properties of the toxicants themselves, and the structure and properties of the cell membranes through which they must pass. Both groups are discussed below.
Physico-Chemical Properties of Toxicants
In 1854 the Russian toxicologist E.V. Pelikan started studies on the relation between the chemical structure of a substance and its biological activity—the structure activity relationship (SAR). Chemical structure directly determines physico-chemical properties, some of which are responsible for biological activity.
To define the chemical structure numerous parameters can be selected as descriptors, which can be divided into various groups:
1. Physico-chemical: melting point, boiling point, vapour pressure, dissociation constant (pKa), partition coefficient, solubility in water and in lipids, etc.
2. Steric: molecular volume, shape and surface area, substructure shape, molecular reactivity, etc.
3. Structural: number of bonds, number of rings (in polycyclic compounds), extent of branching, etc.
For each toxicant it is necessary to select a set of descriptors related to a particular mechanism of activity. However, from the toxicokinetic point of view two parameters are of general importance for all toxicants: the Nernst partition coefficient, which expresses lipid solubility and governs passage through membranes, and the dissociation constant (pKa), which determines the degree of ionization.
For inhaled dusts and aerosols, the particle size, shape, surface area and density also influence their toxicokinetics and toxicodynamics.
Structure and Properties of Membranes
The eukaryotic cell of human and animal organisms is encircled by a cytoplasmic membrane regulating the transport of substances and maintaining cell homeostasis. The cell organelles (nucleus, mitochondria) possess membranes too. The cell cytoplasm is compartmentalized by intricate membranous structures, the endoplasmic reticulum and Golgi complex (endomembranes). All these membranes are structurally alike, but vary in the content of lipids and proteins.
The structural framework of membranes is a bilayer of lipid molecules (phospholipids, sphingolipids, cholesterol). The backbone of a phospholipid molecule is glycerol, with two of its -OH groups esterified by aliphatic fatty acids with 16 to 18 carbon atoms, and the third group esterified by a phosphate group and a nitrogenous compound (choline, ethanolamine, serine). In sphingolipids, sphingosine is the base.
The lipid molecule is amphipathic, consisting of a polar hydrophilic “head” (amino alcohol, phosphate, glycerol) and a non-polar twin “tail” (fatty acids). The lipid bilayer is arranged so that the hydrophilic heads form the outer and inner surfaces of the membrane, in contact with the aqueous environment of water, various ions and molecules, while the lipophilic tails point toward the membrane interior.
Proteins and glycoproteins are inserted into the lipid bilayer (intrinsic proteins) or attached to the membrane surface (extrinsic proteins). These proteins contribute to the structural integrity of the membrane, but they may also perform as enzymes, carriers, pore walls or receptors.
The membrane represents a dynamic structure which can be disintegrated and rebuilt with a different proportion of lipids and proteins, according to functional needs.
Regulation of transport of substances into and out of the cell represents one of the basic functions of outer and inner membranes.
Some lipophilic molecules pass directly through the lipid bilayer. Hydrophilic molecules and ions are transported via pores. Membranes respond to changing conditions by opening or sealing certain pores of various sizes.
The following processes and mechanisms are involved in the transport of substances, including toxicants, through membranes:
Passive processes: diffusion through the lipid bilayer or through pores (filtration), and facilitated (catalyzed) diffusion by means of carriers.
Active processes: active transport by carrier systems requiring energy, and endocytosis (including pinocytosis).
Each of these is described below.
Diffusion
This represents the movement of molecules and ions through the lipid bilayer or pores from a region of high concentration, or high electric potential, to a region of low concentration or potential (“downhill”). The difference in concentration or electric charge is the driving force influencing the intensity of the flux in both directions. In the equilibrium state, influx will be equal to efflux. The rate of diffusion follows Fick’s law, which states that it is directly proportional to the available membrane surface area, the concentration (or charge) gradient and the characteristic diffusion coefficient, and inversely proportional to the membrane thickness.
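As a compact illustration (the symbols are introduced here only for convenience and are not part of the original text), Fick’s law for diffusion across a membrane of thickness d can be written:

$$
J = \frac{D \, A \, (C_{1} - C_{2})}{d}
$$

where J is the amount of toxicant crossing the membrane per unit time, D the diffusion coefficient, A the available membrane surface, and C1 and C2 the concentrations (or, for ions, the electrochemical potentials) on the two sides of the membrane.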
Small lipophilic molecules pass easily through the lipid layer of membrane, according to the Nernst partition coefficient.
Large lipophilic molecules, water-soluble molecules and ions will use aqueous pore channels for their passage. Size and stereoconfiguration will influence the passage of molecules. For ions, besides size, the type of charge will be decisive. The protein molecules of the pore walls can carry positive or negative charges. Narrow pores tend to be selective—negatively charged ligands will allow passage only for cations, and positively charged ligands will allow passage only for anions. With increasing pore diameter, hydrodynamic flow becomes dominant, allowing free passage of ions and molecules according to Poiseuille’s law. This filtration is a consequence of the osmotic gradient. In some cases ions can penetrate through specific complex molecules—ionophores—which can be produced by micro-organisms with antibiotic effects (nonactin, valinomycin, gramicidin, etc.).
Facilitated or catalyzed diffusion
This requires the presence of a carrier in the membrane, usually a protein molecule (permease). The carrier selectively binds substances, resembling a substrate-enzyme complex. Similar molecules (including toxicants) can compete for the specific carrier until its saturation point is reached. Toxicants can compete for the carrier, and when they are irreversibly bound to it the transport is blocked. The rate of transport is characteristic for each type of carrier. If transport is performed in both directions, it is called exchange diffusion.
Active transport
For transport of some substances vital for the cell, a special type of carrier is used, transporting against the concentration gradient or electric potential (“uphill”). The carrier is very stereospecific and can be saturated.
For uphill transport, energy is required. The necessary energy is obtained by catalytic cleavage of ATP molecules to ADP by the enzyme adenosine triphosphatase (ATP-ase).
Toxicants can interfere with this transport by competitive or non-competitive inhibition of the carrier or by inhibition of ATP-ase activity.
Endocytosis
Endocytosis is defined as a transport mechanism in which the cell membrane encircles material by enfolding to form a vesicle transporting it through the cell. When the material is liquid, the process is termed pinocytosis. In some cases the material is bound to a receptor and this complex is transported by a membrane vesicle. This type of transport is especially used by epithelial cells of the gastrointestinal tract, and cells of the liver and kidneys.
Absorption of Toxicants
People are exposed to numerous toxicants present in the work and living environment, which can penetrate into the human organism by three main portals of entry: the respiratory tract (inhalation), the gastrointestinal tract (ingestion) and the skin (dermal absorption).
In the case of exposure in industry, inhalation represents the dominant route of entry of toxicants, followed by dermal penetration. In agriculture, exposure to pesticides via dermal absorption alone is almost as frequent as exposure by combined inhalation and dermal penetration. The general population is mostly exposed by ingestion of contaminated food, water and beverages, then by inhalation and less often by dermal penetration.
Absorption via the respiratory tract
Absorption in the lungs represents the main route of uptake for numerous airborne toxicants (gases, vapours, fumes, mists, smokes, dusts, aerosols, etc.).
The respiratory tract (RT) represents an ideal gas-exchange system possessing a membrane with a surface of 30 m2 (expiration) to 100 m2 (deep inspiration), behind which a network of about 2,000 km of capillaries is located. The system, developed through evolution, is accommodated into a relatively small space (chest cavity) protected by ribs.
Anatomically and physiologically the RT can be divided into three compartments: the nasopharyngeal (NP) region, the tracheobronchial (TB) region and the pulmonary (P), or alveolar, compartment.
Hydrophilic toxicants are easily absorbed by the epithelium of the nasopharyngeal region. The whole epithelium of the NP and TB regions is covered by a film of water. Lipophilic toxicants are partially absorbed in the NP and TB regions, but mostly in the alveoli by diffusion through the alveolo-capillary membranes. The absorption rate depends on lung ventilation, cardiac output (blood flow through the lungs), solubility of the toxicant in blood and its metabolic rate.
In the alveoli, gas exchange is carried out. The alveolar wall is made up of an epithelium, an interstitial framework of basement membrane, connective tissue and the capillary endothelium. The diffusion of toxicants is very rapid through these layers, which have a thickness of about 0.8 μm. In alveoli, toxicant is transferred from the air phase into the liquid phase (blood). The rate of absorption (air to blood distribution) of a toxicant depends on its concentration in alveolar air and the Nernst partition coefficient for blood (solubility coefficient).
In the blood the toxicant can be dissolved in the liquid phase by simple physical processes or bound to the blood cells and/or plasma constituents according to chemical affinity or by adsorption. The water content of blood is 75% and, therefore, hydrophilic gases and vapours show a high solubility in plasma (e.g., alcohols). Lipophilic toxicants (e.g., benzene) are usually bound to cells or macromolecules such as albumin.
From the very beginning of exposure in the lungs, two opposite processes are occurring: absorption and desorption. The equilibrium between these processes depends on the concentration of toxicant in alveolar air and blood. At the onset of exposure the toxicant concentration in the blood is 0 and retention in blood is almost 100%. With continuation of exposure, an equilibrium between absorption and desorption is attained. Hydrophilic toxicants will rapidly attain equilibrium, and the rate of absorption depends on pulmonary ventilation rather than on blood flow. Lipophilic toxicants need a longer time to achieve equilibrium, and here the flow of unsaturated blood governs the rate of absorption.
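The equilibrium described above can be summarized in a simple relation (the notation is ours, not the article’s): for a gas or vapour, the concentration in blood leaving the lung at equilibrium is governed by the blood:air (Nernst) partition coefficient λ,

$$
C_{\text{blood}} = \lambda \cdot C_{\text{alveolar}}
$$

The larger λ is (hydrophilic vapours such as alcohols), the more the rate of uptake is governed by pulmonary ventilation; the smaller λ is, the more it is governed by the flow of unsaturated blood through the lungs, as noted above.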
Deposition of particles and aerosols in the RT depends on physical and physiological factors, as well as particle size. In short, the smaller the particle the deeper it will penetrate into the RT.
Relatively constant low retention of dust particles in the lungs of persons who are highly exposed (e.g., miners) suggests the existence of a very efficient system for the clearance of particles. In the upper part of the RT (tracheo-bronchial) a mucociliary blanket performs the clearance. In the pulmonary part, three different mechanisms are at work: (1) the mucociliary blanket, (2) phagocytosis and (3) direct penetration of particles through the alveolar wall.
The first 17 of the 23 branchings of the tracheo-bronchial tree possess ciliated epithelial cells. By their strokes these cilia constantly move a mucous blanket toward the mouth. Particles deposited on this mucociliary blanket will be swallowed in the mouth (ingestion). A mucous blanket also covers the surface of the alveolar epithelium, moving toward the mucociliary blanket. Additionally the specialized moving cells—phagocytes—engulf particles and micro-organisms in the alveoli and migrate in two possible directions: upward onto the mucociliary blanket, from which they are cleared toward the mouth, or through the alveolar wall into the interstitium and the lymphatic system (lymph nodes).
Absorption via the gastrointestinal tract
Toxicants can be ingested in the case of accidental swallowing, intake of contaminated food and drinks, or swallowing of particles cleared from the RT.
The entire alimentary channel, from oesophagus to anus, is basically built in the same way. A mucous layer (epithelium) is supported by connective tissue and then by a network of capillaries and smooth muscle. The surface epithelium of the stomach is very wrinkled to increase the absorption/secretion surface area. The intestinal area contains numerous small projections (villi), which are able to absorb material by “pumping in”. The active area for absorption in the intestines is about 100 m2.
In the gastrointestinal tract (GIT) all of the absorption processes described above are active: diffusion, filtration, facilitated diffusion, active transport and endocytosis.
Some toxic metal ions use specialized transport systems for essential elements: thallium, cobalt and manganese use the iron system, while lead appears to use the calcium system.
Many factors influence the rate of absorption of toxicants in various parts of the GIT:
It is also necessary to mention the enterohepatic circulation. Polar toxicants and/or metabolites (glucuronides and other conjugates) are excreted with the bile into the duodenum. Here the enzymes of the microflora perform hydrolysis and liberated products can be reabsorbed and transported by the portal vein into the liver. This mechanism is very dangerous in the case of hepatotoxic substances, enabling their temporary accumulation in the liver.
In the case of toxicants biotransformed in the liver to less toxic or non-toxic metabolites, ingestion may represent a less dangerous portal of entry. After absorption in the GIT these toxicants will be transported by the portal vein to the liver, and there they can be partially detoxified by biotransformation.
Absorption through the skin (dermal, percutaneous)
The skin (1.8 m2 of surface in a human adult) together with the mucous membranes of the body orifices, covers the surface of the body. It represents a barrier against physical, chemical and biological agents, maintaining the body integrity and homeostasis and performing many other physiological tasks.
Basically the skin consists of three layers: epidermis, true skin (dermis) and subcutaneous tissue (hypodermis). From the toxicological point of view the epidermis is of most interest here. It is built of many layers of cells. A horny surface of flattened, dead cells (stratum corneum) is the top layer, under which a continuous layer of living cells (stratum corneum compactum) is located, followed by a typical lipid membrane, and then by the stratum lucidum, stratum granulosum and stratum mucosum. The lipid membrane represents a protective barrier, but in hairy parts of the skin, both hair follicles and sweat gland channels penetrate through it. Therefore, dermal absorption can occur by two mechanisms: transepidermal absorption, by diffusion through the epidermis and its lipid membrane, and absorption through the hair follicles and sweat gland channels, which bypass the lipid barrier.
The rate of absorption through the skin will depend on many factors:
Transport of Toxicants by Blood and Lymph
After absorption by any of these portals of entry, toxicants will reach the blood, lymph or other body fluids. The blood represents the major vehicle for transport of toxicants and their metabolites.
Blood is a fluid circulating organ, transporting necessary oxygen and vital substances to the cells and removing waste products of metabolism. Blood also contains cellular components, hormones, and other molecules involved in many physiological functions. Blood flows inside a relatively well closed, high-pressure circulatory system of blood vessels, pushed by the activity of the heart. Due to high pressure, leakage of fluid occurs. The lymphatic system represents the drainage system, in the form of a fine mesh of small, thin-walled lymph capillaries branching through the soft tissues and organs.
Blood is a mixture of a liquid phase (plasma, 55%) and solid blood cells (45%). Plasma contains proteins (albumins, globulins, fibrinogen), organic acids (lactic, glutamic, citric) and many other substances (lipids, lipoproteins, glycoproteins, enzymes, salts, xenobiotics, etc.). Blood cell elements include erythrocytes (Er), leukocytes, reticulocytes, monocytes, and platelets.
Toxicants are absorbed as molecules and ions. Some toxicants at blood pH form colloid particles as a third form in this liquid. Molecules, ions and colloids of toxicants have various possibilities for transport in blood: they can be physically dissolved in the plasma, bound to plasma proteins or other plasma constituents, or adsorbed on or bound to blood cells.
Most of the toxicants in blood exist partially in a free state in plasma and partially bound to erythrocytes and plasma constituents. The distribution depends on the affinity of toxicants to these constituents. All fractions are in a dynamic equilibrium.
Some toxicants are transported by the blood elements—mostly by erythrocytes, very rarely by leukocytes. Toxicants can be adsorbed on the surface of Er, or can bind to the ligands of stroma. If they penetrate into Er they can bind to the haem (e.g. carbon monoxide and selenium) or to the globin (Sb111, Po210). Some toxicants transported by Er are arsenic, cesium, thorium, radon, lead and sodium. Hexavalent chromium is exclusively bound to the Er and trivalent chromium to the proteins of plasma. For zinc, competition between Er and plasma occurs. About 96% of lead is transported by Er. Organic mercury is mostly bound to Er and inorganic mercury is carried mostly by plasma albumin. Small fractions of beryllium, copper, tellurium and uranium are carried by Er.
The majority of toxicants are transported by plasma or plasma proteins. Many electrolytes are present as ions in an equilibrium with non-dissociated molecules free or bound to the plasma fractions. This ionic fraction of toxicants is very diffusible, penetrating through the walls of capillaries into tissues and organs. Gases and vapours can be dissolved in the plasma.
Plasma proteins possess a total surface area of about 600 to 800 km2 offered for adsorption of toxicants. Albumin molecules possess about 109 cationic and 120 anionic ligands at the disposal of ions. Many ions are partially carried by albumin (e.g., copper, zinc and cadmium), as are such compounds as dinitro- and ortho-cresols, nitro- and halogenated derivatives of aromatic hydrocarbons, and phenols.
Globulin molecules (alpha and beta) transport small molecules of toxicants as well as some metallic ions (copper, zinc and iron) and colloid particles. Fibrinogen shows affinity for certain small molecules. Many types of bonds can be involved in binding of toxicants to plasma proteins: Van der Waals forces, attraction of charges, association between polar and non-polar groups, hydrogen bridges, covalent bonds.
Plasma lipoproteins transport lipophilic toxicants such as PCBs. The other plasma fractions serve as a transport vehicle too. The affinity of toxicants for plasma proteins suggests their affinity for proteins in tissues and organs during distribution.
Organic acids (lactic, glutamic, citric) form complexes with some toxicants. Alkaline earths and rare earths, as well as some heavy elements in the form of cations, are also complexed with organic oxy- and amino acids. All these complexes are usually diffusible and easily distributed in tissues and organs.
Physiological chelating agents in plasma, such as transferrin and metallothionein, compete with organic acids and amino acids for cations to form stable chelates.
Diffusible free ions, some complexes and some free molecules are easily cleared from the blood into tissues and organs. The free fraction of ions and molecules is in a dynamic equilibrium with the bound fraction. The concentration of a toxicant in blood will govern the rate of its distribution into tissues and organs, or its mobilization from them into the blood.
Distribution of Toxicants in the Organism
The human organism can be divided into the following compartments: (1) internal organs, (2) skin and muscles, (3) adipose tissues, (4) connective tissue and bones. This classification is mostly based on the degree of vascular (blood) perfusion, in decreasing order. For example, the internal organs (including the brain), which represent only 12% of the total body weight, receive about 75% of the total blood volume. On the other hand, connective tissues and bones (15% of the total body weight) receive only one per cent of the total blood volume.
The well-perfused internal organs generally achieve the highest concentration of toxicants in the shortest time, as well as an equilibrium between blood and this compartment. The uptake of toxicants by less perfused tissues is much slower, but retention is higher and duration of stay much longer (accumulation) due to low perfusion.
Three components are of major importance for the intracellular distribution of toxicants: content of water, lipids and proteins in the cells of various tissues and organs. The above-mentioned order of compartments also follows closely a decreasing water content in their cells. Hydrophilic toxicants will be more rapidly distributed to the body fluids and cells with high water content, and lipophilic toxicants to cells with higher lipid content (fatty tissue).
The organism possesses some barriers which impair the penetration of some groups of toxicants, mostly hydrophilic ones, into certain organs and tissues: the best known are the blood-brain barrier (with the related blood-cerebrospinal fluid barrier), which restricts penetration into the central nervous system, and the placental barrier, which protects the fetus.
As previously noted only the free forms of toxicants in plasma (molecules, ions, colloids) are available for penetration through the capillary walls participating in distribution. This free fraction is in a dynamic equilibrium with the bound fraction. Concentration of toxicants in blood is in a dynamic equilibrium with their concentration in organs and tissues, governing retention (accumulation) or mobilization from them.
The condition of the organism, functional state of organs (especially neuro-humoral regulation), hormonal balance and other factors play a role in distribution.
Retention of a toxicant in a particular compartment is generally temporary, and redistribution into other tissues can occur. Retention and accumulation are based on the difference between the rates of absorption and elimination. The duration of retention in a compartment is expressed by the biological half-life. This is the time interval in which 50% of the toxicant is cleared from the tissue or organ and redistributed, translocated or eliminated from the organism.
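For a compartment that is cleared by a simple first-order process (an idealization; real tissues often require several such terms), the amount remaining is related to the biological half-life T1/2 by:

$$
A(t) = A_{0}\, e^{-kt} = A_{0} \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad k = \frac{\ln 2}{T_{1/2}}
$$

so that after one half-life 50% of the toxicant remains in the compartment, after two half-lives 25%, and so on.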
Biotransformation processes occur during distribution and retention in various organs and tissues. Biotransformation produces more polar, more hydrophilic metabolites, which are more easily eliminated. A low rate of biotransformation of a lipophilic toxicant will generally cause its accumulation in a compartment.
Toxicants can be divided into four main groups according to their affinity for, and predominant retention and accumulation in, a particular compartment: lipid-rich tissues, the reticuloendothelial system, bone, and hair and nails. Each of these is discussed below.
Accumulation in lipid-rich tissues
The “standard man” of 70 kg body weight contains about 15% of body weight in the form of adipose tissue, increasing with obesity to 50%. However, this lipid fraction is not uniformly distributed. The brain (CNS) is a lipid-rich organ, and peripheral nerves are wrapped with a lipid-rich myelin sheath and Schwann cells. All these tissues offer possibilities for accumulation of lipophilic toxicants.
Numerous non-electrolytes and non-polar toxicants with a suitable Nernst partition coefficient will be distributed to this compartment, as well as numerous organic solvents (alcohols, aldehydes, ketones, etc.), chlorinated hydrocarbons (including organochlorine insecticides such as DDT), some inert gases (radon), etc.
Adipose tissue will accumulate toxicants due to its low vascularization and lower rate of biotransformation. Here accumulation of toxicants may represent a kind of temporary “neutralization” because of lack of targets for toxic effect. However, potential danger for the organism is always present due to the possibility of mobilization of toxicants from this compartment back to the circulation.
Deposition of toxicants in the brain (CNS) or lipid-rich tissue of the myelin sheath of the peripheral nervous system is very dangerous. The neurotoxicants are deposited here directly next to their targets. Toxicants retained in lipid-rich tissue of the endocrine glands can produce hormonal disturbances. Despite the blood-brain barrier, numerous neurotoxicants of a lipophilic nature reach the brain (CNS): anaesthetics, organic solvents, pesticides, tetraethyl lead, organomercurials, etc.
Retention in the reticuloendothelial system
In each tissue and organ a certain percentage of cells is specialized for phagocytic activity, engulfing micro-organisms, particles, colloid particles, and so on. This system is called the reticuloendothelial system (RES), comprising fixed cells as well as moving cells (phagocytes). These cells are present in non-active form. An increase of the above-mentioned microbes and particles will activate the cells up to a saturation point.
Toxicants in the form of colloids will be captured by the RES of organs and tissues. Distribution depends on the colloid particle size. For larger particles, retention in the liver will be favoured. With smaller colloid particles, more or less uniform distribution will occur between the spleen, bone marrow and liver. Clearance of colloids from the RES is very slow, although small particles are cleared relatively more quickly.
Accumulation in bones
About 60 elements can be identified as osteotropic elements, or bone seekers.
Osteotropic elements can be divided into three groups:
The skeleton of a standard man accounts for 10 to 15% of the total body weight, representing a large potential storage depot for osteotropic toxicants. Bone is a highly specialized tissue consisting by volume of 54% minerals and 38% organic matrix. The mineral matrix of bone is hydroxyapatite, Ca10(PO4)6(OH)2, in which the molar ratio of Ca to P is about 1.67 to one. The surface area of mineral available for adsorption is about 100 m2 per g of bone.
The metabolic activity of the bones of the skeleton can be divided into two categories: active, metabolic bone, with a high rate of turnover, and stable bone, which is remodelled only very slowly.
In the fetus, infant and young child metabolic bone (see “available skeleton”) represents almost 100% of the skeleton. With age this percentage of metabolic bone decreases. Incorporation of toxicants during exposure appears in the metabolic bone and in more slowly turning-over compartments.
Incorporation of toxicants into bone occurs in two ways: by ion-exchange reactions and by adsorption of colloids, as described below.
Ion-exchange reactions
The bone mineral, hydroxyapatite, represents a complex ion-exchange system. Calcium cations can be exchanged for various other cations. The anions present in bone can also be exchanged: phosphate for citrate and carbonate, hydroxyl for fluoride. Ions which are not exchangeable can be adsorbed on the mineral surface. When toxicant ions are incorporated in the mineral, a new layer of mineral can cover the mineral surface, burying the toxicant into the bone structure. Ion exchange is a reversible process, depending on the concentration of ions, pH and fluid volume. Thus, for example, an increase of dietary calcium may decrease the deposition of toxicant ions in the mineral lattice. It has been mentioned that with age the percentage of metabolic bone decreases, although ion exchange continues. With ageing, bone mineral resorption occurs and bone density actually decreases. At this point, toxicants in bone may be released (e.g., lead).
About 30% of the ions incorporated into bone minerals are loosely bound and can be exchanged, captured by natural chelating agents and excreted, with a biological half-life of 15 days. The other 70% is more firmly bound; mobilization and excretion of this fraction show a biological half-life of 2.5 years or more, depending on the bone type (remodelling processes).
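The two bone fractions described above can be combined into a simple illustrative retention model (a sketch only; the 30%/70% split and the half-lives are the approximate figures quoted in the text, and real bone remodelling is far more complex):

```python
import math

def bone_fraction_remaining(days: float,
                            fast_fraction: float = 0.30,          # loosely bound ions
                            fast_half_life_days: float = 15.0,
                            slow_fraction: float = 0.70,          # firmly bound ions
                            slow_half_life_days: float = 2.5 * 365.0) -> float:
    """Fraction of an initially incorporated ion burden still present in bone.

    The model is a sum of two first-order terms, one per bone compartment.
    """
    k_fast = math.log(2) / fast_half_life_days
    k_slow = math.log(2) / slow_half_life_days
    return (fast_fraction * math.exp(-k_fast * days)
            + slow_fraction * math.exp(-k_slow * days))

for d in (15, 90, 365, 5 * 365):
    print(f"day {d:4d}: {bone_fraction_remaining(d):.2f} of the burden remains")
```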
Chelating agents (Ca-EDTA, penicillamine, BAL, etc.) can mobilize considerable quantities of some heavy metals, and their excretion in urine is thereby greatly increased.
Colloid adsorption
Colloid particles are adsorbed as a film on the mineral surface (100 m2 per g) by Van der Waals forces or chemisorption. This layer of colloids on the mineral surfaces is covered by the next layer of newly formed mineral, burying the toxicants more deeply in the bone structure. The rate of mobilization and elimination depends on remodelling processes.
Accumulation in hair and nails
The hair and nails contain keratin, with sulphydryl groups able to chelate metallic cations such as mercury and lead.
Distribution of toxicant inside the cell
Recently the distribution of toxicants, especially some heavy metals, within cells of tissues and organs has become of importance. With ultracentrifugation techniques, various fractions of the cell can be separated to determine their content of metal ions and other toxicants.
Animal studies have revealed that after penetration into the cell, some metal ions are bound to a specific protein, metallothionein. This low molecular weight protein is present in the cells of liver, kidney and other organs and tissues. Its sulphydryl groups can bind six ions per molecule. Increased presence of metal ions induces the biosynthesis of this protein. Ions of cadmium are the most potent inducer. Metallothionein serves also to maintain homeostasis of vital copper and zinc ions. Metallothionein can bind zinc, copper, cadmium, mercury, bismuth, gold, cobalt and other cations.
Biotransformation and Elimination of Toxicants
During retention in cells of various tissues and organs, toxicants are exposed to enzymes which can biotransform (metabolize) them, producing metabolites. There are many pathways for the elimination of toxicants and/or metabolites: by exhaled air via the lungs, by urine via the kidneys, by bile via the GIT, by sweat via the skin, by saliva via the mouth mucosa, by milk via the mammary glands, and by hair and nails via normal growth and cell turnover.
The elimination of an absorbed toxicant depends on the portal of entry. In the lungs the absorption/desorption process starts immediately and toxicants are partially eliminated by exhaled air. Elimination of toxicants absorbed by other paths of entry is prolonged and starts after transport by blood, eventually being completed after distribution and biotransformation. During absorption an equilibrium exists between the concentrations of a toxicant in the blood and in tissues and organs. Excretion decreases toxicant blood concentration and may induce mobilization of a toxicant from tissues into blood.
Many factors can influence the elimination rate of toxicants and their metabolites from the body:
Here we distinguish two groups of compartments: (1) the rapid-exchange system—in these compartments, tissue concentration of toxicant is similar to that of the blood; and (2) the slow-exchange system, where tissue concentration of toxicant is higher than in blood due to binding and accumulation—adipose tissue, skeleton and kidneys can temporarily retain some toxicants, e.g., arsenic and zinc.
A toxicant can be excreted simultaneously by two or more excretion routes. However, usually one route is dominant.
Scientists are developing mathematical models describing the excretion of a particular toxicant. These models are based on the movement from one or both compartments (exchange systems), biotransformation and so on.
Elimination by exhaled air via lungs
Elimination via the lungs (desorption) is typical for toxicants with high volatility (e.g., organic solvents). Gases and vapours with low solubility in blood will be quickly eliminated this way, whereas toxicants with high blood solubility will be eliminated by other routes.
Organic solvents absorbed by the GIT or skin are excreted partially by exhaled air in each passage of blood through the lungs, if they have a sufficient vapour pressure. The Breathalyser test used for suspected drunk drivers is based on this fact. The concentration of CO in exhaled air is in equilibrium with the CO-Hb blood content. The radioactive gas radon appears in exhaled air due to the decay of radium accumulated in the skeleton.
Elimination of a toxicant by exhaled air in relation to the post-exposure period of time is usually expressed by a three-phase curve. The first phase represents elimination of the toxicant from the blood, showing a short half-life. The second, slower phase represents elimination due to exchange of blood with tissues and organs (the rapid-exchange system). The third, very slow phase is due to exchange of blood with fatty tissue and the skeleton. If a toxicant is not accumulated in these compartments, the curve will be two-phase. In some cases a four-phase curve is also possible.
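Such a three-phase curve corresponds to a sum of exponential terms, one per compartment (the notation is ours; the coefficients and rate constants must be fitted to measured post-exposure data):

$$
C_{\text{exhaled}}(t) = A_{1} e^{-k_{1} t} + A_{2} e^{-k_{2} t} + A_{3} e^{-k_{3} t}, \qquad k_{1} > k_{2} > k_{3}
$$

with the fast term reflecting clearance from blood, the intermediate term exchange with well-perfused tissues, and the slow term release from fat and skeleton; a two- or four-phase curve simply has fewer or more terms.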
Determination of gases and vapours in exhaled air in the post-exposure period is sometimes used for evaluation of exposures in workers.
Renal excretion
The kidney is an organ specialized in the excretion of numerous water-soluble toxicants and metabolites, maintaining homeostasis of the organism. Each kidney possesses about one million nephrons able to perform excretion. Renal excretion represents a very complex event encompassing three different mechanisms: glomerular filtration, active tubular secretion and (largely passive) tubular reabsorption.
Excretion of a toxicant via the kidneys to urine depends on the Nernst partition coefficient, dissociation constant and pH of urine, molecular size and shape, rate of metabolism to more hydrophilic metabolites, as well as health status of the kidneys.
The kinetics of renal excretion of a toxicant or its metabolite can be expressed by a two-, three- or four-phase excretion curve, depending on the distribution of the particular toxicant in various body compartments differing in the rate of exchange with the blood.
Saliva
Some drugs and metallic ions can be excreted through the mucosa of the mouth by saliva—for example, lead (“lead line”), mercury, arsenic, copper, as well as bromides, iodides, ethyl alcohol, alkaloids, and so on. The toxicants are then swallowed, reaching the GIT, where they can be reabsorbed or eliminated by faeces.
Sweat
Many non-electrolytes can be partially eliminated via skin by sweat: ethyl alcohol, acetone, phenols, carbon disulphide and chlorinated hydrocarbons.
Milk
Many metals, organic solvents and some organochlorine pesticides (DDT) are secreted via the mammary gland in mother’s milk. This pathway can represent a danger for nursing infants.
Hair
Analysis of hair can be used as an indicator of homeostasis of some physiological substances. Also exposure to some toxicants, especially heavy metals, can be evaluated by this kind of bioassay.
Elimination of toxicants from the body can be increased by:
Exposure Determinations
Determination of toxicants and metabolites in blood, exhaled air, urine, sweat, faeces and hair is more and more used for evaluation of human exposure (exposure tests) and/or evaluation of the degree of intoxication. Therefore biological exposure limits (Biological MAC Values, Biological Exposure Indices—BEI) have recently been established. These bioassays show “internal exposure” of the organism, that is, total exposure of the body in both the work and living environments by all portals of entry (see “Toxicology test methods: Biomarkers”).
Combined Effects Due to Multiple Exposure
People in the work and/or living environment are usually exposed simultaneously or consecutively to various physical and chemical agents. It is also necessary to take into consideration that some persons use medications, smoke, and consume alcohol and food containing additives, and so on. That means that multiple exposure is usually occurring. Physical and chemical agents can interact in each step of the toxicokinetic and/or toxicodynamic processes, producing three possible outcomes: independent effects, synergistic effects (additive or potentiating) and antagonistic effects.
However, studies on combined effects are rare. This kind of study is very complex due to the combination of various factors and agents.
We can conclude that when the human organism is exposed to two or more toxicants simultaneously or consecutively, it is necessary to consider the possibility of some combined effects, which can increase or decrease the rate of toxicokinetic processes.
Toxic metals and organometallic compounds such as aluminium, antimony, inorganic arsenic, beryllium, cadmium, chromium, cobalt, lead, alkyl lead, metallic mercury and its salts, organic mercury compounds, nickel, selenium and vanadium have all been recognized for some time as posing potential health risks to exposed persons. In some cases, the relationships between internal dose and resulting effect or response in occupationally exposed workers have been studied, thus permitting the proposal of health-based biological limit values (see table 1).
Table 1. Metals: Reference values and biological limit values proposed by the American Conference of Governmental Industrial Hygienists (ACGIH), Deutsche Forschungsgemeinschaft (DFG), and Lauwerys and Hoet (L and H)
| Metal | Sample | Reference values* (1) | ACGIH (BEI) limit (2) | DFG (BAT) limit (3) | L and H (TMPC) limit (4) |
|---|---|---|---|---|---|
| Aluminium | Serum/plasma | <1 μg/100 ml | | | |
| | Urine | <30 μg/g | | 200 μg/l (end of shift) | 150 μg/g (end of shift) |
| Antimony | Urine | <1 μg/g | | | 35 μg/g (end of shift) |
| Arsenic | Urine (sum of inorganic arsenic and methylated metabolites) | <10 μg/g | 50 μg/g (end of workweek) | 50 μg/g (if TWA: 0.05 mg/m3); 30 μg/g (if TWA: 0.01 mg/m3) (end of shift) | |
| Beryllium | Urine | <2 μg/g | | | |
| Cadmium | Blood | <0.5 μg/100 ml | 0.5 μg/100 ml | 1.5 μg/100 ml | 0.5 μg/100 ml |
| | Urine | <2 μg/g | 5 μg/g | 15 μg/l | 5 μg/g |
| Chromium (soluble compounds) | Serum/plasma | <0.05 μg/100 ml | | | |
| | Urine | <5 μg/g | 30 μg/g (end of shift, end of workweek); 10 μg/g (increase during shift) | | 30 μg/g (end of shift) |
| Cobalt | Serum/plasma | <0.05 μg/100 ml | | | |
| | Blood | <0.2 μg/100 ml | 0.1 μg/100 ml (end of shift, end of workweek) | 0.5 μg/100 ml (EKA)** | |
| | Urine | <2 μg/g | 15 μg/l (end of shift, end of workweek) | 60 μg/l (EKA)** | 30 μg/g (end of shift, end of workweek) |
| Lead | Blood (lead) | <25 μg/100 ml | 30 μg/100 ml (not critical) | female <45 years: 30 μg/100 ml; male: 70 μg/100 ml | 40 μg/100 ml |
| | ZPP in blood | <40 μg/100 ml blood; <2.5 μg/g Hb | | | 40 μg/100 ml blood or 3 μg/g Hb |
| | Urine (lead) | <50 μg/g | | | 50 μg/g |
| | ALA urine | <4.5 mg/g | | female <45 years: 6 mg/l; male: 15 mg/l | 5 mg/g |
| Manganese | Blood | <1 μg/100 ml | | | |
| | Urine | <3 μg/g | | | |
| Mercury (inorganic) | Blood | <1 μg/100 ml | 1.5 μg/100 ml (end of shift, end of workweek) | 5 μg/100 ml | 2 μg/100 ml (end of shift) |
| | Urine | <5 μg/g | 35 μg/g (preshift) | 200 μg/l | 50 μg/g (end of shift) |
| Nickel (soluble compounds) | Serum/plasma | <0.05 μg/100 ml | | | |
| | Urine | <2 μg/g | | 45 μg/l (EKA)** | 30 μg/g |
| Selenium | Serum/plasma | <15 μg/100 ml | | | |
| | Urine | <25 μg/g | | | |
| Vanadium | Serum/plasma | <0.2 μg/100 ml | | | |
| | Blood | <0.1 μg/100 ml | | | |
| | Urine | <1 μg/g | | 70 μg/g creatinine | 50 μg/g |
* Urine values are per gram of creatinine.
** EKA = Exposure equivalents for carcinogenic materials.
1 Taken with some modifications from Lauwerys and Hoet 1993.
2 From ACGIH 1996-97.
3 From DFG 1996.
4 Tentative maximum permissible concentrations (TMPCs) taken from Lauwerys and Hoet 1993.
One problem in seeking precise and accurate measurements of metals in biological materials is that the metallic substances of interest are often present in the media at very low levels. When biological monitoring consists of sampling and analyzing urine, as is often the case, it is usually performed on “spot” samples; correction of the results for the dilution of urine is thus usually advisable. Expression of the results per gram of creatinine is the method of standardization most frequently used. Analyses performed on too dilute or too concentrated urine samples are not reliable and should be repeated.
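As a minimal sketch of the creatinine standardization mentioned above (the function name, the acceptance thresholds for “too dilute or too concentrated” urine and the example values are ours and purely illustrative; laboratories apply their own criteria):

```python
def creatinine_corrected(analyte_ug_per_l: float,
                         creatinine_g_per_l: float,
                         min_creatinine: float = 0.3,
                         max_creatinine: float = 3.0) -> float:
    """Express a spot-urine metal concentration per gram of creatinine.

    Raises ValueError for samples judged too dilute or too concentrated
    to be reliable, which should be recollected.
    """
    if not (min_creatinine <= creatinine_g_per_l <= max_creatinine):
        raise ValueError("Sample too dilute or too concentrated; repeat sampling")
    return analyte_ug_per_l / creatinine_g_per_l  # result in ug/g creatinine

# Hypothetical example: 18 ug/l cadmium in urine containing 1.2 g/l creatinine.
print(round(creatinine_corrected(18.0, 1.2), 1))  # 15.0 ug/g creatinine
```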
Aluminium
In industry, workers may be exposed to inorganic aluminium compounds by inhalation and possibly also by ingestion of dust containing aluminium. Aluminium is poorly absorbed by the oral route, but its absorption is increased by simultaneous intake of citrates. The rate of absorption of aluminium deposited in the lung is unknown; the bioavailability is probably dependent on the physicochemical characteristics of the particle. Urine is the main route of excretion of the absorbed aluminium. The concentration of aluminium in serum and in urine is determined by both the intensity of a recent exposure and the aluminium body burden. In persons non-occupationally exposed, aluminium concentration in serum is usually below 1 μg/100 ml and in urine rarely exceeds 30 μg/g creatinine. In subjects with normal renal function, urinary excretion of aluminium is a more sensitive indicator of aluminium exposure than its concentration in serum/plasma.
Data on welders suggest that the kinetics of aluminium excretion in urine involves a mechanism of two steps, the first one having a biological half-life of about eight hours. In workers who have been exposed for several years, some accumulation of the metal in the body effectively occurs and aluminium concentrations in serum and in urine are also influenced by the aluminium body burden. Aluminium is stored in several compartments of the body and excreted from these compartments at different rates over many years. High accumulation of aluminium in the body (bone, liver, brain) has also been found in patients suffering from renal insufficiency. Patients undergoing dialysis are at risk of bone toxicity and/or encephalopathy when their serum aluminium concentration chronically exceeds 20 μg/100 ml, but it is possible to detect signs of toxicity at even lower concentrations. The Commission of the European Communities has recommended that, in order to prevent aluminium toxicity, the concentration of aluminium in plasma should never exceed 20 μg/100 ml; a level above 10 μg/100 ml should lead to an increased monitoring frequency and health surveillance, and a concentration exceeding 6 μg/100 ml should be considered as evidence of an excessive build-up of the aluminium body burden.
Antimony
Inorganic antimony can enter the organism by ingestion or inhalation, but the rate of absorption is unknown. Absorbed pentavalent compounds are primarily excreted with urine and trivalent compounds via faeces. Retention of some antimony compounds is possible after long-term exposure. Normal concentrations of antimony in serum and urine are probably below 0.1 μg/100 ml and 1 μg/g creatinine, respectively.
A preliminary study on workers exposed to pentavalent antimony indicates that a time-weighted average exposure to 0.5 mg/m3 would lead to an increase in urinary antimony concentration of 35 μg/g creatinine during the shift.
Inorganic Arsenic
Inorganic arsenic can enter the organism via the gastrointestinal and respiratory tracts. The absorbed arsenic is mainly eliminated through the kidney either unchanged or after methylation. Inorganic arsenic is also excreted in the bile as a glutathione complex.
Following a single oral exposure to a low dose of arsenate, 25 and 45% of the administered dose is excreted in urine within one and four days, respectively.
Following exposure to inorganic trivalent or pentavalent arsenic, the urinary excretion consists of 10 to 20% inorganic arsenic, 10 to 20% monomethylarsonic acid, and 60 to 80% cacodylic acid. Following occupational exposure to inorganic arsenic, the proportion of the arsenical species in urine depends on the time of sampling.
The organoarsenicals present in marine organisms are also easily absorbed by the gastrointestinal tract but are excreted for the most part unchanged.
Long-term toxic effects of arsenic (including the toxic effects on genes) result mainly from exposure to inorganic arsenic. Therefore, biological monitoring aims at assessing exposure to inorganic arsenic compounds. For this purpose, the specific determination of inorganic arsenic (Asi), monomethylarsonic acid (MMA), and cacodylic acid (DMA) in urine is the method of choice. However, since seafood consumption might still influence the excretion rate of DMA, the workers being tested should refrain from eating seafood during the 48 hours prior to urine collection.
In persons non-occupationally exposed to inorganic arsenic and who have not recently consumed a marine organism, the sum of these three arsenical species does not usually exceed 10 μg/g urinary creatinine. Higher values can be found in geographical areas where the drinking water contains significant amounts of arsenic.
It has been estimated that in the absence of seafood consumption, a time-weighted average exposure to 50 and 200 μg/m3 inorganic arsenic leads to mean urinary concentrations of the sum of the metabolites (Asi, MMA, DMA) in post-shift urine samples of 54 and 88 μg/g creatinine, respectively.
In the case of exposure to less soluble inorganic arsenic compounds (e.g., gallium arsenide), the determination of arsenic in urine will reflect the amount absorbed but not the total dose delivered to the body (lung, gastrointestinal tract).
Arsenic in hair is a good indicator of the amount of inorganic arsenic absorbed during the growth period of the hair. Organic arsenic of marine origin does not appear to be taken up in hair to the same degree as inorganic arsenic. Determination of arsenic concentration along the length of the hair may provide valuable information concerning the time of exposure and the length of the exposure period. However, the determination of arsenic in hair is not recommended when the ambient air is contaminated by arsenic, as it will not be possible to distinguish between endogenous arsenic and arsenic externally deposited on the hair. Arsenic levels in hair are usually below 1 mg/kg. Arsenic in nails has the same significance as arsenic in hair.
As with urine levels, blood arsenic levels may reflect the amount of arsenic recently absorbed, but the relation between the intensity of arsenic exposure and its concentration in blood has not yet been assessed.
Beryllium
Inhalation is the primary route of beryllium uptake for occupationally exposed persons. Long-term exposure can result in the storage of appreciable amounts of beryllium in lung tissues and in the skeleton, the ultimate site of storage. Elimination of absorbed beryllium occurs mainly via urine and only to a minor degree in the faeces.
Beryllium levels can be determined in blood and urine, but at present these analyses can be used only as qualitative tests to confirm exposure to the metal, since it is not known to what extent the concentrations of beryllium in blood and urine may be influenced by recent exposure and by the amount already stored in the body. Furthermore, it is difficult to interpret the limited published data on the excretion of beryllium in exposed workers, because usually the external exposure has not been adequately characterized and the analytical methods have different sensitivities and precision. Normal urinary and serum levels of beryllium are probably below 2 μg/g creatinine and 0.03 μg/100 ml, respectively.
However, the finding of a normal concentration of beryllium in urine is not sufficient evidence to exclude the possibility of past exposure to beryllium. Indeed, an increased urinary excretion of beryllium has not always been found in workers even though they have been exposed to beryllium in the past and have consequently developed pulmonary granulomatosis, a disease characterized by multiple granulomas, that is, nodules of inflammatory tissue, found in the lungs.
Cadmium
In the occupational setting, absorption of cadmium occurs chiefly through inhalation. However, gastrointestinal absorption may significantly contribute to the internal dose of cadmium. One important characteristic of cadmium is its long biological half-life in the body, exceeding 10 years. In tissues, cadmium is mainly bound to metallothionein. In blood, it is mainly bound to red blood cells. In view of the property of cadmium to accumulate, any biological monitoring programme of population groups chronically exposed to cadmium should attempt to evaluate both the current and the integrated exposure.
By means of neutron activation, it is currently possible to carry out in vivo measurements of the amounts of cadmium accumulated in the main sites of storage, the kidneys and the liver. However, these techniques are not used routinely. So far, in the health surveillance of workers in industry or in large-scale studies on the general population, exposure to cadmium has usually been evaluated indirectly by measuring the metal in urine and blood.
The detailed kinetics of the action of cadmium in humans is not yet fully elucidated, but for practical purposes the following conclusions can be formulated regarding the significance of cadmium in blood and urine. In newly exposed workers, the levels of cadmium in blood increase progressively and after four to six months reach a concentration corresponding to the intensity of exposure. In persons with ongoing exposure to cadmium over a long period, the concentration of cadmium in the blood reflects mainly the average intake during recent months. The relative influence of the cadmium body burden on the cadmium level in the blood may be more important in persons who have accumulated a large amount of cadmium and have been removed from exposure. After cessation of exposure, the cadmium level in blood decreases relatively fast, with an initial half-time of two to three months. Depending on the body burden, the level may, however, remain higher than in control subjects. Several studies in humans and animals have indicated that the level of cadmium in urine can be interpreted as follows: in the absence of acute overexposure to cadmium, and as long as the storage capability of the kidney cortex is not exceeded or cadmium-induced nephropathy has not yet occurred, the level of cadmium in urine increases progressively with the amount of cadmium stored in the kidneys. Under such conditions, which prevail mainly in the general population and in workers moderately exposed to cadmium, there is a significant correlation between urinary cadmium and cadmium in the kidneys. If exposure to cadmium has been excessive, the cadmium-binding sites in the organism become progressively saturated and, despite continuous exposure, the cadmium concentration in the renal cortex levels off.
From this stage on, the absorbed cadmium cannot be further retained in that organ and is rapidly excreted in the urine. At this stage, the concentration of urinary cadmium is influenced by both the body burden and recent intake. If exposure is continued, some subjects may develop renal damage, which gives rise to a further increase of urinary cadmium as a result of the release of cadmium stored in the kidney and depressed reabsorption of circulating cadmium. However, after an episode of acute exposure, cadmium levels in urine may rapidly and briefly increase without reflecting an increase in the body burden.
Recent studies indicate that metallothionein in urine has the same biological significance. Good correlations have been observed between the urinary concentration of metallothionein and that of cadmium, independently of the intensity of exposure and the status of renal function.
The normal levels of cadmium in blood and in urine are usually below 0.5 μg/100 ml and 2 μg/g creatinine, respectively. They are higher in smokers than in nonsmokers. In workers chronically exposed to cadmium, the risk of renal impairment is negligible when urinary cadmium levels never exceed 10 μg/g creatinine. An accumulation of cadmium in the body which would lead to a urinary excretion exceeding this level should be prevented. However, some data suggest that certain renal markers (whose health significance is still unknown) may become abnormal for urinary cadmium values between 3 and 5 μg/g creatinine, so it seems reasonable to propose a lower biological limit value of 5 μg/g creatinine. For blood, a biological limit of 0.5 μg/100 ml has been proposed for long-term exposure. It is possible, however, that in the case of the general population exposed to cadmium via food or tobacco, or in the elderly, who normally suffer a decline of renal function, the critical level in the renal cortex may be lower.
Chromium
The toxicity of chromium is attributable chiefly to its hexavalent compounds, which are absorbed to a greater extent than trivalent compounds. Elimination occurs mainly via urine.
In persons non-occupationally exposed to chromium, the concentration of chromium in serum and in urine usually does not exceed 0.05 μg/100 ml and 2 μg/g creatinine, respectively. Recent exposure to soluble hexavalent chromium salts (e.g., in electroplaters and stainless steel welders) can be assessed by monitoring chromium level in urine at the end of the workshift. Studies carried out by several authors suggest the following relation: a TWA exposure of 0.025 or 0.05 mg/m3 hexavalent chromium is associated with an average concentration at the end of the exposure period of 15 or 30 μg/g creatinine, respectively. This relation is valid only on a group basis. Following exposure to 0.025 mg/m3 hexavalent chromium, the lower 95% confidence limit value is approximately 5 μg/g creatinine. Another study among stainless steel welders has found that a urinary chromium concentration on the order of 40 μg/l corresponds to an average exposure to 0.1 mg/m3 chromium trioxide.
Hexavalent chromium readily crosses cell membranes, but once inside the cell, it is reduced to trivalent chromium. The concentration of chromium in erythrocytes might be an indicator of the exposure intensity to hexavalent chromium during the lifetime of the red blood cells, but this does not apply to trivalent chromium.
To what extent monitoring chromium in urine is useful for health risk estimation remains to be assessed.
Cobalt
Cobalt, once absorbed by inhalation and to some extent via the oral route, is eliminated mainly in urine, with a biological half-life of a few days. Exposure to soluble cobalt compounds leads to an increase of cobalt concentration in blood and urine.
The concentrations of cobalt in blood and in urine are influenced chiefly by recent exposure. In non-occupationally exposed subjects, urinary cobalt is usually below 2 μg/g creatinine and serum/plasma cobalt below 0.05 μg/100 ml.
For TWA exposures of 0.1 mg/m3 and 0.05 mg/m3, mean urinary levels ranging from about 30 to 75 μg/l and 30 to 40 μg/l, respectively, have been reported (using end-of-shift samples). Sampling time is important as there is a progressive increase in the urinary levels of cobalt during the workweek.
In workers exposed to cobalt oxides, cobalt salts, or cobalt metal powder in a refinery, a TWA of 0.05 mg/m3 has been found to lead to an average cobalt concentration of 33 and 46 μg/g creatinine in the urine collected at the end of the shift on Monday and Friday, respectively.
Lead
Inorganic lead, a cumulative toxin absorbed by the lungs and the gastrointestinal tract, is clearly the metal that has been most extensively studied; thus, of all the metal contaminants, the reliability of methods for assessing recent exposure or body burden by biological methods is greatest for lead.
In a steady-state exposure situation, lead in whole blood is considered to be the best indicator of the concentration of lead in soft tissues and hence of recent exposure. However, the increase of blood lead levels (Pb-B) becomes progressively smaller with increasing levels of lead exposure. When occupational exposure has been prolonged, cessation of exposure is not necessarily associated with a return of Pb-B to a pre-exposure (background) value because of the continuous release of lead from tissue depots. The normal blood and urinary lead levels are generally below 20 μg/100 ml and 50 μg/g creatinine, respectively. These levels may be influenced by the dietary habits and the place of residence of the subjects. The WHO has proposed 40 μg/100 ml as the maximal tolerable individual blood lead concentration for adult male workers, and 30 μg/100 ml for women of child-bearing age. In children, lower blood lead concentrations have been associated with adverse effects on the central nervous system. Lead level in urine increases exponentially with increasing Pb-B and under a steady-state situation is mainly a reflection of recent exposure.
The amount of lead excreted in urine after administration of a chelating agent (e.g., CaEDTA) reflects the mobilizable pool of lead. In control subjects, the amount of lead excreted in urine within 24 hours after intravenous administration of one gram of EDTA usually does not exceed 600 μg. It seems that under constant exposure, chelatable lead values reflect mainly the blood and soft tissue lead pool, with only a small fraction derived from bones.
An x-ray fluorescence technique has been developed for measuring lead concentration in bones (phalanges, tibia, calcaneus, vertebrae), but presently the limit of detection of the technique restricts its use to occupationally exposed persons.
Determination of lead in hair has been proposed as a method of evaluating the mobilizable pool of lead. However, in occupational settings, it is difficult to distinguish between lead incorporated endogenously into hair and that simply adsorbed on its surface.
The determination of lead concentration in the circumpulpal dentine of deciduous teeth (baby teeth) has been used to estimate exposure to lead during early childhood.
Parameters reflecting the interference of lead with biological processes can also be used for assessing the intensity of exposure to lead. The biological parameters which are currently used are coproporphyrin in urine (COPRO-U), delta-aminolaevulinic acid in urine (ALA-U), erythrocyte protoporphyrin (EP, or zinc protoporphyrin), delta-aminolaevulinic acid dehydratase (ALA-D), and pyrimidine-5’-nucleotidase (P5N) in red blood cells. In steady-state situations, the changes in these parameters are positively (COPRO-U, ALA-U, EP) or negatively (ALA-D, P5N) correlated with lead blood levels. The urinary excretion of COPRO (mostly the III isomer) and ALA starts to increase when the concentration of lead in blood reaches a value of about 40 μg/100 ml. Erythrocyte protoporphyrin starts to increase significantly at levels of lead in blood of about 35 μg/100 ml in males and 25 μg/100 ml in females. After the termination of occupational exposure to lead, the erythrocyte protoporphyrin remains elevated out of proportion to current levels of lead in blood. In this case, the EP level is better correlated with the amount of chelatable lead excreted in urine than with lead in blood.
Slight iron deficiency will also cause an elevated protoporphyrin concentration in red blood cells. The red blood cell enzymes, ALA-D and P5N, are very sensitive to the inhibitory action of lead. Within the range of blood lead levels of 10 to 40 μg/100 ml, there is a close negative correlation between the activity of both enzymes and blood lead.
Alkyl Lead
In some countries, tetraethyllead and tetramethyllead are used as antiknock agents in automobile fuels. Lead in blood is not a good indicator of exposure to tetraalkyllead, whereas lead in urine seems to be useful for evaluating the risk of overexposure.
Manganese
In the occupational setting, manganese enters the body mainly through the lungs; absorption via the gastrointestinal tract is low and probably depends on a homeostatic mechanism. Manganese elimination occurs through the bile, with only small amounts excreted with urine.
The normal concentrations of manganese in urine, blood, and serum or plasma are usually less than 3 μg/g creatinine, 1 μg/100 ml, and 0.1 μg/100 ml, respectively.
It seems that, on an individual basis, neither manganese in blood nor manganese in urine is correlated with external exposure parameters.
There is apparently no direct relation between manganese concentration in biological material and the severity of chronic manganese poisoning. It is possible that, following occupational exposure to manganese, early adverse central nervous system effects might already be detected at biological levels close to normal values.
Metallic Mercury and its Inorganic Salts
Inhalation represents the main route of uptake of metallic mercury. The gastrointestinal absorption of metallic mercury is negligible. Inorganic mercury salts can be absorbed through the lungs (inhalation of inorganic mercury aerosol) as well as the gastrointestinal tract. The cutaneous absorption of metallic mercury and its inorganic salts is possible.
The biological half-life of mercury is of the order of two months in the kidney but is much longer in the central nervous system.
Inorganic mercury is excreted mainly with the faeces and urine. Small quantities are excreted through salivary, lacrimal and sweat glands. Mercury can also be detected in expired air during the few hours following exposure to mercury vapour. Under chronic exposure conditions there is, at least on a group basis, a relationship between the intensity of recent exposure to mercury vapour and the concentration of mercury in blood or urine. The early investigations, during which static samples were used for monitoring general workroom air, showed that an average mercury concentration in air (Hg–air) of 100 μg/m3 corresponds to average mercury levels in blood (Hg–B) and in urine (Hg–U) of 6 μg Hg/100 ml and 200 to 260 μg/l, respectively. More recent observations, particularly those assessing the contribution of the external micro-environment close to the respiratory tract of the workers, indicate that the air (μg/m3) / urine (μg/g creatinine) / blood (μg/100 ml) mercury relationship is approximately 1 / 1.2 / 0.045. Several epidemiological studies on workers exposed to mercury vapour have demonstrated that for long-term exposure, the critical effect levels of Hg–U and Hg–B are approximately 50 μg/g creatinine and 2 μg/100 ml, respectively.
However, some recent studies seem to indicate that signs of adverse effects on the central nervous system or the kidney can already be observed at a urinary mercury level below 50 μg/g creatinine.
Normal urinary and blood levels are generally below 5 μg/g creatinine and 1 μg/100 ml, respectively. These values can be influenced by fish consumption and the number of mercury amalgam fillings in the teeth.
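The approximate air/urine/blood relationship of 1 / 1.2 / 0.045 quoted above can be used, on a group basis only, to translate an airborne mercury concentration into the corresponding average biological levels. A minimal sketch, assuming simple proportionality and using a hypothetical input value:

```python
# Group-level conversion based on the approximate ratio cited in the text:
# air (ug/m3) : urine (ug/g creatinine) : blood (ug/100 ml)  ~  1 : 1.2 : 0.045
# Illustrative only; individual values scatter widely around these averages.

def mercury_estimates(air_ug_m3: float) -> dict:
    """Return the average urinary and blood mercury levels expected,
    on a group basis, for a given airborne mercury concentration."""
    return {
        "air_ug_m3": air_ug_m3,
        "urine_ug_g_creatinine": 1.2 * air_ug_m3,
        "blood_ug_100ml": 0.045 * air_ug_m3,
    }

if __name__ == "__main__":
    # Hypothetical example: an exposure around 40 ug/m3 corresponds, on average,
    # to roughly 48 ug/g creatinine in urine and 1.8 ug/100 ml in blood,
    # i.e., close to the critical effect levels quoted above (50 and 2).
    print(mercury_estimates(40))
```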
Organic Mercury Compounds
The organic mercury compounds are easily absorbed by all the routes. In blood, they are to be found mainly in red blood cells (around 90%). A distinction must be made, however, between the short chain alkyl compounds (mainly methylmercury), which are very stable and are resistant to biotransformation, and the aryl or alkoxyalkyl derivatives, which liberate inorganic mercury in vivo. For the latter compounds, the concentration of mercury in blood, as well as in urine, is probably indicative of the exposure intensity.
Under steady-state conditions, mercury in whole blood and in hair correlates with methylmercury body burden and with the risk of signs of methylmercury poisoning. In persons chronically exposed to alkyl mercury, the earliest signs of intoxication (paresthesia, sensory disturbances) may occur when the level of mercury in blood and in hair exceeds 20 μg/100 ml and 50 μg/g, respectively.
Nickel
Nickel is not a cumulative toxin; almost all of the absorbed amount is excreted, mainly via the urine, with a biological half-life of 17 to 39 hours. In non-occupationally exposed subjects, the urine and plasma concentrations of nickel are usually below 2 μg/g creatinine and 0.05 μg/100 ml, respectively.
The concentrations of nickel in plasma and in urine are good indicators of recent exposure to metallic nickel and its soluble compounds (e.g., during nickel electroplating or nickel battery production). Values within normal ranges usually indicate nonsignificant exposure and increased values are indicative of overexposure.
For workers exposed to soluble nickel compounds, a biological limit value of 30 μg/g creatinine (end of shift) has been tentatively proposed for nickel in urine.
In workers exposed to slightly soluble or insoluble nickel compounds, increased levels in body fluids generally indicate significant absorption or progressive release from the amount stored in the lungs; however, significant amounts of nickel may be deposited in the respiratory tract (nasal cavities, lungs) without any significant elevation of its plasma or urine concentration. Therefore, “normal” values have to be interpreted cautiously and do not necessarily indicate absence of health risk.
Selenium
Selenium is an essential trace element. Soluble selenium compounds seem to be easily absorbed through the lungs and the gastrointestinal tract. Selenium is mainly excreted in urine, but when exposure is very high it can also be excreted in exhaled air as dimethylselenide vapour. Normal selenium concentrations in serum and urine depend on daily intake, which may vary considerably in different parts of the world; they are usually below 15 μg/100 ml and 25 μg/g creatinine, respectively. The concentration of selenium in urine is mainly a reflection of recent exposure. The relationship between the intensity of exposure and selenium concentration in urine has not yet been established.
It seems that the concentration in plasma (or serum) and urine mainly reflects short-term exposure, whereas the selenium content of erythrocytes reflects more long-term exposure.
Measuring selenium in blood or urine gives some information on selenium status. Currently it is more often used to detect a deficiency rather than an overexposure. Since the available data concerning the health risk of long-term exposure to selenium and the relationship between potential health risk and levels in biological media are too limited, no biological threshold value can be proposed.
Vanadium
In industry, vanadium is absorbed mainly via the pulmonary route. Oral absorption seems low (less than 1%). Vanadium is excreted in urine with a biological half-life of about 20 to 40 hours, and to a minor degree in faeces. Urinary vanadium seems to be a good indicator of recent exposure, but the relationship between uptake and vanadium levels in urine has not yet been sufficiently established. It has been suggested that the difference between post-shift and pre-shift urinary concentrations of vanadium permits the assessment of exposure during the workday, whereas urinary vanadium two days after cessation of exposure (Monday morning) would reflect accumulation of the metal in the body. In non-occupationally exposed persons, vanadium concentration in urine is usually below 1 μg/g creatinine. A tentative biological limit value of 50 μg/g creatinine (end of shift) has been proposed for vanadium in urine.
Origins
Standardization in the field of ergonomics has a relatively short history. It started at the beginning of the 1970s, when the first committees were founded at the national level (e.g., in Germany within the standardization institute DIN), and it continued at the international level after the foundation of the ISO (International Organization for Standardization) TC (Technical Committee) 159 “Ergonomics” in 1975. Ergonomics standardization now also takes place at the regional level, for example, at the European level within the CEN (Comité européen de normalisation), which established its TC 122 “Ergonomics” in 1987. The existence of the latter committee underscores the fact that one of the important reasons for establishing committees for the standardization of ergonomics knowledge and principles can be found in legal (and quasi-legal) regulations, especially with respect to safety and health, which require the application of ergonomics principles and findings in the design of products and work systems. National laws requiring the application of well-established ergonomics findings were the reason for the establishment of the German ergonomics committee in 1970, and European Directives, especially the Machinery Directive (relating to safety standards), were responsible for establishing an ergonomics committee on the European level. Since legal regulations usually are not, cannot be and should not be very specific, the task of specifying which ergonomics principles and findings should be applied was given to, or taken up by, ergonomics standardization committees. Especially on the European level, it can be recognized that ergonomics standardization can contribute to providing broad and comparable conditions of machinery safety, thus removing barriers to the free trade of machinery within the continent itself.
Perspectives
Ergonomics standardization thus started with a strongly protective, albeit preventive, perspective: ergonomics standards were developed with the aim of protecting workers against adverse effects at different levels of health protection.
International standardization, on the other hand, which was not so closely coupled to legislation, has always also tried to open a perspective in the direction of producing standards which go beyond the prevention of and protection against adverse effects (e.g., by specifying minimal/maximal values) and instead proactively provide for optimal working conditions to promote the well-being and personal development of the worker, as well as the effectiveness, efficiency, reliability and productivity of the work system.
This is a point where it becomes evident that ergonomics, and especially ergonomics standardization, has very distinct social and political dimensions. Whereas the protective approach with respect to safety and health is generally accepted and agreed upon among the parties involved (employers, unions, administration and ergonomics experts) for all levels of standardization, the proactive approach is not equally accepted by all parties in the same way. This might be due to the fact that, especially where legislation requires the application of ergonomics principles (and thus either explicitly or implicitly the application of ergonomics standards), some parties feel that such standards might limit their freedom of action or negotiation. Since international standards are less compelling (transferring them into the body of national standards is at the discretion of the national standardization committees) the proactive approach has been developed furthest at the international level of ergonomics standardization.
The fact that certain regulations would indeed restrict the discretion of those to whom they applied served to discourage standardization in certain areas, for example in connection with the European Directives under Article 118a of the Single European Act, relating to safety and health in the use and operation of machinery at the workplace, and in the design of work systems and workplace design. On the other hand, under the Directives issued under Article 100a, relating to safety and health in the design of machinery with regard to the free trade of this machinery within the European Union (EU), European ergonomics standardization is mandated by the European Commission.
From an ergonomics point of view, however, it is difficult to understand why ergonomics in the design of machinery should be different from that in the use and operation of machinery within a work system. It is thus to be hoped that the distinction will be given up in the future, since it seems to be more detrimental than beneficial to the development of a consistent body of ergonomics standards.
Types of Ergonomics Standards
The first international ergonomics standard to have been developed (based on a German DIN national standard) is ISO 6385, “Ergonomic principles in the design of work systems”, published in 1981. It is the basic standard of the ergonomics standards series and set the stage for the standards which followed by defining the basic concepts and stating the general principles of the ergonomic design of work systems, including tasks, tools, machinery, workstations, work space, work environment and work organization. This international standard, which is now undergoing revision, is a guideline standard, and as such provides guidelines to be followed. It does not, however, provide technical or physical specifications which have to be met. These can be found in a different type of standards, that is, specification standards, for example, those on anthropometry or thermal conditions. Both types of standards fulfil different functions. While guideline standards intend to show their users “what to do and how to do it” and indicate those principles that must or should be observed, for example, with respect to mental workload, specification standards provide users with detailed information about safety distances or measurement procedures, for example, that have to be met and where compliance with these prescriptions can be tested by specified procedures. This is not always possible with guideline standards, although despite their relative lack of specificity it can usually be demonstrated when and where guidelines have been violated. A subset of specification standards are “database” standards, which provide the user with relevant ergonomics data, for example, body dimensions.
CEN standards are classified as A-, B- and C-type standards, depending on their scope and field of application. A-type standards are general, basic standards which apply to all kinds of applications, B-type standards are specific for an area of application (which means that most of the ergonomics standards within the CEN will be of this type), and C-type standards are specific for a certain kind of machinery, for example, hand-held drilling machines.
Standardization Committees
Ergonomics standards, like other standards, are produced in the appropriate technical committees (TCs), their subcommittees (SCs) or working groups (WGs). For the ISO this is TC 159, for CEN it is TC 122, and on the national level, the respective national committees. Besides the ergonomics committees, ergonomics is also dealt with in TCs working on machine safety (e.g., CEN TC 114 and ISO TC 199) with which liaison and close cooperation is maintained. Liaisons are also established with other committees for which ergonomics might be of relevance. Responsibility for ergonomics standards, however, is reserved to the ergonomics committees themselves.
A number of other organizations are engaged in the production of ergonomics standards, such as the IEC (International Electrotechnical Commission); CENELEC, or the respective national committees in the electrotechnical field; the CCITT (Comité consultatif international télégraphique et téléphonique) or ETSI (European Telecommunications Standards Institute) in the field of telecommunications; ECMA (European Computer Manufacturers Association) in the field of computer systems; and CAMAC (Computer Assisted Measurement and Control Association) in the field of new technologies in manufacturing, to name only a few. With some of these the ergonomics committees do have liaisons in order to avoid duplication of work or inconsistent specifications; with some organizations (e.g., the IEC) joint technical committees are even established for cooperation in areas of mutual interest. With other committees, however, there is no coordination or cooperation at all. The main purpose of these committees is to produce (ergonomics) standards that are specific to their field of activity. Since the number of such organizations at the different levels is rather large, it becomes quite complicated (if not impossible) to carry out a complete overview of ergonomics standardization. The present review will therefore be restricted to ergonomics standardization in the international and European ergonomics committees.
Structure of Standardization Committees
Ergonomics standardization committees are quite similar to one another in structure. Usually one TC within a standardization organization is responsible for ergonomics. This committee (e.g., ISO TC 159) mainly has to do with decisions about what should be standardized (e.g., work items) and how to organize and coordinate the standardization within the committee, but usually no standards are prepared at this level. Below the TC level are other committees. For example, the ISO has subcommittees (SCs), which are responsible for a defined field of standardization: SC 1 for general ergonomic guiding principles, SC 3 for anthropometry and biomechanics, SC 4 for human-system interaction and SC 5 for the physical work environment. CEN TC 122 has working groups (WGs) below the TC level which are so constituted as to deal with specified fields within ergonomics standardization. SCs within ISO TC 159 operate as steering committees for their field of responsibility and do the first voting, but usually they do not also prepare standards. This is done in their WGs, which are composed of experts nominated by their national committees, whereas SC and TC meetings are attended by national delegations representing national points of view. Within the CEN, duties are not sharply distinguished at the WG level; WGs operate both as steering and production committees, although a good deal of work is accomplished in ad hoc groups, which are composed of members of the WG (nominated by their national committees) and established to prepare the drafts for a standard. WGs within an ISO SC are established to do the practical standardization work, that is, prepare drafts, work on comments, identify needs for standardization, and prepare proposals to the SC and TC, which will then take the appropriate decisions or actions.
Preparation of Ergonomics Standards
The preparation of ergonomics standards has changed quite markedly over recent years in view of the stronger emphasis now being placed on European and other international developments. In the beginning, national standards, which had been prepared by experts from one country in their national committee and agreed upon by the interested parties among the general public of that country in a specified voting procedure, were transferred as input to the responsible SC and WG of ISO TC 159, after a formal vote had been taken at the TC level that such an international standard should be prepared. The working group, composed of ergonomics experts (and experts from politically interested parties) from all participating member bodies (i.e., the national standardization organizations) of TC 159 who were willing to cooperate in this work project, would then work on any inputs and prepare a working draft (WD). After this draft proposal is agreed upon in the WG, it becomes a committee draft (CD), which is distributed to the member bodies of the SC for approval and comments. If the draft receives substantial support from the SC member bodies (i.e., if at least two-thirds vote in favour) and after comments by the national committees have been incorporated by the WG into the improved version, a Draft International Standard (DIS) is submitted for voting to all members of TC 159. If substantial support is achieved at this step from the member bodies of the TC (perhaps after incorporating editorial changes), this version is then published as an International Standard (IS) by the ISO. Voting of the member bodies at the TC and SC level is based on voting at the national level, and comments can be supplied through the member bodies by experts or interested parties in each country. The procedure is roughly equivalent in CEN TC 122, with the exceptions that there are no SCs below the TC level and that voting takes place with weighted votes (according to the size of the country), whereas within the ISO the rule is one country, one vote. If a draft fails at any step, it has to be revised and pass through the voting procedure again, unless the WG decides that an agreeable revision cannot be achieved.
International standards are then transferred into national standards if the national committees vote accordingly. By contrast, European Standards (ENs) have to be transferred into national standards by the CEN members and conflicting national standards have to be withdrawn. That means that harmonized ENs will be effective in all CEN countries (and, due to their influence on trade, will be relevant to manufacturers in all other countries who intend to sell goods to a customer in a CEN country).
ISO-CEN Cooperation
In order to avoid conflicting standards and duplication of work and to allow non-CEN members to take part in developments in the CEN, a cooperative agreement between the ISO and the CEN has been achieved (the so-called Vienna Agreement) which regulates the formalities and provides for a so-called parallel voting procedure, which allows the same drafts to be voted upon in the CEN and the ISO in parallel, if the responsible committees agree to do so. Among the ergonomics committees the tendency is quite clear: avoid duplication of work (manpower and financial resources are too limited), avoid conflicting specifications, and try to achieve a consistent body of ergonomics standards based on a division of labour. Whereas CEN TC 122 is bound by the decisions of the EU administration and gets mandated work items to stipulate the specifications of European directives, ISO TC 159 is free to standardize whatever it thinks necessary or appropriate in the field of ergonomics. This has led to shifts in the emphasis of both committees, with the CEN concentrating on machinery and safety-related topics and the ISO concentrating on areas where broader market interests than Europe are concerned (e.g., work with VDUs and control-room design for process and related industries); on areas where the operation of machinery is concerned, as in work system design; and on such areas as work environment and work organization as well. The intention, however, is to transfer work results from the CEN to the ISO, and vice versa, in order to build up a body of consistent ergonomics standards which in fact are effective all over the world.
The formal procedure of producing standards is still the same today. But since the emphasis has shifted more and more to the international or the European level, more and more activities are being transferred to these committees. Drafts are now usually worked out directly in these committees and are no longer based on existing national standards. After the decision has been made that a standard should be developed, work directly starts at one of these supranational levels, based on whatever input there may be available, sometimes starting from zero. This changes the role of the national ergonomics committees quite dramatically. While heretofore they formally developed their own national standards according to their national rules, they now have the task of observing and influencing standardization on the supranational levels—via the experts who work out the standards or via comments made at the different steps of voting (within the CEN, a national standardization project will be halted if a comparable project is being simultaneously worked on at the CEN level). This makes the task still more complicated, since this influence can only be exerted indirectly and since the preparation of ergonomics standards is not just a matter of pure science but a matter of bargaining, consensus and agreement (not least due to the political implications which the standard might have). This, of course, is one of the reasons why the process of producing an international or European ergonomics standard usually takes several years and why ergonomics standards cannot reflect the latest state of the art in ergonomics. International ergonomics standards thus have to be examined every five years, and, if necessary, undergo revision.
Fields of Ergonomics Standardization
International ergonomics standardization started with guidelines on the general principles of ergonomics in the design of work systems; they were laid down in ISO 6385, which is now under revision in order to incorporate new developments. The CEN has produced a similar basic standard (EN 614, Part 1, 1994)—this is oriented more to machinery and safety—and is preparing a standard with guidelines on task design as a second part of this basic standard. The CEN thus emphasizes the importance of operator tasks in the design of machinery or work systems, for which appropriate tools or machinery have to be designed.
Another area where concepts and guidelines have been laid down in standards is the field of mental workload. ISO 10075, Part 1, defines terms and concepts (e.g., fatigue, monotony, reduced vigilance), and Part 2 (at the stage of a DIS in the latter half of the 1990s) provides guidelines for the design of work systems with respect to mental workload in order to avoid impairments.
SC 3 of ISO TC 159 and WG 1 of CEN TC 122 produce standards on anthropometry and biomechanics, covering, among other topics, methods of anthropometric measurements, body dimensions, safety distances and access dimensions, the evaluation of working postures and the design of workplaces in relation to machinery, recommended limits of physical strength and problems of manual handling.
SC 4 of ISO TC 159 shows how technological and social changes affect ergonomics standardization and the programme of such a subcommittee. SC 4 started as “Signals and Controls” by standardizing principles for displaying information and designing control actuators, with one of its work items being the visual display unit (VDU), used for office tasks. It soon became apparent, however, that standardizing the ergonomics of VDUs would not be sufficient, and that standardization “around” this workstation—in the sense of a work system—was required, covering areas such as hardware (e.g., the VDU itself, including displays, keyboards, non-keyboard input devices, workstations), work environment (e.g., lighting), work organization (e.g., task requirements), and software (e.g., dialogue principles, menu and direct manipulation dialogues). This led to a multipart standard (ISO 9241) covering “ergonomic requirements for office work with VDUs”, which at present comprises 17 parts, 3 of which have already reached IS status. This standard will be transferred to the CEN (as EN 29241), which will specify requirements for the VDU directive (90/270/EEC) of the EU—although this is a directive under article 118a of the Single European Act. This series of standards provides guidelines as well as specifications, depending on the subject of the given part of the standard, and introduces a new concept of standardization, the user performance approach, which might help to solve some of the problems in ergonomics standardization. It is described more fully in the chapter Visual Display Units.
The user performance approach is based on the idea that the aim of standardization is to prevent impairment and to provide for optimal working conditions for the operator, not to establish technical specifications per se. Specification is thus regarded only as a means to the end of unimpaired, optimal user performance. The important thing is to achieve this unimpaired performance of the operator, regardless of whether a certain physical specification is met. This requires, first, that the unimpaired operator performance which has to be achieved, for example, reading performance on a VDU, be specified, and second, that technical specifications be developed which will enable the desired performance to be achieved, based on the available evidence. The manufacturer is then free to follow these technical specifications, which will ensure that the product complies with the ergonomics requirements. Alternatively, the manufacturer may demonstrate, by comparison with a product that is known to fulfil the requirements (either by compliance with the technical specifications of the standard or by proven performance), that the new product fulfils the performance requirements equally well or better than the reference product, with or without compliance with the technical specifications of the standard. A test procedure which has to be followed for demonstrating conformance with the user performance requirements of the standard is specified in the standard.
This approach helps to overcome two problems. Standards, by virtue of their specifications, which are based on the state of the art (and technology) at the time of preparation of the standard, can restrict new developments. Specifications that are based on a certain technology (e.g., cathode-ray tubes) may be inappropriate for other technologies. Independently of technology, however, the user of a display device (for instance) should be able to read and understand the information displayed effectively and efficiently without any impairments, irrespective of whatever technique may be used. Performance in this case must, however, not be restricted to the pure output (as measured in terms of speed or accuracy) but must include considerations of comfort and effort as well.
The second problem that can be dealt with by this approach is the problem of interactions between conditions. Physical specification usually is unidimensional, leaving other conditions out of consideration. In the case of interactive effects, however, this can be misleading or even wrong. By specifying performance requirements, on the other hand, and leaving the means to achieve these to the manufacturer, any solution that satisfies these performance requirements will be acceptable. Treating specification as a means to an end thus represents a genuine ergonomic perspective.
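The two routes to conformance described above (compliance with the technical specifications of the standard, or demonstration of user performance at least equal to that of a reference product known to conform) can be summarized schematically. The following sketch is purely illustrative; the class names, the single performance score and the comparison rule are assumptions and do not reproduce any actual test procedure defined in ISO 9241.

```python
# Schematic sketch of the two conformance routes described above. All names,
# the single composite performance score and the comparison rule are
# illustrative assumptions, not the standard's actual test procedure.

from dataclasses import dataclass

@dataclass
class TestResult:
    meets_technical_specs: bool
    performance_score: float   # hypothetical composite of speed, accuracy, comfort

def conforms(candidate: TestResult, reference: TestResult) -> bool:
    """Conform either via the technical specifications or via user performance
    at least as good as a reference product known to fulfil the requirements."""
    if candidate.meets_technical_specs:
        return True
    return candidate.performance_score >= reference.performance_score

if __name__ == "__main__":
    reference = TestResult(meets_technical_specs=True, performance_score=0.82)
    new_display = TestResult(meets_technical_specs=False, performance_score=0.85)
    print(conforms(new_display, reference))   # True, via the performance route
```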
Another standard with a work system approach is under preparation in SC 4, which relates to the design of control rooms, for instance, for process industries or power stations. A multipart standard (ISO 11064) is expected to be prepared as a result, with the different parts dealing with such aspects of control-room design as layout, operator workstation design, and the design of displays and input devices for process control. Because these work items and the approach taken clearly exceed problems of the design of “displays and controls”, SC 4 has been renamed “Human-System Interaction”.
Environmental problems, especially those relating to thermal conditions and communication in noisy environments, are dealt with in SC 5, where standards have been or are being prepared on measurement methods, methods for the estimation of heat stress, conditions of thermal comfort, metabolic heat production, and on auditory and visual danger signals, speech interference level and the assessment of speech communication.
CEN TC 122 covers roughly the same fields of ergonomics standardization, although with a different emphasis and a different structure of its working groups. It is intended, however, that by a division of labour between the ergonomics committees, and mutual acceptance of work results, a general and usable set of ergonomics standards will be developed.
The priority objective of occupational and environmental toxicology is to improve the prevention or substantial limitation of health effects of exposure to hazardous agents in the general and occupational environments. To this end systems have been developed for quantitative risk assessment related to a given exposure (see the section “Regulatory toxicology”).
The effects of a chemical on particular systems and organs are related to the magnitude of exposure and whether exposure is acute or chronic. In view of the diversity of toxic effects even within one system or organ, a uniform philosophy concerning the critical organ and critical effect has been proposed for the purpose of risk assessment and development of health-based recommended concentration limits of toxic substances in different environmental media.
From the point of view of preventive medicine, it is of particular importance to identify early adverse effects, based on the general assumption that preventing or limiting early effects may prevent more severe health effects from developing.
Such an approach has been applied to heavy metals. Although heavy metals such as lead, cadmium and mercury belong to a specific group of toxic substances whose chronic effects depend on their accumulation in the organs, the definitions presented below were published by the Task Group on Metal Toxicity (Nordberg 1976).
The definition of the critical organ as proposed by the Task Group on Metal Toxicity has been adopted with a slight modification: the word metal has been replaced with the expression potentially toxic substance (Duffus 1993).
Whether a given organ or system is regarded as critical depends not only on the mechanism of toxicity of the hazardous agent but also on the route of absorption and the exposed population.
The biological meaning of a subcritical effect is sometimes not known; it may represent a biomarker of exposure, an index of adaptation or a precursor of the critical effect (see “Toxicology test methods: Biomarkers”). The latter possibility is particularly significant for preventive activities.
Table 1 displays examples of critical organs and effects for different chemicals. In chronic environmental exposure to cadmium, where the route of absorption is of minor importance (cadmium air concentrations range from 10 to 20 μg/m3 in urban areas and 1 to 2 μg/m3 in rural areas), the critical organ is the kidney. In the occupational setting, where the TLV reaches 50 μg/m3 and inhalation constitutes the main route of exposure, two organs, the lung and the kidney, are regarded as critical.
Table 1. Examples of critical organs and critical effects
Substance | Critical organ in chronic exposure | Critical effect
Cadmium | Lungs | Nonthreshold: lung cancer (unit risk 4.6 x 10^-3)
Cadmium | Kidney | Threshold: increased excretion of low molecular weight proteins (β2-M, RBP) in urine
Cadmium | Lungs | Emphysema, slight function changes
Lead | Haematopoietic system (adults) | Increased delta-aminolevulinic acid excretion in urine (ALA-U); increased concentration of free erythrocyte protoporphyrin (FEP) in erythrocytes
Lead | Peripheral nervous system (adults) | Slowing of the conduction velocities of the slower nerve fibres
Mercury (elemental) | Central nervous system (young children) | Decrease in IQ and other subtle effects; mercurial tremor (fingers, lips, eyelids)
Mercury (mercuric) | Kidney | Proteinuria
Manganese | Central nervous system (adults) | Impairment of psychomotor functions
Manganese | Lungs (children) | Respiratory symptoms
Manganese | Central nervous system (children) | Impairment of psychomotor functions
Toluene | Mucous membranes | Irritation
Vinyl chloride | Liver | Cancer (angiosarcoma; unit risk 1 x 10^-6)
Ethyl acetate | Mucous membranes | Irritation
For lead, the critical organs in adults are the haematopoietic and peripheral nervous systems, where the critical effects (e.g., elevated free erythrocyte protoporphyrin concentration (FEP), increased excretion of delta-aminolevulinic acid in urine, or impaired peripheral nerve conduction) manifest when the blood lead level (an index of lead absorption in the system) approaches 200 to 300 μg/l. In small children the critical organ is the central nervous system (CNS), and the symptoms of dysfunction detected with the use of a psychological test battery have been found to appear in the examined populations even at concentrations in the range of about 100 μg/l Pb in blood.
A number of other definitions have been formulated which may better reflect the meaning of the notion. According to WHO (1989), the critical effect has been defined as “the first adverse effect which appears when the threshold (critical) concentration or dose is reached in the critical organ. Adverse effects, such as cancer, with no defined threshold concentration are often regarded as critical. Decision on whether an effect is critical is a matter of expert judgement.” In the International Programme on Chemical Safety (IPCS) guidelines for developing Environmental Health Criteria Documents, the critical effect is described as “the adverse effect judged to be most appropriate for determining the tolerable intake”. The latter definition has been formulated directly for the purpose of evaluating the health-based exposure limits in the general environment. In this context the most essential seems to be determining which effect can be regarded as an adverse effect. Following current terminology, the adverse effect is the “change in morphology, physiology, growth, development or lifespan of an organism which results in impairment of the capacity to compensate for additional stress or increase in susceptibility to the harmful effects of other environmental influences. Decision on whether or not any effect is adverse requires expert judgement.”
Figure 1 displays hypothetical dose-response curves for different effects. In the case of exposure to lead, A can represent a subcritical effect (inhibition of erythrocyte ALA-dehydratase), B the critical effect (an increase in erythrocyte zinc protoporphyrin or an increase in the excretion of delta-aminolevulinic acid), C the clinical effect (anaemia) and D the fatal effect (death). For lead exposure there is abundant evidence illustrating how particular effects of exposure depend on lead concentration in blood (the practical counterpart of the dose), either in the form of the dose-response relationship or in relation to different variables (sex, age, etc.). Determining the critical effects and the dose-response relationship for such effects in humans makes it possible to predict the frequency of a given effect for a given dose or its counterpart (concentration in biological material) in a certain population.
Figure 1. Hypothetical dose-response curves for various effects
The critical effects can be of two types: those considered to have a threshold and those for which there may be some risk at any exposure level (non-threshold effects, such as genotoxic carcinogens and germ mutagens). Whenever possible, appropriate human data should be used as a basis for the risk assessment. In order to determine the threshold effects for the general population, assumptions concerning the exposure level (tolerable intake, biomarkers of exposure) have to be made such that the frequency of the critical effect in the population exposed to a given hazardous agent corresponds to the frequency of that effect in the general population. In lead exposure, the maximum recommended blood lead concentration for the general population (200 μg/l, median below 100 μg/l) (WHO 1987) is practically below the threshold value for the assumed critical effect, the elevated free erythrocyte protoporphyrin level, although it is not below the level associated with effects on the CNS in children or on blood pressure in adults. In general, if data from well-conducted human population studies defining a no observed adverse effect level are the basis for safety evaluation, then an uncertainty factor of ten has been considered appropriate. In the case of occupational exposure, the critical effects may refer to a certain part of the population (e.g., 10%). Accordingly, in occupational lead exposure the recommended health-based blood lead concentration has been adopted as 400 μg/l in men, where a 10% response level for ALA-U of 5 mg/l occurred at PbB concentrations of about 300 to 400 μg/l. For occupational exposure to cadmium (assuming increased urinary excretion of low molecular weight proteins to be the critical effect), a level of 200 ppm cadmium in the renal cortex has been regarded as the admissible value, since this effect has been observed in 10% of the exposed population. Both of these values are under consideration for lowering in many countries at the present time (i.e., 1996).
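The threshold logic outlined above, in which a no observed adverse effect level (NOAEL) from well-conducted human studies is divided by an uncertainty factor (a factor of ten is mentioned above) to obtain a tolerable intake, can be expressed as a short sketch. The function name and the numerical example are hypothetical.

```python
# Minimal sketch of the threshold approach described above: a tolerable intake
# is derived by dividing a no observed adverse effect level (NOAEL) from human
# data by an uncertainty factor (a factor of ten is quoted in the text for
# well-conducted human population studies). Numbers are illustrative only.

def tolerable_intake(noael: float, uncertainty_factor: float = 10.0) -> float:
    """Return the tolerable intake in the same units as the NOAEL."""
    return noael / uncertainty_factor

if __name__ == "__main__":
    # Hypothetical example: NOAEL of 1.0 mg/kg body weight per day
    print(tolerable_intake(1.0))   # -> 0.1 mg/kg body weight per day
```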
There is no clear consensus on the appropriate methodology for the risk assessment of chemicals for which the critical effect may not have a threshold, such as genotoxic carcinogens. A number of approaches, based largely on characterization of the dose-response relationship, have been adopted for the assessment of such effects. Owing to the lack of socio-political acceptance of the health risk caused by carcinogens, documents such as the Air Quality Guidelines for Europe (WHO 1987) present for non-threshold effects only values such as the unit lifetime risk (i.e., the risk associated with lifetime exposure to 1 μg/m3 of the hazardous agent) (see “Regulatory toxicology”).
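Under the linear, non-threshold assumption implicit in such unit risk values, the excess lifetime risk scales in proportion to the lifetime average concentration. A minimal sketch, using the cadmium lung cancer unit risk from table 1 and a hypothetical concentration:

```python
# Linear no-threshold sketch: excess lifetime risk = unit risk x lifetime
# average concentration (ug/m3). The unit risk for cadmium lung cancer
# (4.6e-3 per ug/m3) is taken from table 1; the concentration is hypothetical.

def excess_lifetime_risk(unit_risk_per_ug_m3: float, conc_ug_m3: float) -> float:
    return unit_risk_per_ug_m3 * conc_ug_m3

if __name__ == "__main__":
    cadmium_unit_risk = 4.6e-3     # per ug/m3, lifetime exposure
    concentration = 0.01           # ug/m3, hypothetical lifetime average
    risk = excess_lifetime_risk(cadmium_unit_risk, concentration)
    print(f"Excess lifetime risk: {risk:.1e}")
    # -> about 4.6e-05, i.e., roughly 5 extra cases per 100,000 people exposed
```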
At present, the basic step in undertaking risk assessment activities is determining the critical organ and critical effects. The definitions of both the critical and the adverse effect reflect the responsibility of deciding which of the effects within a given organ or system should be regarded as critical, and this is directly related to the subsequent determination of recommended values for a given chemical in the general environment, for example, the Air Quality Guidelines for Europe (WHO 1987), or of health-based limits in occupational exposure (WHO 1980). Determining the critical effect from within the range of subcritical effects may lead to a situation where the recommended limits on toxic chemical concentrations in the general or occupational environment are in practice impossible to maintain. Regarding as critical an effect that overlaps with early clinical effects may result in the adoption of values at which adverse effects may develop in part of the population. The decision whether or not a given effect should be considered critical remains the responsibility of expert groups specializing in toxicity and risk assessment.
Introduction
Organic solvents are volatile and generally soluble in body fat (lipophilic), although some of them, e.g., methanol and acetone, are water soluble (hydrophilic) as well. They have been extensively employed not only in industry but also in consumer products such as paints, inks, thinners, degreasers, dry-cleaning agents, spot removers and repellents. Although biological monitoring can be applied to detect health effects, for example effects on the liver and the kidney, for the purpose of health surveillance of workers occupationally exposed to organic solvents, it is better to use biological monitoring for “exposure” monitoring in order to protect the health of workers from the toxicity of these solvents, because this approach is sensitive enough to give warnings well before any health effects occur. Screening workers for high sensitivity to solvent toxicity may also contribute to the protection of their health.
Summary of Toxicokinetics
Organic solvents are generally volatile under standard conditions, although the volatility varies from solvent to solvent. Thus, the leading route of exposure in industrial settings is through inhalation. The rate of absorption through the alveolar wall of the lungs is much higher than that through the digestive tract wall, and a lung absorption rate of about 50% is considered typical for many common solvents such as toluene. Some solvents, for example, carbon disulphide and N,N-dimethylformamide in the liquid state, can penetrate intact human skin in amounts large enough to be toxic.
When these solvents are absorbed, a portion is exhaled in the breath without any biotransformation, but the greater part is distributed in organs and tissues rich in lipids as a result of their lipophilicity. Biotransformation takes place primarily in the liver (and also in other organs to a minor extent), and the solvent molecule becomes more hydrophilic, typically by a process of oxidation followed by conjugation, to be excreted via the kidney into the urine as metabolite(s). A small portion may be eliminated unchanged in the urine.
Thus, from a practical viewpoint, three biological materials (urine, blood and exhaled breath) are available for exposure monitoring of solvents. Another important factor in selecting a biological material is the speed of disappearance of the absorbed substance, quantified by the biological half-life, i.e., the time needed for a substance to fall to one-half of its original concentration. For example, solvents disappear from exhaled breath much more rapidly than the corresponding metabolites disappear from urine; that is, their half-life in breath is much shorter. Among urinary metabolites, the biological half-life varies depending on how quickly the parent compound is metabolized, so that sampling time in relation to exposure is often of critical importance (see below). A third consideration in choosing a biological material is the specificity of the target chemical to be analysed in relation to the exposure. For example, hippuric acid is a long-used marker of exposure to toluene, but it is formed naturally by the body, can also be derived from non-occupational sources such as some food additives, and is no longer considered a reliable marker when toluene exposure is low (less than 50 cm3/m3). Generally speaking, urinary metabolites have been the most widely used indicators of exposure to various organic solvents. Solvent in blood is analysed as a qualitative measure of exposure because it remains in the blood for only a short time and is therefore more reflective of acute exposure, whereas solvent in exhaled breath is difficult to use for estimating average exposure because the concentration in breath declines rapidly after cessation of exposure. Solvent in urine is a promising candidate as a measure of exposure, but it needs further validation.
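A minimal sketch of how the biological half-life governs the choice of sampling time: assuming simple first-order elimination, the fraction of a marker remaining t hours after the end of exposure is 0.5 raised to the power of t divided by the half-life. The half-lives below are hypothetical round numbers used only for comparison, not values from this article.

```python
# Illustrative only: first-order decay of a biological exposure marker.
# The half-lives below are hypothetical assumptions used for comparison.

def fraction_remaining(hours_since_exposure: float, half_life_hours: float) -> float:
    """Fraction of the initial concentration left after first-order elimination."""
    return 0.5 ** (hours_since_exposure / half_life_hours)

breath_half_life = 0.5    # hours (hypothetical: solvent in exhaled breath)
urine_half_life = 12.0    # hours (hypothetical: urinary metabolite)

for t in (1, 4, 16):
    print(f"{t:>2} h after exposure: "
          f"breath {fraction_remaining(t, breath_half_life):.3f}, "
          f"urinary metabolite {fraction_remaining(t, urine_half_life):.3f}")
```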
Biological Exposure Tests for Organic Solvents
In applying biological monitoring to solvent exposure, sampling time is important, as indicated above. Table 1 shows recommended sampling times for common solvents in the monitoring of everyday occupational exposure. When the solvent itself is to be analysed, attention should be paid to preventing possible loss (e.g., evaporation into room air) as well as contamination (e.g., dissolution of solvent from room air into the sample) during sample handling. If samples need to be transported to a distant laboratory or stored before analysis, care should be exercised to prevent loss. Freezing is recommended for metabolites, whereas refrigeration (but not freezing) in an airtight container without an air space (or, preferably, in a headspace vial) is recommended for analysis of the solvent itself. In chemical analysis, quality control is essential for reliable results (for details, see the article “Quality assurance” in this chapter). In reporting the results, ethics should be respected (see the chapter Ethical Issues elsewhere in the Encyclopaedia).
Table 1. Some examples of target chemicals for biological monitoring and sampling time
Solvent | Target chemical | Urine/blood | Sampling time1
Carbon disulphide | 2-Thiothiazolidine-4-carboxylic acid | Urine | Th F
N,N-Dimethylformamide | N-Methylformamide | Urine | M Tu W Th F
2-Ethoxyethanol and its acetate | Ethoxyacetic acid | Urine | Th F (end of last workshift)
Hexane | 2,4-Hexanedione | Urine | M Tu W Th F
Hexane | Hexane | Blood | Confirmation of exposure
Methanol | Methanol | Urine | M Tu W Th F
Styrene | Mandelic acid | Urine | Th F
Styrene | Phenylglyoxylic acid | Urine | Th F
Styrene | Styrene | Blood | Confirmation of exposure
Toluene | Hippuric acid | Urine | Tu W Th F
Toluene | o-Cresol | Urine | Tu W Th F
Toluene | Toluene | Blood | Confirmation of exposure
Toluene | Toluene | Urine | Tu W Th F
Trichloroethylene | Trichloroacetic acid (TCA) | Urine | Th F
Trichloroethylene | Total trichloro-compounds (sum of TCA and free and conjugated trichloroethanol) | Urine | Th F
Trichloroethylene | Trichloroethylene | Blood | Confirmation of exposure
Xylenes2 | Methylhippuric acids | Urine | Tu W Th F
Xylenes2 | Xylenes | Blood | Tu W Th F
1 End of workshift unless otherwise noted: days of week indicate preferred sampling days.
2 Three isomers, either separately or in any combination.
Source: Summarized from WHO 1996.
A number of analytical procedures have been established for many solvents. Methods vary depending on the target chemical, but most recently developed methods use gas chromatography (GC) or high-performance liquid chromatography (HPLC) for separation. Use of an autosampler and a data processor is recommended for good quality control in chemical analysis. When the solvent itself in blood or in urine is to be analysed, application of the headspace technique in GC (headspace GC) is very convenient, especially when the solvent is sufficiently volatile. Table 2 outlines some examples of the methods established for common solvents.
Table 2. Some examples of analytical methods for biological monitoring of exposure to organic solvents
Solvent | Target chemical | Blood/urine | Analytical method
Carbon disulphide | 2-Thiothiazolidine-4-carboxylic acid | Urine | High-performance liquid chromatography with ultraviolet detection (UV-HPLC)
N,N-Dimethylformamide | N-Methylformamide | Urine | Gas chromatography with flame thermionic detection (FTD-GC)
2-Ethoxyethanol and its acetate | Ethoxyacetic acid | Urine | Extraction, derivatization and gas chromatography with flame ionization detection (FID-GC)
Hexane | 2,4-Hexanedione | Urine | Extraction, (hydrolysis) and FID-GC
Hexane | Hexane | Blood | Headspace FID-GC
Methanol | Methanol | Urine | Headspace FID-GC
Styrene | Mandelic acid | Urine | Desalting and UV-HPLC
Styrene | Phenylglyoxylic acid | Urine | Desalting and UV-HPLC
Styrene | Styrene | Blood | Headspace FID-GC
Toluene | Hippuric acid | Urine | Desalting and UV-HPLC
Toluene | o-Cresol | Urine | Hydrolysis, extraction and FID-GC
Toluene | Toluene | Blood | Headspace FID-GC
Toluene | Toluene | Urine | Headspace FID-GC
Trichloroethylene | Trichloroacetic acid (TCA) | Urine | Colorimetry, or esterification and gas chromatography with electron capture detection (ECD-GC)
Trichloroethylene | Total trichloro-compounds (sum of TCA and free and conjugated trichloroethanol) | Urine | Oxidation and colorimetry, or hydrolysis, oxidation, esterification and ECD-GC
Trichloroethylene | Trichloroethylene | Blood | Headspace ECD-GC
Xylenes | Methylhippuric acids (three isomers, either separately or in combination) | Urine | Headspace FID-GC
Source: Summarized from WHO 1996.
Evaluation
A linear relationship of the exposure indicators (listed in table 2) with the intensity of exposure to corresponding solvents may be established either through a survey of workers occupationally exposed to solvents, or by experimental exposure of human volunteers. Accordingly, the ACGIH (1994) and the DFG (1994), for example, have established the biological exposure index (BEI) and the biological tolerance value (BAT), respectively, as the values in the biological samples which are equivalent to the occupational exposure limit for airborne chemicals—that is, threshold limit value (TLV) and maximum workplace concentration (MAK), respectively. It is known, however, that the level of the target chemical in samples obtained from non-exposed people may vary, reflecting, for example, local customs (e.g., food), and that ethnic differences may exist in solvent metabolism. It is therefore desirable to establish limit values through the study of the local population of concern.
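A minimal sketch of how such a biological limit value might be derived from survey data, assuming a simple linear relationship between airborne concentration and a urinary marker: fit a straight line and read off the marker level predicted at the occupational exposure limit. The data points and the limit used here are invented for illustration and are not ACGIH or DFG figures.

```python
# Illustrative only: relating a urinary marker to airborne solvent concentration.
# The survey data and the exposure limit below are hypothetical assumptions.
import numpy as np

air_ppm = np.array([5, 10, 20, 40, 60, 80])               # airborne concentrations (ppm)
urine_marker = np.array([0.4, 0.7, 1.3, 2.6, 3.8, 5.1])   # marker in urine (g/g creatinine)

slope, intercept = np.polyfit(air_ppm, urine_marker, 1)   # least-squares straight line

exposure_limit_ppm = 50                                   # hypothetical occupational limit
biological_limit = slope * exposure_limit_ppm + intercept
print(f"Marker level equivalent to {exposure_limit_ppm} ppm: "
      f"{biological_limit:.2f} g/g creatinine")
```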
In evaluating the results, non-occupational exposure to the solvent (e.g., through use of solvent-containing consumer products or intentional inhalation) and exposure to chemicals that give rise to the same metabolites (e.g., some food additives) should be carefully excluded. If there is a wide gap between the intensity of vapour exposure and the biological monitoring results, the difference may indicate the possibility of skin absorption. Cigarette smoking suppresses the metabolism of some solvents (e.g., toluene), whereas acute ethanol intake may suppress methanol metabolism in a competitive manner.
Work systems encompass macro-level organizational variables such as the personnel subsystem, the technological subsystem and the external environment. The analysis of work systems is therefore essentially an effort to understand the allocation of functions between the worker and the technical equipment and the division of labour between people in a sociotechnical environment. Such an analysis can assist in making informed decisions to enhance systems safety, efficiency in work, technological development and the mental and physical well-being of workers.
Researchers examine work systems according to divergent approaches (mechanistic, biological, perceptual/motor, motivational) with corresponding individual and organizational outcomes (Campion and Thayer 1985). The selection of methods in work systems analysis is dictated by the specific approaches taken and the particular objective in view, the organizational context, the job and human characteristics, and the technological complexity of the system under study (Drury 1987). Checklists and questionnaires are the common means of assembling databases for organizational planners in prioritizing action plans in the areas of personnel selection and placement, performance appraisal, safety and health management, worker-machine design and work design or redesign. Inventory-type checklists, for example the Position Analysis Questionnaire, or PAQ (McCormick 1979), the Job Components Inventory (Banks and Miller 1984), the Job Diagnostic Survey (Hackman and Oldham 1975) and the Multi-method Job Design Questionnaire (Campion 1988), are the more popular instruments and are directed to a variety of objectives.
The PAQ has six major divisions, comprising 189 behavioural items required for the assessment of job performance and seven supplementary items related to monetary compensation:
The Job Components Inventory Mark II contains seven sections. The introductory section deals with the details of the organization, job descriptions and biographical details of the job holder. Other sections are as follows:
The profile methods have common elements: (1) a comprehensive set of job factors used to select the range of work, (2) a rating scale that permits the evaluation of job demands and (3) the weighting of job characteristics based on organizational structure and sociotechnical requirements. Les profils des postes, another task profile instrument developed in the Renault Organization (RNUR 1976), contains a table of entries of variables representing working conditions and provides respondents with a five-point scale, ranging from very satisfactory to very poor, on which to register standardized responses for each variable. The variables cover (1) the design of the workstation, (2) the physical environment, (3) the physical load factors, (4) nervous tension, (5) job autonomy, (6) relations, (7) repetitiveness and (8) contents of work.
The AET (Ergonomic Job Analysis) (Rohmert and Landau 1985) was developed on the basis of the stress-strain concept. Each of the 216 items of the AET is coded: one code defines the stressors, indicating whether a work element does or does not qualify as a stressor; other codes define the degree of stress associated with a job; and yet others describe the duration and frequency of stress during the work shift.
The AET consists of three parts:
Broadly speaking, the checklists adopt one of two approaches: (1) the job-oriented approach (e.g., the AET, Les profils des postes) and (2) the worker-oriented approach (e.g., the PAQ). The task inventories and profiles permit subtle comparison of complex tasks and occupational profiling of jobs, and they determine the aspects of work that are considered a priori to be unavoidable factors in improving working conditions. The emphasis of the PAQ is on classifying job families or clusters (Fleishman and Quaintance 1984; Mossholder and Arvey 1984; Carter and Biersner 1987) and on inferring job component validity and job stress (Jeanneret 1980; Shaw and Riskind 1983). From the medical point of view, both the AET and the profile methods allow comparison of constraints and aptitudes when required (Wagner 1985). The Nordic questionnaire is an illustrative presentation of ergonomic workplace analysis (Ahonen, Launis and Kuorinka 1989), which covers the following aspects:
Among the shortcomings of the general-purpose checklist format employed in ergonomic job analysis are the following:
A systematically constructed checklist obliges us to investigate the factors of work conditions that are visible or easy to modify, and it permits us to engage in a social dialogue among employers, job holders and others concerned. One should exercise a degree of caution towards the illusion of simplicity and efficiency of checklists, as well as towards their quantifying and technical approaches. Versatility in a checklist or questionnaire can be achieved by including specific modules to suit specific objectives. Therefore the choice of variables is very much linked to the purpose for which the work systems are to be analysed, and this determines the general approach for the construction of a user-friendly checklist.
The suggested “Ergonomics Checklist” may be adopted for various applications. Data collection and computerized processing of the checklist data are relatively straightforward, by responding to the primary and secondary statements (q.v.).
ERGONOMICS CHECKLIST
A broad guideline for a modular-structured work systems checklist is suggested here, covering five major aspects (mechanistic, biological, perceptual/motor, technical and psychosocial). Weighting of the modules varies with the nature of the job(s) to be analysed, the specific features of the country or population under study, organizational priorities and the intended use of the results of the analysis. Respondents mark the “primary statement” as Yes/No. “Yes” answers indicate the apparent absence of a problem, although the advisability of further careful scrutiny should not be ruled out. “No” answers indicate a need for an ergonomics evaluation and improvement. Responses to “secondary statements” are indicated by a single digit on the severity of agreement/disagreement scale illustrated below.
0 Do not know or not applicable
1 Strongly disagree
2 Disagree
3 Neither agree nor disagree
4 Agree
5 Strongly agree
A. Organization, worker and the task Your answers/ratings
The checklist designer may provide a sample drawing/photograph of work and
workplace under study.
1. Description of organization and functions.
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
2. Worker characteristics: A brief account of the work group.
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
3. Task description: List activities and materials in use. Give some indication of
the work hazards.
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
B. Mechanistic aspect Your answers/ratings
I. Job Specialization
4. Tasks/work patterns are simple and uncomplicated. Yes/No
If No, rate the following: (Enter 0-5)
4.1 Job assignment is specific to the operative.
4.2 Tools and methods of work are specialized to the purpose of the job.
4.3 Production volume and quality of work.
4.4 Job holder performs multiple tasks.
II. Skill Requirement
5. Job requires simple motor act. Yes/No
If No, rate the following: (Enter 0-5)
5.1 Job requires knowledge and skilful ability.
5.2 Job demands training for skill acquisition.
5.3 Worker makes frequent mistakes at work.
5.4 Job demands frequent rotation, as directed.
5.5 Work operation is machine paced/assisted by automation.
Remarks and suggestions for improvement. Items 4 to 5.5:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Analyst’s rating                 Worker’s rating
C. Biological aspect Your answers/ratings
III. General Physical Activity
6. Physical activity is entirely determined and
regulated by the worker. Yes/No
If No, rate the following: (Enter 0-5)
6.1 Worker maintains target-oriented pace.
6.2 Job implies frequently repeated movements.
6.3 Cardiorespiratory demand of the job:
sedentary/light/moderate/heavy/extremely heavy.
(What are the heavy work elements?):
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
(Enter 0-5)
6.4 Job demands high muscular strength exertion.
6.5 Job (operation of handle, steering wheel, pedal brake) is predominantly static work.
6.6. Job requires fixed working position (sitting or standing).
IV. Manual Materials Handling (MMH)
Nature of objects handled: animate/inanimate, size and shape.
_______________________________________________________________
_______________________________________________________________
7. Job requires minimal MMH activity. Yes/No
If No, specify the work:
7.1 Mode of work: (circle one)
pull/push/turn/lift/lower/carry
(Specify repetition cycle):
_______________________________________________________________
_______________________________________________________________
7.2 Load weight (kg): (circle one)
5-10, 10-20, 20-30, 30-40, >40.
7.3 Subject-load horizontal distance (cm): (circle one)
<25, 25-40, 40-55, 55-70, >70.
7.4 Subject-load height: (circle one)
ground, knee, waist, chest, shoulder level.
(Enter 0-5)
7.5 Clothing restricts MMH tasks.
8. Task situation is free from risk of bodily injury. Yes/No
If No, rate the following: (Enter 0-5)
8.1 Task can be modified to reduce the load to be handled.
8.2 Materials can be packed in standard sizes.
8.3 Size/position of handles on objects may be improved.
8.4 Workers do not adopt safer methods of load handling.
8.5 Mechanical aids may reduce bodily strains.
List each item if hoists or other handling aids are available.
Suggestions for improvement, Items 6 to 8.5:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
V. Workplace/Workspace Design
Workplace may be diagrammatically illustrated, showing human reach and
clearance:
9. Workplace is compatible with human dimensions. Yes/No
If No, rate the following: (Enter 0-5)
9.1 Work distance is away from normal reach in the horizontal or vertical plane (>60 cm).
9.2 Height of work desk/equipment is fixed or minimally adjustable.
9.3 No space for subsidiary operations (e.g., inspection and maintenance).
9.4 Workstations have obstacles, protruding parts or sharp edges.
9.5 Work surface floors are slippery, uneven, cluttered or unstable.
10. Seating arrangement is adequate (e.g., comfortable chair,
good postural support). Yes/No
If No, the causes are: (Enter 0-5)
10.1 Seat dimensions (e.g., seat height, back rest) mismatch with human dimensions.
10.2 Minimum adjustability of seat.
10.3 Workseat provides no hold/support (e.g., by means of vertical edges/extra stiff covering) to work with the machinery.
10.4 Absence of vibration damping mechanism in the workseat.
11. Sufficient auxiliary support is available for safety
at the workplace. Yes/No
If No, mention the following: (Enter 0-5)
11.1 Non-availability of storage space for tools, personal articles.
11.2 Doorways, entrance/exit routes, or corridors are restricted.
11.3 Design mismatch of handles, ladders, staircases, handrails.
11.4 Handholds and footholds demand awkward position of limbs.
11.5 Supports are unrecognizable by their place, form or construction.
11.6 Limited use of gloves/footwear to work and operate equipment controls.
Suggestions for improvement, Items 9 to 11.6:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
VI. Work Posture
12. Job allows a relaxed work posture. Yes/No
If No, rate the following: (Enter 0-5)
12.1 Working with arms above shoulder and/or away from the body.
12.2 Hyperextension of wrist and demand of high strength.
12.3 Neck/shoulder are not maintained at an angle of about 15°.
12.4 Back bent and twisted.
12.5 Hips and legs are not well supported in seated position.
12.6 One-sided and unsymmetrical movement of the body.
12.7 Mention reasons of forced posture:
(1) machine location
(2) seat design,
(3) equipment handling,
(4) workplace/workspace
12.8 Specify OWAS code. (For a detailed description of the OWAS
method refer to Karhu et al. 1981.)
_______________________________________________________________
_______________________________________________________________
Suggestions for improvement, Items 12 to 12.7:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
VII. Work Environment
(Give measurements where possible)
NOISE
[Identify noise sources, type and duration of exposure; refer to ILO 1984 code].
13. Noise level is below the maximum Yes/No
sound level recommended. (Use the following table.)
Rating | Work requiring no verbal communication | Work requiring verbal communication | Work requiring concentration
1 | under 60 dBA | under 50 dBA | under 45 dBA
2 | 60-70 dBA | 50-60 dBA | 45-55 dBA
3 | 70-80 dBA | 60-70 dBA | 55-65 dBA
4 | 80-90 dBA | 70-80 dBA | 65-75 dBA
5 | over 90 dBA | over 80 dBA | over 75 dBA
Source: Ahonen et al. 1989.
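The table above can be applied mechanically: given the measured sound level and the type of work, the rating is the row whose band contains the measurement. A small sketch of that lookup follows; the band boundaries are those of the table, while the assignment of exact boundary values to the lower rating is an assumption of this sketch.

```python
# Illustrative only: assigning the 1-5 noise rating from the table above.
# Boundary values (e.g., exactly 70 dBA) are assigned to the lower rating; that is an assumption.

UPPER_LIMITS_DBA = {
    "no_verbal_communication": [60, 70, 80, 90],   # ratings 1-4; above the last value -> 5
    "verbal_communication":    [50, 60, 70, 80],
    "concentration":           [45, 55, 65, 75],
}

def noise_rating(level_dba: float, work_type: str) -> int:
    """Return the 1 (quietest band) to 5 (noisiest band) rating for a measured sound level."""
    for rating, upper in enumerate(UPPER_LIMITS_DBA[work_type], start=1):
        if level_dba <= upper:
            return rating
    return 5

print(noise_rating(62, "verbal_communication"))      # 3 (60-70 dBA band)
print(noise_rating(92, "no_verbal_communication"))   # 5 (over 90 dBA)
```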
Give your agreement/disagreement score (0-5)
14. Damaging noises are suppressed at the source. Yes/No
If No, rate countermeasures: (Enter 0-5)
14.1 No effective sound isolation present.
14.2 Noise emergency measures are not taken (e.g., restriction of working time, use of personal ear defenders/protectors).
15. CLIMATE
Specify climatic condition.
Temperature ____
Humidity ____
Radiant Temperature ____
Draughts ____
16. Climate is comfortable. Yes/No
If No, rate the following: (Enter 0-5)
16.1 Temperature sensation (circle one):
cool/slightly cool/neutral/warm/very hot
16.2 Ventilation devices (e.g., fans, windows, air conditioners) are not adequate.
16.3 Non-execution of regulatory measures on exposure limits (if available, please elaborate).
16.4 Workers do not wear heat protective/assistive clothing.
16.5 Drinking fountains of cool water are not available nearby.
17. LIGHTING
Workplace/machine(s) are sufficiently illuminated at all times. Yes/No
If No, rate the following: (Enter 0-5)
17.1 Illumination is sufficiently intense.
17.2 Illumination of work area is adequately uniform.
17.3 Flicker phenomena are minimal or absent.
17.4 Shadow formation is nonproblematical.
17.5 Annoying reflected glares are minimal or absent.
17.6 Colour dynamics (visual accentuation, colour warmth) are adequate.
18. DUST, SMOKE, TOXICANTS
Environment is free from excessive dust,
fumes and toxic substances. Yes/No
If No, rate the following: (Enter 0-5)
18.1 Ineffective ventilation and exhaust systems to carry off fumes, smoke and dirt.
18.2 Lack of protection measures against emergency release and contact with dangerous/toxic substances.
List the chemical toxicants:
_______________________________________________________________
_______________________________________________________________
18.3 Monitoring of the workplace for chemical toxicants is not regular.
18.4 Non-availability of personal protective measures (e.g., gloves, shoes, mask, apron).
19. RADIATION
Workers are effectively protected against radiation exposure. Yes/No
If No, mention the exposures
(see ISSA checklist, Ergonomics): (Enter 0-5)
19.1 UV radiation (200 nm – 400 nm).
19.2 IR radiation (780 nm – 100 μm).
19.3 Radioactivity/x-ray radiation (<200 nm).
19.4 Microwaves (1 mm – 1 m).
19.5 Lasers (300 nm – 1.4 μm).
19.6 Others (mention):
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
20. VIBRATION
Machine can be operated without vibration transmission
to the operator’s body. Yes/No
If No, rate the following: (Enter 0-5)
20.1 Vibration is transmitted to the whole body via the feet.
20.2 Vibration transmission occurs through the seat (e.g., mobile machines that are driven with operator seated).
20.3 Vibration is transmitted through the hand-arm system (e.g., power-driven handtools, machines driven when operator is walking).
20.4 Prolonged exposure to continuous/repetitive source of vibration.
20.5 Vibration sources cannot be isolated or eliminated.
20.6 Identify the sources of vibration.
Comments and suggestions, items 13 to 20:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
VIII. Work Time Schedule
Indicate work time: work hours/day/week/year, including seasonal work and shift system.
21. Pressure of work time is minimum. Yes/No
If No, rate the following: (Enter 0-5)
21.1 Job requires night work.
21.2 Job involves overtime/extra work time.
Specify average duration:
_______________________________________________________________
21.3 Heavy tasks are unevenly distributed throughout the shift.
21.4 People work at a predetermined pace/time limit.
21.5 Fatigue allowances/work-rest patterns are not sufficiently incorporated (use cardiorespiratory criteria on work severity).
Comments and suggestions, items 21 to 21.5:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Analyst’s rating Worker’s rating
D. Perceptual/motor aspect Your answers/ratings
IX. Displays
22. Visual displays (gauges, meters, warning signals)
are easy to read. Yes/No
If No, rate the difficulties: (Enter 0-5)
22.1 Insufficient lighting (refer to item No. 17).
22.2 Awkward head/eye positioning for visual line.
22.3 Display style of numerals/numerical progression creates confusion and causes reading errors.
22.4 Digital displays are not available for accurate reading.
22.5 Large visual distance for reading precision.
22.6 Displayed information is not easily understood.
23. Emergency signals/impulses are easily recognizable. Yes/No
If No, assess the reasons:
23.1 Signals (visual/auditory) do not conform to the work process.
23.2 Flashing signals are out of visual field.
23.3 Auditory display signals are not audible.
24. Groupings of the display features are logical. Yes/No
If No, rate the following:
24.1 Displays are not distinguished by form, position, colour or tone.
24.2 Frequently used and critical displays are removed from the central line of vision.
X. Controls
25. Controls (e.g., switches, knobs, cranes, driving wheels, pedals) are easy to handle. Yes/No
If No, the causes are: (Enter 0-5)
25.1 Hand/foot control positions are awkward.
25.2 Handedness of the controls/tools is incorrect.
25.3 Dimensions of controls do not match the operating body part.
25.4 Controls require high actuating force.
25.5 Controls require high precision and speed.
25.6 Controls are not shape-coded for good grip.
25.7 Controls are not colour/symbol-coded for identification.
25.8 Controls cause unpleasant feeling (warmth, cold, vibration).
26. Displays and controls (combined) are compatible with easy and comfortable human reactions. Yes/No
If No, rate the following: (Enter 0-5)
26.1 Placements are not sufficiently close to each other.
26.2 Display/controls are not sequentially arranged for functions/frequency of use.
26.3 Display/control operations are successive, without enough time span to complete operation (this creates sensory overloading).
26.4 Disharmony in movement direction of display/control (e.g., leftward control movement does not give leftward unit movement).
Comments and suggestions, items 22 to 26.4:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Analyst’s rating Worker’s rating
E. Technical aspect Your answers/ratings
XI. Machinery
27. Machine (e.g., conveyer trolley, lifting truck, machine tool)
is easy to drive and work with. Yes/No
If No, rate the following: (Enter 0-5)
27.1 Machine is unstable in operation.
27.2 Poor maintenance of the machinery.
27.3 Driving speed of the machine cannot be regulated.
27.4 Steering wheels/handles are operated from a standing position.
27.5 Operating mechanisms hamper body movements in the workspace.
27.6 Risk of injury due to lack of machine guarding.
27.7 Machinery is not equipped with warning signals.
27.8 Machine is poorly equipped for vibration damping.
27.9 Machine noise levels are above legal limits (refer to items No. 13 and 14).
27.10 Poor visibility of machine parts and adjacent area (refer to items No. 17 and 22).
XII. Small Tools/Implements
28. Tools/implements provided to the operatives are
comfortable to work with. Yes/No
If No, rate the following: (Enter 0-5)
28.1 Tool/implement has no carrying strap/back frame.
28.2 Tool cannot be used with alternate hands.
28.3 Heavy weight of the tool causes hyperextension of the wrist.
28.4 Form and position of the handle are not designed for convenient grip.
28.5 Power-driven tool is not designed for two-hand operation.
28.6 Sharp edges/ridges of the tool/equipment can cause injury.
28.7 Harnesses (gloves, etc.) are not regularly used in operating vibrating tool.
28.8 Noise levels of power-driven tool are above acceptable limits
(refer to item No. 13).
Suggestions for improvement, items 27 to 28.8:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
XIII. Work Safety
29. Machine safety measures are adequate to prevent
accidents and health hazards. Yes/No
If No, rate the following: (Enter 0-5)
29.1 Machine accessories cannot be fastened and removed easily.
29.2 Dangerous points, moving parts and electrical installations are not adequately guarded.
29.3 Direct/indirect contact of body parts with machinery can cause hazards.
29.4 Difficulty in inspection and maintenance of the machine.
29.5 No clear instructions available for machine operation, maintenance and safety.
Suggestions for improvement, items 29 to 29.5:
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Analyst’s rating Worker’s rating
F. Psychosocial aspect Your answers/ratings
XIV. Job Autonomy
30. Job allows autonomy (e.g., freedom regarding method of work,
performance conditions, time schedule, quality control). Yes/No
If No, the possible causes are: (Enter 0-5)
30.1 No discretion on the starting/finishing times of the job.
30.2 No organizational support as regards calling for assistance at work.
30.3 Insufficient number of people for the task (teamwork).
30.4 Rigidity in work methods and conditions.
XV. Job Feedback (Intrinsic and Extrinsic)
31. Job allows direct feedback of information as to the quality
and quantity of one’s performance. Yes/No
If No, the reasons are: (Enter 0-5)
31.1 No participative role in task information and decision making.
31.2 Constraints of social contact due to physical barriers.
31.3 Communication difficulty due to high noise level.
31.4 Increased attentional demand in machine pacing.
31.5 Other people (managers, co-workers) inform the worker as to his/her effectiveness of job performance.
XVI. Task Variety/Clarity
32. Job has a variety of tasks and calls for spontaneity on the part of the worker. Yes/No
If No, rate the following: (Enter 0-5)
32.1 Job roles and goals are ambiguous.
32.2 Job restrictiveness is imposed by a machine, process or work group.
32.3 Worker-machine relation arouses conflict as to behaviour to be evinced by operator.
32.4 Restricted level of stimulation (e.g., unchanging visual and auditory environment).
32.5 High level of boredom on the job.
32.6 Limited scope for job enlargement.
XVII. Task Identity/Significance
33. Worker is given a batch of tasks and arranges his or her own schedule to complete the work (e.g., one plans and executes the job and inspects and manages the products). Yes/No
Give your agreement/disagreement score (0-5)
34. Job is significant in the organization. Yes/No
It provides acknowledgement and recognition from others.
(Give your agreement/disagreement score)
XVIII. Mental Overload/Underload
35. Job consists of tasks for which clear communication and
unambiguous information support systems are available. Yes/No
If No, rate the following: (Enter 0-5)
35.1 Information supplied in connection with the job is extensive.
35.2 Information handling under pressure is required (e.g., emergency manoeuvering in process control).
35.3 High information-handling workload (e.g., difficult positioning task—no special motivation required).
35.4 Occasional attention is directed to information other than that needed for the actual task.
35.5 Task consists of repetitive simple motor act, with superficial attention needed.
35.6 Tools/equipment are not pre-positioned to avoid mental delay.
35.7 Multiple choices are required in decision making and judging risks.
(Comments and suggestions, items 30 to 35.7)
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
XIX. Training and Promotion
36. Job has opportunities for associated growth in competence
and task accomplishment. Yes/No
If No, the possible causes are: (Enter 0-5)
36.1 No opportunity for advancement to higher levels.
36.2 No periodic training for operators, specific to jobs.
36.3 Training programs/tools are not easy to learn and use.
36.4 No incentive pay schemes.
XX. Organizational Commitment
37. Defined commitment towards organizational effectiveness, and physical, mental and social well-being. Yes/No
Assess the degree to which the following are made available: (Enter 0-5)
37.1 Organizational role in individual role conflicts and ambiguities.
37.2 Medical/administrative services for preventive intervention in the case of work hazards.
37.3 Promotional measures to control absenteeism in work group.
37.4 Effective safety regulations.
37.5 Labour inspection and monitoring of better work practices.
37.6 Follow-up action for accident/injury management.
The Summary Evaluation Sheet may be used for profiling and clustering a selected group of items, which may form the basis for decisions on work systems. The process of analysis is often time-consuming, and the users of these instruments must have sound theoretical and practical training in ergonomics and in the evaluation of work systems.
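As a sketch of how the item ratings collected above might be condensed for the Summary Evaluation Sheet that follows, the snippet below averages the 0-5 secondary-statement ratings within each module, ignoring answers coded 0 ("do not know or not applicable"). The data structure and the treatment of zeros are assumptions of this illustration, not part of the checklist itself.

```python
# Illustrative only: condensing 0-5 secondary-statement ratings into module scores
# for a summary evaluation. Ratings of 0 ("do not know / not applicable") are excluded;
# that convention is an assumption of this sketch.

ratings_by_module = {            # hypothetical responses from one analysed job
    "B. Mechanistic":      [4, 3, 0, 5, 2],
    "C. Biological":       [5, 4, 4, 3, 0, 2, 5],
    "D. Perceptual/motor": [2, 3, 1],
}

def module_score(ratings):
    """Mean of the applicable (non-zero) 0-5 ratings; None if nothing applicable."""
    applicable = [r for r in ratings if r > 0]
    return sum(applicable) / len(applicable) if applicable else None

for module, ratings in ratings_by_module.items():
    score = module_score(ratings)
    print(f"{module}: {score:.1f}" if score is not None else f"{module}: not applicable")
```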
SUMMARY EVALUATION SHEET
A. Brief Description of Organization, Worker Characteristics and Task Description
...........................................................................................................................................................................................................................
...........................................................................................................................................................................................................................
Modules | Sections | No. of items | Severity agreement (0-5) | Relative severity | Item No(s).
B. Mechanistic | I. Job Specialization | 4 | | |
B. Mechanistic | II. Skill Requirement | 5 | | |
C. Biological | III. General Physical Activity | 5 | | |
C. Biological | IV. Manual Materials Handling | 6 | | |
C. Biological | V. Workplace/Workspace Design | 15 | | |
C. Biological | VI. Work Posture | 6 | | |
C. Biological | VII. Work Environment | 28 | | |
C. Biological | VIII. Work Time Schedule | 5 | | |
D. Perceptual/motor | IX. Displays | 12 | | |
D. Perceptual/motor | X. Controls | 10 | | |
E. Technical | XI. Machinery | 10 | | |
E. Technical | XII. Small Tools/Implements | 8 | | |
E. Technical | XIII. Work Safety | 5 | | |
F. Psychosocial | XIV. Job Autonomy | 5 | | |
F. Psychosocial | XV. Job Feedback | 5 | | |
F. Psychosocial | XVI. Task Variety/Clarity | 6 | | |
F. Psychosocial | XVII. Task Identity/Significance | 2 | | |
F. Psychosocial | XVIII. Mental Overload/Underload | 7 | | |
F. Psychosocial | XIX. Training and Promotion | 4 | | |
F. Psychosocial | XX. Organizational Commitment | 6 | | |
Overall Assessment
Severity agreement of the modules | Remarks
A |
B |
C |
D |
E |
F |
Work Analyst: ____________________
There are often large differences among humans in the intensity of their response to toxic chemicals, and variations in the susceptibility of an individual over a lifetime. These can be attributed to a variety of factors capable of influencing the absorption rate, distribution in the body, biotransformation and/or excretion rate of a particular chemical. Apart from the known hereditary factors which have been clearly demonstrated to be linked with increased susceptibility to chemical toxicity in humans (see “Genetic determinants of toxic response”), other factors include: constitutional characteristics related to age and sex; pre-existing disease states or a reduction in organ function (non-hereditary, i.e., acquired); dietary habits, smoking, alcohol consumption and use of medications; concomitant exposure to biotoxins (various micro-organisms) and to physical factors (radiation, humidity, extremely low or high temperatures, or barometric pressures particularly relevant to the partial pressure of a gas), as well as concomitant physical exercise or psychological stress; and previous occupational and/or environmental exposure to a particular chemical, in particular concomitant exposure to other chemicals, not necessarily toxic ones (e.g., essential metals). The possible contributions of these factors to either increasing or decreasing susceptibility to adverse health effects, as well as the mechanisms of their action, are specific to a particular chemical. Therefore only the most common factors, basic mechanisms and a few characteristic examples are presented here, whereas specific information concerning each particular chemical can be found elsewhere in this Encyclopaedia.
According to the stage at which these factors act (absorption, distribution, biotransformation or excretion of a particular chemical), the mechanisms can be roughly categorized according to two basic consequences of interaction: (1) a change in the quantity of the chemical in a target organ, that is, at the site(s) of its effect in the organism (toxicokinetic interactions), or (2) a change in the intensity of a specific response to the quantity of the chemical in a target organ (toxicodynamic interactions). The most common mechanisms of either type of interaction are related to competition with other chemical(s) for binding to the same compounds involved in their transport in the organism (e.g., specific serum proteins) and/or for the same biotransformation pathway (e.g., specific enzymes) resulting in a change in the speed or sequence between initial reaction and final adverse health effect. However, both toxicokinetic and toxicodynamic interactions may influence individual susceptibility to a particular chemical. The influence of several concomitant factors can result in either: (a) additive effects—the intensity of the combined effect is equal to the sum of the effects produced by each factor separately, (b) synergistic effects—the intensity of the combined effect is greater than the sum of the effects produced by each factor separately, or (c) antagonistic effects—the intensity of the combined effect is smaller than the sum of the effects produced by each factor separately.
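For two factors A and B acting together, with E denoting the intensity of the effect produced, the three categories above can be written compactly (a notational summary of the definitions just given, not an additional result):

$$E_{A+B} = E_A + E_B \ \text{(additive)}, \qquad E_{A+B} > E_A + E_B \ \text{(synergistic)}, \qquad E_{A+B} < E_A + E_B \ \text{(antagonistic)}$$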
The quantity of a particular toxic chemical or characteristic metabolite at the site(s) of its effect in the human body can be assessed, more or less accurately, by biological monitoring, that is, by choosing the correct biological specimen and the optimal timing of specimen sampling, taking into account the biological half-lives of the particular chemical in both the critical organ and the measured biological compartment. However, reliable information concerning other possible factors that might influence individual susceptibility in humans is generally lacking, and consequently the majority of knowledge regarding the influence of various factors is based on experimental animal data.
It should be stressed that in some cases relatively large differences exist between humans and other mammals in the intensity of response to an equivalent level and/or duration of exposure to many toxic chemicals; for example, humans appear to be considerably more sensitive to the adverse health effects of several toxic metals than are rats (commonly used in experimental animal studies). Some of these differences can be attributed to the fact that the transportation, distribution and biotransformation pathways of various chemicals are greatly dependent on subtle changes in the tissue pH and the redox equilibrium in the organism (as are the activities of various enzymes), and that the redox system of the human differs considerably from that of the rat.
This is obviously the case regarding important antioxidants such as vitamin C and glutathione (GSH), which are essential for maintaining redox equilibrium and which have a protective role against the adverse effects of the oxygen- or xenobiotic-derived free radicals that are involved in a variety of pathological conditions (Kehrer 1993). Unlike the rat, humans cannot synthesize vitamin C, and both the levels and the turnover rate of erythrocyte GSH in humans are considerably lower than in the rat. Humans also lack some of the protective antioxidant enzymes found in the rat or other mammals (e.g., GSH-peroxidase is considered to be poorly active in human sperm). These examples illustrate the potentially greater vulnerability to oxidative stress in humans (particularly in sensitive cells, e.g., the apparently greater vulnerability of human sperm to toxic influences compared with that of the rat), which can result in a different response or greater susceptibility to the influence of various factors in humans compared with other mammals (Telišman 1995).
Influence of Age
Compared to adults, very young children are often more susceptible to chemical toxicity because of their relatively greater inhalation volumes and gastrointestinal absorption rate due to greater permeability of the intestinal epithelium, and because of immature detoxification enzyme systems and a relatively smaller excretion rate of toxic chemicals. The central nervous system appears to be particularly susceptible at the early stage of development with regard to neurotoxicity of various chemicals, for example, lead and methylmercury. On the other hand, the elderly may be susceptible because of chemical exposure history and increased body stores of some xenobiotics, or pre-existing compromised function of target organs and/or relevant enzymes resulting in lowered detoxification and excretion rate. Each of these factors can contribute to weakening of the body’s defences—a decrease in reserve capacity, causing increased susceptibility to subsequent exposure to other hazards. For example, the cytochrome P450 enzymes (involved in the biotransformation pathways of almost all toxic chemicals) can be either induced or have lowered activity because of the influence of various factors over a lifetime (including dietary habits, smoking, alcohol, use of medications and exposure to environmental xenobiotics).
Influence of Sex
Gender-related differences in susceptibility have been described for a large number of toxic chemicals (approximately 200), and such differences are found in many mammalian species. It appears that males are generally more susceptible to renal toxins and females to liver toxins. The causes of the different response between males and females have been related to differences in a variety of physiological processes (e.g., females are capable of additional excretion of some toxic chemicals through menstrual blood loss, breast milk and/or transfer to the foetus, but they experience additional stress during pregnancy, delivery and lactation), enzyme activities, genetic repair mechanisms, hormonal factors, or the presence of relatively larger fat depots in females, resulting in greater accumulation of some lipophilic toxic chemicals, such as organic solvents and some medications.
Influence of Dietary Habits
Dietary habits have an important influence on susceptibility to chemical toxicity, mostly because adequate nutrition is essential for the functioning of the body’s chemical defence system in maintaining good health. Adequate intake of essential metals (including metalloids) and proteins, especially the sulphur-containing amino acids, is necessary for the biosynthesis of various detoxifying enzymes and the provision of glycine and glutathione for conjugation reactions with endogenous and exogenous compounds. Lipids, especially phospholipids, and lipotropes (methyl group donors) are necessary for the synthesis of biological membranes. Carbohydrates provide the energy required for various detoxification processes and provide glucuronic acid for conjugation of toxic chemicals and their metabolites. Selenium (an essential metalloid), glutathione, and vitamins such as vitamin C (water soluble), vitamin E and vitamin A (lipid soluble) have an important role as antioxidants (e.g., in controlling lipid peroxidation and maintaining the integrity of cellular membranes) and as free-radical scavengers for protection against toxic chemicals. In addition, various dietary constituents (protein and fibre content, minerals, phosphates, citric acid, etc.) as well as the amount of food consumed can greatly influence the gastrointestinal absorption rate of many toxic chemicals (e.g., the average absorption rate of soluble lead salts taken with meals is approximately 8%, as opposed to approximately 60% in fasting subjects). However, diet itself can be an additional source of individual exposure to various toxic chemicals (e.g., considerably increased daily intakes and accumulation of arsenic, mercury, cadmium and/or lead in subjects who consume contaminated seafood).
Influence of Smoking
The habit of smoking can influence individual susceptibility to many toxic chemicals because of the variety of possible interactions involving the great number of compounds present in cigarette smoke (especially polycyclic aromatic hydrocarbons, carbon monoxide, benzene, nicotine, acrolein, some pesticides, cadmium, and, to a lesser extent, lead and other toxic metals, etc.), some of which are capable of accumulating in the human body over a lifetime, including pre-natal life (e.g., lead and cadmium). The interactions occur mainly because various toxic chemicals compete for the same binding site(s) for transport and distribution in the organism and/or for the same biotransformation pathway involving particular enzymes. For example, several cigarette smoke constituents can induce cytochrome P450 enzymes, whereas others can depress their activity, and thus influence the common biotransformation pathways of many other toxic chemicals, such as organic solvents and some medications. Heavy cigarette smoking over a long period can considerably reduce the body’s defence mechanisms by decreasing reserve capacity to cope with the adverse influence of other life-style factors.
Influence of Alcohol
Consumption of alcohol (ethanol) can influence susceptibility to many toxic chemicals in several ways. It can influence the absorption rate and distribution of certain chemicals in the body: for example, it can increase the gastrointestinal absorption rate of lead, or decrease the pulmonary absorption rate of mercury vapour by inhibiting the oxidation that is necessary for retention of inhaled mercury vapour. Ethanol can also influence susceptibility to various chemicals through short-term changes in tissue pH and an increase in the redox potential resulting from ethanol metabolism, as both the oxidation of ethanol to acetaldehyde and the oxidation of acetaldehyde to acetate produce an equivalent of reduced nicotinamide adenine dinucleotide (NADH) and hydrogen (H+). Because the affinity of both essential and toxic metals and metalloids for binding to various compounds and tissues is influenced by pH and by changes in the redox potential (Telišman 1995), even a moderate intake of ethanol may result in a series of consequences such as: (1) redistribution of long-term accumulated lead in the human organism in favour of a biologically active lead fraction; (2) replacement of essential zinc by lead in zinc-containing enzyme(s), thus affecting enzyme activity, or the influence of mobilized lead on the distribution of other essential metals and metalloids in the organism, such as calcium, iron, copper and selenium; and (3) increased urinary excretion of zinc, and so on. The effect of these possible events can be augmented by the fact that alcoholic beverages can contain an appreciable amount of lead from vessels or processing (Prpic-Majic et al. 1984; Telišman et al. 1984; 1993).
Another common reason for ethanol-related changes in susceptibility is that many toxic chemicals, for example, various organic solvents, share the same biotransformation pathway involving the cytochrome P450 enzymes. Depending on the intensity of exposure to organic solvents as well as the quantity and frequency of ethanol ingestion (i.e., acute or chronic alcohol consumption), ethanol can either decrease or increase biotransformation rates of various organic solvents and thus influence their toxicity (Sato 1991).
Influence of Medications
The common use of various medications can influence susceptibility to toxic chemicals, mainly because many drugs bind to serum proteins and thus influence the transport, distribution or excretion rate of various toxic chemicals, or because many drugs are capable of inducing relevant detoxifying enzymes or depressing their activity (e.g., the cytochrome P450 enzymes), thus affecting the toxicity of chemicals that share the same biotransformation pathway. Characteristic examples of either mechanism are the increased urinary excretion of trichloroacetic acid (the metabolite of several chlorinated hydrocarbons) when salicylate, sulphonamide or phenylbutazone is used, and the increased hepato-nephrotoxicity of carbon tetrachloride when phenobarbital is used. In addition, some medications contain a considerable amount of a potentially toxic chemical, for example, the aluminium-containing antacids or preparations used for therapeutic management of the hyperphosphataemia arising in chronic renal failure.
Influence of Concomitant Exposure to Other Chemicals
The changes in susceptibility to adverse health effects due to interaction of various chemicals (i.e., possible additive, synergistic or antagonistic effects) have been studied almost exclusively in experimental animals, mostly in the rat. Relevant epidemiological and clinical studies are lacking. This is of concern particularly considering the relatively greater intensity of response or the variety of adverse health effects of several toxic chemicals in humans compared to the rat and other mammals. Apart from published data in the field of pharmacology, most data are related only to combinations of two different chemicals within specific groups, such as various pesticides, organic solvents, or essential and/or toxic metals and metalloids.
Combined exposure to various organic solvents can result in various additive, synergistic or antagonistic effects (depending on the combination of certain organic solvents, their intensity and duration of exposure), mainly due to the capability of influencing each other’s biotransformation (Sato 1991).
Another characteristic example is the interaction of essential and/or toxic metals and metalloids with one another, as these are involved in the possible influence of age (e.g., a lifetime body accumulation of environmental lead and cadmium), sex (e.g., the common iron deficiency in women), dietary habits (e.g., increased dietary intake of toxic metals and metalloids and/or deficient dietary intake of essential metals and metalloids), smoking habits and alcohol consumption (e.g., additional exposure to cadmium, lead and other toxic metals), and the use of medications (e.g., a single dose of antacid can result in a 50-fold increase in the average daily intake of aluminium through food). The possibility of various additive, synergistic or antagonistic effects of exposure to various metals and metalloids in humans can be illustrated by basic examples related to the main toxic elements (see table 1), apart from which further interactions may occur because essential elements can also influence one another (e.g., the well-known antagonistic effect of copper on the gastrointestinal absorption rate as well as on the metabolism of zinc, and vice versa). The main cause of all these interactions is the competition of various metals and metalloids for the same binding site (especially the sulphydryl group, -SH) in various enzymes, metalloproteins (especially metallothionein) and tissues (e.g., cell membranes and organ barriers). These interactions may play a relevant role in the development of several chronic diseases that are mediated through the action of free radicals and oxidative stress (Telišman 1995).
Table 1. Basic effects of possible multiple interactions concerning the main toxic and/or essential metals and metalloids in mammals
Toxic metal or metalloid | Basic effects of the interaction with other metal or metalloid |
Aluminium (Al) | Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Al. Impairs phosphate metabolism. Data on interactions with Fe, Zn and Cu are equivocal (i.e., the possible role of another metal as a mediator). |
Arsenic (As) | Affects the distribution of Cu (an increase of Cu in the kidney, and a decrease of Cu in the liver, serum and urine). Impairs the metabolism of Fe (an increase of Fe in the liver with concomitant decrease in haematocrit). Zn decreases the absorption rate of inorganic As and decreases the toxicity of As. Se decreases the toxicity of As and vice versa. |
Cadmium (Cd) | Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Cd. Impairs the phosphate metabolism, i.e., increases urinary excretion of phosphates. Impairs the metabolism of Fe; deficient dietary Fe increases the absorption rate of Cd. Affects the distribution of Zn; Zn decreases the toxicity of Cd, whereas its influence on the absorption rate of Cd is equivocal. Se decreases the toxicity of Cd. Mn decreases the toxicity of Cd at low-level exposure to Cd. Data on the interaction with Cu are equivocal (i.e., the possible role of Zn, or another metal, as a mediator). High dietary levels of Pb, Ni, Sr, Mg or Cr(III) can decrease the absorption rate of Cd. |
Mercury (Hg) | Affects the distribution of Cu (an increase of Cu in the liver). Zn decreases the absorption rate of inorganic Hg and decreases the toxicity of Hg. Se decreases the toxicity of Hg. Cd increases the concentration of Hg in the kidney, but at the same time decreases the toxicity of Hg in the kidney (the influence of the Cd-induced metallothionein synthesis). |
Lead (Pb) | Impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Impairs the metabolism of Fe; deficient dietary Fe increases the toxicity of Pb, whereas its influence on the absorption rate of Pb is equivocal. Impairs the metabolism of Zn and increases urinary excretion of Zn; deficient dietary Zn increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Se decreases the toxicity of Pb. Data on interactions with Cu and Mg are equivocal (i.e., the possible role of Zn, or another metal, as a mediator). |
Note: Data are mostly related to experimental studies in the rat, whereas relevant clinical and epidemiological data (particularly regarding quantitative dose-response relationships) are generally lacking (Elsenhans et al. 1991; Fergusson 1990; Telišman et al. 1993).
Human biological monitoring uses samples of body fluids or other easily obtainable biological material for the measurement of exposure to specific or nonspecific substances and/or their metabolites or for the measurement of the biological effects of this exposure. Biological monitoring allows one to estimate total individual exposure through different exposure pathways (lungs, skin, gastrointestinal tract) and different sources of exposure (air, diet, lifestyle or occupation). It is also known that in complex exposure situations, which are very often encountered in workplaces, different exposing agents may interact with one another, either enhancing or inhibiting the effects of the individual compounds. And since individuals differ in their genetic constitution, they exhibit variability in their response to chemical exposures. Thus, it may be more reasonable to look for early effects directly in the exposed individuals or groups than to try to predict potential hazards of the complex exposure patterns from data pertaining to single compounds. This is an advantage of genetic biomonitoring for early effects, an approach employing techniques that focus on cytogenetic damage, point mutations, or DNA adducts in surrogate human tissue (see the article “General principles” in this chapter).
What Is Genotoxicity?
Genotoxicity of chemical agents is an intrinsic chemical character, based on the agent’s electrophilic potential to bind with such nucleophilic sites in the cellular macromolecules as deoxyribonucleic acid, DNA, the carrier of hereditary information. Genotoxicity is thus toxicity manifested in the genetic material of cells.
The definition of genotoxicity, as discussed in a consensus report (IARC 1992), is broad, and includes both direct and indirect effects on DNA: (1) the induction of mutations (gene, chromosomal, genomial, recombinational) that at the molecular level are similar to events known to be involved in carcinogenesis, (2) indirect surrogate events associated with mutagenesis (e.g., unscheduled DNA synthesis (UDS) and sister chromatid exchange (SCE)), or (3) DNA damage (e.g., the formation of adducts), which may eventually lead to mutations.
Genotoxicity, Mutagenicity and Carcinogenicity
Mutations are permanent hereditary changes in cell lines, either horizontally in the somatic cells or vertically in the germinal (sex) cells of the body. That is, mutations may affect the organism itself through changes in body cells, or they may be passed on to other generations through alteration of the sex cells. Genotoxicity thus precedes mutagenicity, although most genotoxic damage is repaired and is never expressed as mutations. Somatic mutations are induced at the cellular level and, in the event that they lead to cell death or malignancies, may become manifest as various disorders of tissues or of the organism itself. Somatic mutations are thought to be related to ageing effects or to the induction of atherosclerotic plaques (see figure 1 and the chapter on Cancer).
Figure 1. Schematic view of the scientific paradigm in genetic toxicology and human health effects
Mutations in the germ cell line may be transferred to the zygote—the fertilized egg cell—and be expressed in the offspring generation (see also the chapter Reproductive System). The most important mutational disorders found in the newborn are induced by malsegregation of chromosomes during gametogenesis (the development of germ cells) and result in severe chromosomal syndromes (e.g., trisomy 21 or Down’s syndrome, and monosomy X or Turner’s syndrome).
The paradigm of genotoxicology from exposure to anticipated effects may be simplified as shown in figure 1.
The relationship of genotoxicity to carcinogenicity is well supported by a variety of indirect research findings, as shown in figure 2.
Figure 2. The interrelationships of genotoxicity and carcinogenicity
This correlation provides the basis for using biomarkers of genotoxicity in human monitoring as indicators of cancer hazard.
Genetic Toxicity in Hazard Identification
The role of genetic changes in carcinogenesis underscores the importance of genetic toxicity testing in the identification of potential carcinogens. Various short-term test methods have been developed which are able to detect some of the genotoxic end points presumed to be relevant in carcinogenesis.
Several extensive surveys have been performed to compare the carcinogenicity of chemicals with the results obtained by examining them in short-term tests. The general conclusion has been that, since no single validated test can provide information on all of the above-mentioned genetic end points, it is necessary to test each chemical in more than one assay. Also, the value of short-term tests of genetic toxicity for the prediction of chemical carcinogenicity has been discussed and reviewed repeatedly. On the basis of such reviews, a working group at the International Agency for Research on Cancer (IARC) concluded that most human carcinogens give positive results in routinely used short-term tests such as the Salmonella assay and the chromosome aberration assays (table 1). However, it must be realized that the epigenetic carcinogens—such as hormonally active compounds which can increase genotoxic activity without themselves being genotoxic—cannot be detected by short-term tests, which measure only the intrinsic genotoxic activity of a substance.
Table 1. Genotoxicity of chemicals evaluated in Supplements 6 and 7 to the IARC Monographs (1986)
Carcinogenicity classification | Ratio of evidence for genotoxicity/carcinogenicity | % |
1: human carcinogens | 24/30 | 80 |
2A: probable human carcinogens | 14/20 | 70 |
2B: possible human carcinogens | 72/128 | 56 |
3: not classifiable | 19/66 | 29 |
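Read as arithmetic, the % column is simply each ratio expressed as a percentage. The short sketch below (ours, not part of the IARC evaluation; the fractions are copied directly from the table) reproduces the column, rounded to whole per cent:

```python
# Reproduce the % column of table 1 from its ratio column.
# The fractions are copied from the table; everything else is illustrative.
ratios = {
    "1: human carcinogens": (24, 30),
    "2A: probable human carcinogens": (14, 20),
    "2B: possible human carcinogens": (72, 128),
    "3: not classifiable": (19, 66),
}

for group, (genotoxic, evaluated) in ratios.items():
    print(f"{group}: {genotoxic}/{evaluated} = {100 * genotoxic / evaluated:.0f}%")
```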
Genetic Biomonitoring
Genetic monitoring utilizes genetic toxicology methods for the biological monitoring of genetic effects or the assessment of genotoxic exposure in a group of individuals with a defined exposure at a worksite or through the environment or lifestyle. Genetic monitoring thus has the potential for early identification of genotoxic exposures in a group of persons, enabling high-risk populations to be identified and priorities for intervention to be set. The use of predictive biomarkers in an exposed population is warranted to save time (as compared with epidemiological techniques) and to prevent unnecessary end effects, namely cancer (figure 3).
Figure 3. The predictiveness of biomarkers enables preventive actions to be taken to decrease risks to health in human populations
The methods currently used for biomonitoring of genotoxic exposure and early biological effects are listed in table 2. The samples used for biomonitoring must meet several criteria, including the necessity that they be both easily obtainable and comparable with the target tissue.
Table 2. Biomarkers in genetic monitoring of genotoxicity exposure and the most commonly used cell/tissue samples.
Marker of genetic monitoring | Cell/tissue samples |
Chromosomal aberrations (CA) | Lymphocytes |
Sister chromatid exchanges (SCE) | Lymphocytes |
Micronuclei (MN) | Lymphocytes |
Point mutations (e.g., HPRT gene) | Lymphocytes and other tissues |
DNA adducts | DNA isolated from cells/tissues |
Protein adducts | Haemoglobin, albumin |
DNA strand breaks | DNA isolated from cells/tissues |
Oncogene activation | DNA or specific proteins isolated |
Mutations/oncoproteins | Various cells and tissues |
DNA repair | Isolated cells from blood samples |
The types of molecularly recognisable DNA damage include the formation of DNA adducts and reorganization of the DNA sequence. These kinds of damage can be detected by measurements of DNA adducts using various techniques, for example, either 32P-postlabelling or detection with monoclonal antibodies directed against DNA adducts. Measurement of DNA strand breaks is conventionally carried out using alkaline elution or unwinding assays. Mutations may be detected by sequencing the DNA of a specific gene, for example, the HPRT gene.
Several methodological reports have appeared that discuss the techniques of table 2 in detail (CEC 1987; IARC 1987, 1992, 1993).
Genotoxicity can also be monitored indirectly through the measurement of protein adducts, that is, in haemoglobin instead of DNA, or the monitoring of DNA repair activity. As a measuring strategy, the monitoring activity may be either one time or continuous. In all cases the results must be applied to the development of safe working conditions.
Cytogenetic Biomonitoring
A theoretical and empirical rationale links cancer to chromosome damage. Mutational events altering the activity or expression of growth-factor genes are key steps in carcinogenesis. Many types of cancers have been associated with specific or nonspecific chromosomal aberrations. In several hereditary human diseases, chromosome instability is associated with increased susceptibility to cancer.
Cytogenetic surveillance of people exposed to carcinogenic and/or mutagenic chemicals or radiation can bring to light effects on the genetic material of the individuals concerned. Chromosomal aberration studies of people exposed to ionizing radiation have been applied for biological dosimetry for decades, but well-documented positive results are as yet available only for a limited number of chemical carcinogens.
Microscopically recognizable chromosomal damage includes both structural chromosomal aberrations (CA), in which a gross change in the morphology (shape) of a chromosome has occurred, and sister chromatid exchanges (SCE). SCE is the symmetrical exchange of chromosomal material between two sister chromatids. Micronuclei (MN) can arise either from acentric chromosome fragments or from lagging whole chromosomes. These types of changes are illustrated in figure 4.
Figure 4. Human lymphocyte chromosomes at metaphase, revealing an induced chromosome mutation (arrow pointing to an acentric fragment)
Peripheral blood lymphocytes in humans are suitable cells to be used in surveillance studies because of their easy accessibility and because they can integrate exposure over a relatively long lifespan. Exposure to a variety of chemical mutagens may result in increased frequencies of CAs and/or SCEs in blood lymphocytes of exposed individuals. Also, the extent of damage is roughly correlated with exposure, although this has been shown with only a few chemicals.
When cytogenetic tests on peripheral blood lymphocytes show that the genetic material has been damaged, the results can be used to estimate risk only at the level of the population. An increased frequency of CAs in a population should be considered an indication of increased risk to cancer, but cytogenetic tests do not, as such, allow individual risk prediction of cancer.
Somatic genetic damage as seen through the narrow window of a sample of peripheral blood lymphocytes has little or no significance for the health of the individual, since most of the lymphocytes carrying genetic damage die and are replaced.
Problems and their Control in Human Biomonitoring Studies
Rigorous study design is necessary in the application of any human biomonitoring method, since many interindividual factors that are not related to the specific chemical exposure(s) of interest may affect the biological responses studied. Because human biomonitoring studies are tedious and difficult in many respects, careful preplanning is very important. In performing human cytogenetic studies, experimental confirmation of the chromosome-damaging potential of the exposing agent(s) should always be a prerequisite.
In cytogenetic biomonitoring studies, two major types of variations have been documented. The first includes technical factors associated with slide-reading discrepancies and with culture conditions, specifically with the type of medium, temperature, and concentration of chemicals (such as bromodeoxyuridine or cytochalasin-B). Also, sampling times can alter chromosome aberration yields, and possibly also findings of SCE incidence, through changes in subpopulations of T- and B-lymphocytes. In micronucleus analyses, methodological differences (e.g., use of binucleated cells induced by cytochalasin-B) quite clearly affect the scoring results.
The lesions induced in the DNA of lymphocytes by chemical exposure that lead to formation of structural chromosome aberrations, sister chromatid exchange, and micronuclei must persist in vivo until the blood is withdrawn and then in vitro until the cultured lymphocyte begins DNA synthesis. It is, therefore, important to score cells directly after the first division (in the case of chromosome aberrations or micronuclei) or after the second division (sister chromatid exchanges) in order to obtain the best estimate of induced damage.
Scoring constitutes an extremely important element in cytogenetic biomonitoring. Slides must be randomized and coded to avoid scorer bias as far as possible. Consistent scoring criteria, quality control and standardized statistical analyses and reporting should be maintained. The second category of variability is due to conditions associated with the subjects, such as age, sex, medication and infections. Individual variations can also be caused by genetic susceptibility to environmental agents.
It is critical to obtain a concurrent control group that is matched as closely as possible on internal factors such as sex and age, as well as on factors such as smoking status, viral infections and vaccinations, alcohol and drug intake, and exposure to x-rays. Additionally, it is necessary to obtain qualitative (job category, years exposed) and quantitative (e.g., breathing zone air samples for chemical analysis and specific metabolites, if possible) estimates of exposure to the putative genotoxic agent(s) in the workplace. Special consideration should be paid to the proper statistical treatment of the results.
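The proper statistical treatment mentioned above is not specified in this article; the following is only a minimal sketch of one conventional approach, comparing the frequency of aberrant cells in an exposed group with that in its matched control group using a two-sample proportion test. All counts are invented for illustration, and a real analysis would also need to address donor-to-donor variation and the confounders listed above.

```python
# Hedged sketch: two-sample comparison of aberrant-cell frequencies
# (exposed vs. matched controls) with a normal-approximation z-test.
# The counts below are hypothetical and for illustration only.
import math

exposed_aberrant, exposed_scored = 18, 2000   # aberrant cells / cells scored, exposed group
control_aberrant, control_scored = 6, 2000    # aberrant cells / cells scored, control group

p_exposed = exposed_aberrant / exposed_scored
p_control = control_aberrant / control_scored
p_pooled = (exposed_aberrant + control_aberrant) / (exposed_scored + control_scored)

standard_error = math.sqrt(
    p_pooled * (1 - p_pooled) * (1 / exposed_scored + 1 / control_scored)
)
z = (p_exposed - p_control) / standard_error
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"exposed {p_exposed:.4f} vs control {p_control:.4f}: z = {z:.2f}, p = {p_value:.3f}")
```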
Relevance of genetic biomonitoring to cancer risk assessment
The number of agents repeatedly shown to induce cytogenetic changes in humans is still relatively limited, but most known carcinogens induce damage in lymphocyte chromosomes.
The extent of damage is a function of exposure level, as has been shown to be the case with, for example, vinyl chloride, benzene, ethylene oxide, and alkylating anticancer agents. Even if the cytogenetic end points are not very sensitive or specific as regards the detection of exposures occurring in present-day occupational settings, positive results of such tests have often prompted implementation of hygienic controls even in the absence of direct evidence relating somatic chromosomal damage to adverse health outcomes.
Most experience with application of cytogenetic biomonitoring derives from “high exposure” occupational situations. Very few exposures have been confirmed by several independent studies, and most of these have been performed using chromosomal aberration biomonitoring. The database of the International Agency for Research on Cancer lists in its updated volumes 43–50 of the IARC Monographs a total of 14 occupational carcinogens in groups 1, 2A or 2B, for which there is positive human cytogenetic data available that are in most cases supported by corresponding animal cytogenetics (table 3). This limited database suggests that there is a tendency for carcinogenic chemicals to be clastogenic, and that clastogenicity tends to be associated with known human carcinogens. Quite clearly, however, not all carcinogens induce cytogenetic damage in humans or experimental animals in vivo. Cases in which the animal data are positive and the human findings are negative may represent differences in exposure levels. Also, the complex and long-term human exposures at work may not be comparable with short-term animal experiments.
Table 3. Proven, probable and possible human carcinogens for which occupational exposure exists and for which cytogenetic end points have been measured in both humans and experimental animals
Cytogenetic findings1
Agent/exposure | Humans: CA | SCE | MN | Animals: CA | SCE | MN |
GROUP 1, Human carcinogens | | | | | | |
Arsenic and arsenic compounds | ? | ? | | + | | + |
Asbestos | | ? | | – | | – |
Benzene | + | | | + | + | + |
Bis(chloromethyl)ether and chloromethyl methyl ether (technical grade) | (+) | | | – | | |
Cyclophosphamide | + | + | | + | + | + |
Hexavalent chromium compounds | + | + | | + | + | + |
Melphalan | + | + | | + | | |
Nickel compounds | + | – | | ? | | |
Radon | + | | | – | | |
Tobacco smoke | + | + | + | | + | |
Vinyl chloride | + | ? | | + | + | + |
GROUP 2A, Probable human carcinogens | | | | | | |
Acrylonitrile | – | | | – | | – |
Adriamycin | + | + | | + | + | + |
Cadmium and cadmium compounds | – | (–) | | – | | |
Cisplatin | | + | | + | + | |
Epichlorohydrin | + | | | ? | + | – |
Ethylene dibromide | – | – | | – | + | – |
Ethylene oxide | + | + | + | + | + | + |
Formaldehyde | ? | ? | | – | | – |
GROUP 2B, Possible human carcinogens | | | | | | |
Chlorophenoxy herbicides (2,4-D and 2,4,5-T) | – | – | | + | + | – |
DDT | ? | | | + | | – |
Dimethylformamide | (+) | | | | – | – |
Lead compounds | ? | ? | | ? | – | ? |
Styrene | + | ? | + | ? | + | + |
2,3,7,8-Tetrachlorodibenzo-para-dioxin | ? | | | – | – | – |
Welding fumes | + | + | | – | – | |
1 CA, chromosomal aberration; SCE, sister chromatid exchange; MN, micronuclei.
(–) = negative relationship for one study; – = negative relationship;
(+) = positive relationship for one study; + = positive relationship;
? = inconclusive; blank area = not studied
Source: IARC, 1987; updated through volumes 43–50 of IARC monographs.
Studies of genotoxicity in exposed humans include various end points other than chromosomal end points, such as DNA damage, DNA repair activity, and adducts in DNA and in proteins. Some of these end points may be more relevant than others for the prediction of carcinogenic hazard. Stable genetic changes (e.g., chromosomal rearrangements, deletions and point mutations) are highly relevant, since these types of damage are known to be related to carcinogenesis. The significance of DNA adducts depends on their chemical identification and on evidence that they result from the exposure. Some end points, such as SCE, UDS and SSB (DNA single-strand breaks), are potential indicators and/or markers of genetic events; however, their value is reduced in the absence of a mechanistic understanding of their ability to lead to genetic events. Clearly, the most relevant genetic marker in humans would be the induction of a specific mutation that has been directly associated with cancer in rodents exposed to the agent under study (figure 5).
Figure 5. Relevance of different genetic biomonitoring effects for potential cancer risk
Ethical Considerations for Genetic Biomonitoring
Rapid advances in molecular genetic techniques, the enhanced speed of sequencing of the human genome, and the identification of the role of tumour suppressor genes and proto-oncogenes in human carcinogenesis raise ethical issues in the interpretation, communication and use of this kind of personal information. Quickly improving techniques for the analysis of human genes will soon allow the identification of yet more inherited susceptibility genes in healthy, asymptomatic individuals (US Office of Technology Assessment 1990), lending themselves to use in genetic screening.
Many questions of social and ethical concern will be raised if the application of genetic screening soon becomes a reality. Already, roughly 50 genetic traits of metabolism, enzyme polymorphisms and DNA repair are suspected of conferring sensitivity to specific diseases, and a diagnostic DNA test is available for about 300 genetic diseases. Should any genetic screening at all be performed at the workplace? Who is to decide who will undergo testing, and how will the information be used in employment decisions? Who will have access to the information obtained from genetic screening, and how will the results be communicated to the person(s) involved? Many of these questions are strongly linked to social norms and prevailing ethical values. The main objective must be the prevention of disease and human suffering, but respect must be accorded to the individual’s own will and ethical premises. Some of the relevant ethical questions which must be answered well before the outset of any workplace biomonitoring study are given in table 4 and are also discussed in the chapter Ethical Issues.
Table 4. Some ethical principles relating to the need to know in occupational genetic biomonitoring studies
Groups to whom information is given
Information given | Persons studied | Occupational health unit | Employer |
What is being studied | | | |
Why is the study performed | | | |
Are there risks involved | | | |
Confidentiality issues | | | |
Preparedness for possible hygienic improvements, exposure reductions indicated | | | |
Time and effort must be put into the planning phase of any genetic biomonitoring study, and all necessary parties—the employees, employers, and the medical personnel of the collaborating workplace—must be well-informed before the study, and the results made known to them after the study as well. With proper care and reliable results, genetic biomonitoring can help to ensure safer workplaces and improve workers’ health.
It has long been recognized that each person’s response to environmental chemicals is different. The recent explosion in molecular biology and genetics has brought a clearer understanding of the molecular basis of such variability. Major determinants of individual response to chemicals include important differences among more than a dozen superfamilies of enzymes, collectively termed xenobiotic- (foreign to the body) or drug-metabolizing enzymes. Although the role of these enzymes has classically been regarded as detoxification, these same enzymes also convert a number of inert compounds to highly toxic intermediates. Recently, many subtle as well as gross differences in the genes encoding these enzymes have been identified, which have been shown to result in marked variations in enzyme activity. It is now clear that each individual possesses a distinct complement of xenobiotic-metabolizing enzyme activities; this diversity might be thought of as a “metabolic fingerprint”. It is the complex interplay of these many different enzyme superfamilies which ultimately determines not only the fate and the potential for toxicity of a chemical in any given individual, but also how exposure to that chemical is assessed. In this article we have chosen to use the cytochrome P450 enzyme superfamily to illustrate the remarkable progress made in understanding individual response to chemicals. The development of relatively simple DNA-based tests designed to identify specific gene alterations in these enzymes is now providing more accurate predictions of individual response to chemical exposure. We hope the result will be preventive toxicology. In other words, each individual might learn about those chemicals to which he or she is particularly sensitive, thereby avoiding previously unpredictable toxicity or cancer.
Although it is not generally appreciated, human beings are exposed daily to a barrage of innumerable diverse chemicals. Many of these chemicals are highly toxic, and they are derived from a wide variety of environmental and dietary sources. The relationship between such exposures and human health has been, and continues to be, a major focus of biomedical research efforts worldwide.
What are some examples of this chemical bombardment? More than 400 chemicals from red wine have been isolated and characterized. At least 1,000 chemicals are estimated to be produced by a lighted cigarette. There are countless chemicals in cosmetics and perfumed soaps. Another major source of chemical exposure is agriculture: in the United States alone, farmlands receive more than 75,000 chemicals each year in the form of pesticides, herbicides and fertilizing agents; after uptake by plants and grazing animals, as well as fish in nearby waterways, humans (at the end of the food chain) ingest these chemicals. Two other sources of large concentrations of chemicals taken into the body include (a) drugs taken chronically and (b) exposure to hazardous substances in the workplace over a lifetime of employment.
It is now well established that chemical exposure may adversely affect many aspects of human health, causing chronic diseases and the development of many cancers. In the last decade or so, the molecular basis of many of these relationships has begun to be unravelled. In addition, the realization has emerged that humans differ markedly in their susceptibility to the harmful effects of chemical exposure.
Current efforts to predict human response to chemical exposure combine two fundamental approaches (figure 1): monitoring the extent of human exposure through biological markers (biomarkers), and predicting the likely response of an individual to a given level of exposure. Although both of these approaches are extremely important, it should be emphasized that the two are distinctly different from one another. This article will focus on the genetic factors underlying individual susceptibility to any particular chemical exposure. This field of research is broadly termed ecogenetics, or pharmacogenetics (see Kalow 1962 and 1992). Many of the recent advances in determining individual susceptibility to chemical toxicity have evolved from a greater appreciation of the processes by which humans and other mammals detoxify chemicals, and the remarkable complexity of the enzyme systems involved.
Figure 1. The interrelationships among exposure assessment, ethnic differences, age, diet, nutrition and genetic susceptibility assessment - all of which play a role in the individual risk of toxicity and cancer
We will first describe the variability of toxic responses in humans. We will then introduce some of the enzymes responsible for such variation in response, due to differences in the metabolism of foreign chemicals. Next, the history and nomenclature of the cytochrome P450 superfamily will be detailed. Five human P450 polymorphisms as well as several non-P450 polymorphisms will be briefly described; these are responsible for human differences in toxic response. We will then discuss an example to emphasize the point that genetic differences in individuals can influence exposure assessment, as determined by environmental monitoring. Lastly, we will discuss the role of these xenobiotic-metabolizing enzymes in critical life functions.
Variation in Toxic Response Among the Human Population
Toxicologists and pharmacologists commonly speak about the average lethal dose for 50% of the population (LD50), the average maximal tolerated dose for 50% of the population (MTD50), and the average effective dose of a particular drug for 50% of the population (ED50). However, how do these doses affect each of us on an individual basis? For example, a highly sensitive individual may be 500 times more affected, or 500 times more likely to be affected, than the most resistant individual in a population; for these people, the LD50 (and MTD50 and ED50) values would have little meaning. LD50, MTD50 and ED50 values are relevant only when referring to the population as a whole.
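As a purely illustrative aside (the numbers below are invented and the interpolation is our own, not a method described in this article), the sketch shows what an ED50 summarizes: the dose at which half of a study group responds. It is a population median and says nothing about the sensitive or resistant outliers discussed next.

```python
# Estimate an ED50 from hypothetical group response data by log-linear
# interpolation around the 50% response level. Illustrative only.
import math

doses =      [1,    2,    5,    10,   20,   50,   100]   # arbitrary dose units
responding = [0.02, 0.08, 0.25, 0.50, 0.75, 0.93, 0.98]  # fraction of the group responding

ed50 = None
for (d_lo, r_lo), (d_hi, r_hi) in zip(zip(doses, responding), zip(doses[1:], responding[1:])):
    if r_lo <= 0.5 <= r_hi:
        fraction = (0.5 - r_lo) / (r_hi - r_lo)
        ed50 = math.exp(math.log(d_lo) + fraction * (math.log(d_hi) - math.log(d_lo)))
        break

print(f"estimated ED50 = {ed50:.1f} dose units (a population median, "
      "not a statement about any one individual)")
```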
Figure 2 illustrates a hypothetical dose-response relationship for a toxic response by individuals in any given population. This generic diagram might represent bronchogenic carcinoma in response to the number of cigarettes smoked, chloracne as a function of dioxin levels in the workplace, asthma as a function of air concentrations of ozone or aldehyde, sunburn in response to ultraviolet light, decreased clotting time as a function of aspirin intake, or gastrointestinal distress in response to the number of jalapeño peppers consumed. Generally, in each of these instances, the greater the exposure, the greater the toxic response. Most of the population will exhibit a toxic response that clusters around the mean, within roughly one standard deviation, at any given dose. The “resistant outlier” (lower right in figure 2) is an individual having less of a response at higher doses or exposures. A “sensitive outlier” (upper left) is an individual having an exaggerated response to a relatively small dose or exposure. These outliers, with extreme differences in response compared to the majority of individuals in the population, may represent important genetic variants that can help scientists in attempting to understand the underlying molecular mechanisms of a toxic response.
Figure 2. Generic relationship between any toxic response and the dose of any environmental, chemical or physical agent
Using these outliers in family studies, scientists in a number of laboratories have begun to appreciate the importance of Mendelian inheritance for a given toxic response. Subsequently, one can then turn to molecular biology and genetic studies to pinpoint the underlying mechanism at the gene level (genotype) responsible for the environmentally caused disease (phenotype).
Xenobiotic- or Drug-metabolizing Enzymes
How does the body respond to the myriad of exogenous chemicals to which we are exposed? Humans and other mammals have evolved highly complex metabolic enzyme systems comprising more than a dozen distinct superfamilies of enzymes. Almost every chemical to which humans are exposed will be modified by these enzymes, in order to facilitate removal of the foreign substance from the body. Collectively, these enzymes are frequently referred to as drug-metabolizing enzymes or xenobiotic-metabolizing enzymes. Actually, both terms are misnomers. First, many of these enzymes metabolize not only drugs but also hundreds of thousands of environmental and dietary chemicals. Second, all of these enzymes also have normal body compounds as substrates; none of these enzymes metabolizes only foreign chemicals.
For more than four decades, the metabolic processes mediated by these enzymes have commonly been classified as either Phase I or Phase II reactions (figure 3). Phase I (“functionalization”) reactions generally involve relatively minor structural modifications of the parent chemical via oxidation, reduction or hydrolysis in order to produce a more water-soluble metabolite. Frequently, Phase I reactions provide a “handle” for further modification of a compound by subsequent Phase II reactions. Phase I reactions are primarily mediated by a superfamily of highly versatile enzymes, collectively termed cytochromes P450, although other enzyme superfamilies can also be involved (figure 4).
Figure 3. The classical designation of Phase I and Phase II xenobiotic- or drug-metabolizing enzymes
Figure 4. Examples of drug-metabolizing enzymes
Phase II reactions involve the coupling of a water-soluble endogenous molecule to a chemical (parent chemical or Phase I metabolite) in order to facilitate excretion. Phase II reactions are frequently termed “conjugation” or “derivatization” reactions. The enzyme superfamilies catalyzing Phase II reactions are generally named according to the endogenous conjugating moiety involved: for example, acetylation by the N-acetyltransferases, sulphation by the sulphotransferases, glutathione conjugation by the glutathione transferases, and glucuronidation by the UDP glucuronosyltransferases (figure 4). Although the major organ of drug metabolism is the liver, the levels of some drug-metabolizing enzymes are quite high in the gastrointestinal tract, gonads, lung, brain and kidney, and such enzymes are undoubtedly present to some extent in every living cell.
Xenobiotic-metabolizing Enzymes Represent Double-edged Swords
As we learn more about the biological and chemical processes leading to human health aberrations, it has become increasingly evident that drug-metabolizing enzymes function in an ambivalent manner (figure 3). In the majority of cases, lipid-soluble chemicals are converted to more readily excreted water-soluble metabolites. However, it is clear that on many occasions the same enzymes are capable of transforming other inert chemicals into highly reactive molecules. These intermediates can then interact with cellular macromolecules such as proteins and DNA. Thus, for each chemical to which humans are exposed, there exists the potential for the competing pathways of metabolic activation and detoxification.
Brief Review of Genetics
In human genetics, each gene (locus) is located on one of the 23 pairs of chromosomes. The two alleles (one present on each chromosome of the pair) can be the same, or they can be different from one another. Consider, for example, the B and b alleles, where B (brown eyes) is dominant over b (blue eyes): individuals of the brown-eyed phenotype can have either the BB or Bb genotype, whereas individuals of the blue-eyed phenotype can only have the bb genotype.
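As a trivial sketch of this genotype-to-phenotype mapping (our illustration, not part of the original text):

```python
# B (brown) is dominant over b (blue): only the bb genotype gives blue eyes.
def eye_colour_phenotype(genotype: str) -> str:
    return "brown" if "B" in genotype else "blue"

for genotype in ("BB", "Bb", "bb"):
    print(genotype, "->", eye_colour_phenotype(genotype))
```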
A polymorphism is defined as two or more stably inherited phenotypes (traits)—derived from the same gene(s)—that are maintained in the population, often for reasons not necessarily obvious. For a gene to be polymorphic, the gene product must not be essential for development, reproductive vigour or other critical life processes. In fact, a “balanced polymorphism,” wherein the heterozygote has a distinct survival advantage over either homozygote (e.g., resistance to malaria, and the sickle-cell haemoglobin allele) is a common explanation for maintaining an allele in the population at otherwise unexplained high frequencies (see Gonzalez and Nebert 1990).
Human Polymorphisms of Xenobiotic-metabolizing Enzymes
Genetic differences in the metabolism of various drugs and environmental chemicals have been known for more than four decades (Kalow 1962 and 1992). These differences are frequently referred to as pharmacogenetic or, more broadly, ecogenetic polymorphisms. These polymorphisms represent variant alleles that occur at a relatively high frequency in the population and are generally associated with aberrations in enzyme expression or function. Historically, polymorphisms were usually identified following unexpected responses to therapeutic agents. More recently, recombinant DNA technology has enabled scientists to identify the precise alterations in genes that are responsible for some of these polymorphisms. Polymorphisms have now been characterized in many drug-metabolizing enzymes—including both Phase I and Phase II enzymes. As more and more polymorphisms are identified, it is becoming increasingly apparent that each individual may possess a distinct complement of drug-metabolizing enzymes. This diversity might be described as a “metabolic fingerprint”. It is the complex interplay of the various drug-metabolizing enzyme superfamilies within any individual that will ultimately determine his or her particular response to a given chemical (Kalow 1962 and 1992; Nebert 1988; Gonzalez and Nebert 1990; Nebert and Weber 1990).
Expressing Human Xenobiotic-metabolizing Enzymes in Cell Culture
How might we develop better predictors of human toxic responses to chemicals? Advances in defining the multiplicity of drug-metabolizing enzymes must be accompanied by precise knowledge as to which enzymes determine the metabolic fate of individual chemicals. Data gleaned from laboratory rodent studies have certainly provided useful information. However, significant interspecies differences in xenobiotic-metabolizing enzymes necessitate caution in extrapolating data to human populations. To overcome this difficulty, many laboratories have developed systems in which various cell lines in culture can be engineered to produce functional human enzymes that are stable and in high concentrations (Gonzalez, Crespi and Gelboin 1991). Successful production of human enzymes has been achieved in a variety of diverse cell lines from sources including bacteria, yeast, insects and mammals.
In order to define the metabolism of chemicals even more accurately, multiple enzymes have also been successfully produced in a single cell line (Gonzalez, Crespi and Gelboin 1991). Such cell lines provide valuable insights into the precise enzymes involved in the metabolic processing of any given compound and likely toxic metabolites. If this information can then be combined with knowledge regarding the presence and level of an enzyme in human tissues, these data should provide valuable predictors of response.
Cytochrome P450
History and nomenclature
The cytochrome P450 superfamily is one of the most studied drug-metabolizing enzyme superfamilies, and one associated with a great deal of individual variability in response to chemicals. Cytochrome P450 is a convenient generic term used to describe a large superfamily of enzymes pivotal in the metabolism of innumerable endogenous and exogenous substrates. The term cytochrome P450 was first coined in 1962 to describe an unknown pigment in cells which, when reduced and bound with carbon monoxide, produced a characteristic absorption peak at 450 nm. Since the early 1980s, cDNA cloning technology has resulted in remarkable insights into the multiplicity of cytochrome P450 enzymes. To date, more than 400 distinct cytochrome P450 genes have been identified in animals, plants, bacteria and yeast. It has been estimated that any one mammalian species, such as humans, may possess 60 or more distinct P450 genes (Nebert and Nelson 1991). The multiplicity of P450 genes has necessitated the development of a standardized nomenclature system (Nebert et al. 1987; Nelson et al. 1993). First proposed in 1987 and updated on a biannual basis, the nomenclature system is based on divergent evolution of amino acid sequence comparisons between P450 proteins. The P450 genes are divided into families and subfamilies: enzymes within a family display greater than 40% amino acid similarity, and those within the same subfamily display greater than 55% similarity. P450 genes are named with the root symbol CYP followed by an Arabic numeral designating the P450 family, a letter denoting the subfamily, and a further Arabic numeral designating the individual gene (Nelson et al. 1993; Nebert et al. 1991). Thus, CYP1A1 represents P450 gene 1 in family 1 and subfamily A.
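The naming rule just described is regular enough to be captured in a few lines of code. The sketch below is only an illustration of the convention (the function and examples are ours, not part of the nomenclature system) and covers simple symbols of the CYP1A1 or CYP2D6 form rather than every published variant:

```python
# Parse a simple CYP gene symbol into family, subfamily and gene number,
# following the convention described in the text. Illustrative only.
import re

CYP_PATTERN = re.compile(r"^CYP(?P<family>\d+)(?P<subfamily>[A-Z])(?P<gene>\d+)$")

def parse_cyp(symbol: str) -> dict:
    match = CYP_PATTERN.match(symbol)
    if match is None:
        raise ValueError(f"not a simple CYP gene symbol: {symbol}")
    return {
        "family": int(match.group("family")),
        "subfamily": match.group("subfamily"),
        "gene": int(match.group("gene")),
    }

print(parse_cyp("CYP1A1"))   # {'family': 1, 'subfamily': 'A', 'gene': 1}
print(parse_cyp("CYP2D6"))   # {'family': 2, 'subfamily': 'D', 'gene': 6}
```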
As of February 1995, there are 403 CYP genes in the database, composed of 59 families and 105 subfamilies. These include eight lower eukaryotic families, 15 plant families, and 19 bacterial families. The 15 human P450 gene families comprise 26 subfamilies, 22 of which have been mapped to chromosomal locations throughout most of the genome. Some sequences are clearly orthologous across many species—for example, only one CYP17 (steroid 17α-hydroxylase) gene has been found in all vertebrates examined to date; other sequences within a subfamily are highly duplicated, making the identification of orthologous pairs impossible (e.g., the CYP2C subfamily). Interestingly, human and yeast share an orthologous gene in the CYP51 family. Numerous comprehensive reviews are available for readers seeking further information on the P450 superfamily (Nelson et al. 1993; Nebert et al. 1991; Nebert and McKinnon 1994; Guengerich 1993; Gonzalez 1992).
The success of the P450 nomenclature system has resulted in similar terminology systems being developed for the UDP glucuronosyltransferases (Burchell et al. 1991) and flavin-containing mono-oxygenases (Lawton et al. 1994). Similar nomenclature systems based on divergent evolution are also under development for several other drug-metabolizing enzyme superfamilies (e.g., sulphotransferases, epoxide hydrolases and aldehyde dehydrogenases).
Recently, we divided the mammalian P450 gene superfamily into three groups (Nebert and McKinnon 1994)—those involved principally with foreign chemical metabolism, those involved in the synthesis of various steroid hormones, and those participating in other important endogenous functions. It is the xenobiotic-metabolizing P450 enzymes that assume the most significance for prediction of toxicity.
Xenobiotic-metabolizing P450 enzymes
P450 enzymes involved in the metabolism of foreign compounds and drugs are almost always found within families CYP1, CYP2, CYP3 and CYP4. These P450 enzymes catalyze a wide variety of metabolic reactions, with a single P450 often capable of metabolizing many different compounds. In addition, multiple P450 enzymes may metabolize a single compound at different sites. Also, a compound may be metabolized at the same, single site by several P450s, although at varying rates.
A most important property of the drug-metabolizing P450 enzymes is that many of these genes are inducible by the very substances which serve as their substrates. On the other hand, other P450 genes are induced by nonsubstrates. This phenomenon of enzyme induction underlies many drug-drug interactions of therapeutic importance.
Although present in many tissues, these particular P450 enzymes are found in relatively high levels in the liver, the primary site of drug metabolism. Some of the xenobiotic-metabolizing P450 enzymes exhibit activity toward certain endogenous substrates (e.g., arachidonic acid). However, it is generally believed that most of these xenobiotic-metabolizing P450 enzymes do not play important physiological roles—although this has not been established experimentally as yet. The selective homozygous disruption, or “knock-out,” of individual xenobiotic-metabolizing P450 genes by means of gene targeting methodologies in mice is likely to provide unequivocal information soon with regard to physiological roles of the xenobiotic-metabolizing P450s (for a review of gene targeting, see Capecchi 1994).
In contrast to P450 families encoding enzymes involved primarily in physiological processes, families encoding xenobiotic-metabolizing P450 enzymes display marked species specificity and frequently contain many active genes per subfamily (Nelson et al. 1993; Nebert et al. 1991). Given the apparent lack of physiological substrates, it is possible that the P450 enzymes in families CYP1, CYP2, CYP3 and CYP4, which have appeared in the past several hundred million years, evolved as a means of detoxifying foreign chemicals encountered in the environment and diet. Clearly, evolution of the xenobiotic-metabolizing P450s would have occurred over a time period which far precedes the synthesis of most of the synthetic chemicals to which humans are now exposed. The genes in these four gene families may have evolved and diverged in animals due to their exposure to plant metabolites during the last 1.2 billion years—a process descriptively termed “animal-plant warfare” (Gonzalez and Nebert 1990). Animal-plant warfare is the phenomenon in which plants developed new chemicals (phytoalexins) as a defence mechanism in order to prevent ingestion by animals, and animals, in turn, responded by developing new P450 genes to accommodate the diversifying substrates. Providing further impetus to this proposal are the recently described examples of plant-insect and plant-fungus chemical warfare involving P450 detoxification of toxic substrates (Nebert 1994).
The following is a brief introduction to several of the human xenobiotic-metabolizing P450 enzyme polymorphisms in which genetic determinants of toxic response are believed to be of high significance. Until recently, P450 polymorphisms were generally suggested by unexpected variance in patient response to administered therapeutic agents. Several P450 polymorphisms are indeed named according to the drug with which the polymorphism was first identified. More recently, research efforts have focused on identification of the precise P450 enzymes involved in the metabolism of chemicals for which variance is observed, and on the precise characterization of the P450 genes involved. As described earlier, the measurable activity of a P450 enzyme towards a model chemical can be called the phenotype. The particular combination of alleles of a P450 gene carried by an individual is termed the P450 genotype. As more and more scrutiny is applied to the analysis of P450 genes, the precise molecular basis of previously documented phenotypic variance is becoming clearer.
The CYP1A subfamily
The CYP1A subfamily comprises two enzymes in humans and all other mammals: these are designated CYP1A1 and CYP1A2 under standard P450 nomenclature. These enzymes are of considerable interest, because they are involved in the metabolic activation of many procarcinogens and are also induced by several compounds of toxicological concern, including dioxin. For example, CYP1A1 metabolically activates many compounds found in cigarette smoke. CYP1A2 metabolically activates many arylamines—associated with urinary bladder cancer—found in the chemical dye industry. CYP1A2 also metabolically activates 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), a tobacco-derived nitrosamine. CYP1A1 and CYP1A2 are also found at higher levels in the lungs of cigarette smokers, due to induction by polycyclic hydrocarbons present in the smoke. The levels of CYP1A1 and CYP1A2 activity are therefore considered to be important determinants of individual response to many potentially toxic chemicals.
Toxicological interest in the CYP1A subfamily was greatly intensified by a 1973 report correlating the level of CYP1A1 inducibility in cigarette smokers with individual susceptibility to lung cancer (Kellermann, Shaw and Luyten-Kellermann 1973). The molecular basis of CYP1A1 and CYP1A2 induction has been a major focus of numerous laboratories. The induction process is mediated by a protein termed the Ah receptor to which dioxins and structurally related chemicals bind. The name Ah is derived from the aryl hydrocarbon nature of many CYP1A inducers. Interestingly, differences in the gene encoding the Ah receptor between strains of mice result in marked differences in chemical response and toxicity. A polymorphism in the Ah receptor gene also appears to occur in humans: approximately one-tenth of the population displays high induction of CYP1A1 and may be at greater risk than the other nine-tenths of the population for development of certain chemically induced cancers. The role of the Ah receptor in the control of enzymes in the CYP1A subfamily, and its role as a determinant of human response to chemical exposure, has been the subject of several recent reviews (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Are there other polymorphisms that might control the level of CYP1A proteins in a cell? A polymorphism in the CYP1A1 gene has also been identified, and this appears to influence lung cancer risk amongst Japanese cigarette smokers, although this same polymorphism does not appear to influence risk in other ethnic groups (Nebert and McKinnon 1994).
CYP2C19
Variations in the rate at which individuals metabolize the anticonvulsant drug (S)-mephenytoin have been well documented for many years (Guengerich 1989). Between 2% and 5% of Caucasians and as many as 25% of Asians are deficient in this activity and may be at greater risk of toxicity from the drug. This enzyme defect has long been known to involve a member of the human CYP2C subfamily, but the precise molecular basis of the deficiency has been the subject of considerable controversy. The major reason for this difficulty was the presence of six or more genes in the human CYP2C subfamily. It was recently demonstrated, however, that a single-base mutation in the CYP2C19 gene is the primary cause of this deficiency (Goldstein and de Morais 1994). A simple DNA test, based on the polymerase chain reaction (PCR), has also been developed to identify this mutation rapidly in human populations (Goldstein and de Morais 1994).
CYP2D6
Perhaps the most extensively characterized variation in a P450 gene is that involving the CYP2D6 gene. More than a dozen examples of mutations, rearrangements and deletions affecting this gene have been described (Meyer 1994). This polymorphism was first suggested 20 years ago by clinical variability in patients’ response to the antihypertensive agent debrisoquine. Alterations in the CYP2D6 gene giving rise to altered enzyme activity are therefore collectively termed the debrisoquine polymorphism.
Prior to the advent of DNA-based studies, individuals had been classified as poor or extensive metabolizers (PMs, EMs) of debrisoquine based on metabolite concentrations in urine samples. It is now clear that alterations in the CYP2D6 gene may result in individuals displaying not only poor or extensive debrisoquine metabolism, but also ultrarapid metabolism. Most alterations in the CYP2D6 gene are associated with partial or total deficiency of enzyme function; however, individuals in two families have recently been described who possess multiple functional copies of the CYP2D6 gene, giving rise to ultrarapid metabolism of CYP2D6 substrates (Meyer 1994). This remarkable observation provides new insights into the wide spectrum of CYP2D6 activity previously observed in population studies. Alterations in CYP2D6 function are of particular significance, given the more than 30 commonly prescribed drugs metabolized by this enzyme. An individual’s CYP2D6 function is therefore a major determinant of both therapeutic and toxic response to administered therapy. Indeed, it has recently been argued that consideration of a patient’s CYP2D6 status is necessary for the safe use of both psychiatric and cardiovascular drugs.
The role of the CYP2D6 polymorphism as a determinant of individual susceptibility to human diseases such as lung cancer and Parkinson’s disease has also been the subject of intense study (Nebert and McKinnon 1994; Meyer 1994). While conclusions are difficult to define given the diverse nature of the study protocols utilized, the majority of studies appear to indicate an association between extensive metabolizers of debrisoquine (EM phenotype) and lung cancer. The reasons for such an association are presently unclear. However, the CYP2D6 enzyme has been shown to metabolize NNK, a tobacco-derived nitrosamine.
As DNA-based assays improve—enabling even more accurate assessment of CYP2D6 status—it is anticipated that the precise relationship of CYP2D6 to disease risk will be clarified. Whereas the extensive metabolizer may be linked with susceptibility to lung cancer, the poor metabolizer (PM phenotype) appears to be associated with Parkinson’s disease of unknown cause. Although these studies are also difficult to compare, it appears that PM individuals having a diminished capacity to metabolize CYP2D6 substrates (e.g., debrisoquine) have a 2- to 2.5-fold increased risk of developing Parkinson’s disease.
CYP2E1
The CYP2E1 gene encodes an enzyme that metabolizes many chemicals, including drugs and many low-molecular-weight carcinogens. This enzyme is also of interest because it is highly inducible by alcohol and may play a role in liver injury induced by chemicals such as chloroform, vinyl chloride and carbon tetrachloride. The enzyme is primarily found in the liver, and the level of enzyme varies markedly between individuals. Close scrutiny of the CYP2E1 gene has resulted in the identification of several polymorphisms (Nebert and McKinnon 1994). A relationship has been reported in some studies between the presence of certain structural variations in the CYP2E1 gene and an apparently lowered lung cancer risk; however, there are clear interethnic differences, and this possible relationship requires further clarification.
The CYP3A subfamily
In humans, four enzymes have been identified as members of the CYP3A subfamily due to their similarity in amino acid sequence. The CYP3A enzymes metabolize many commonly prescribed drugs such as erythromycin and cyclosporin. The carcinogenic food contaminant aflatoxin B1 is also a CYP3A substrate. One member of the human CYP3A subfamily, designated CYP3A4, is the principal P450 in human liver as well as being present in the gastrointestinal tract. As is true for many other P450 enzymes, the level of CYP3A4 is highly variable between individuals. A second enzyme, designated CYP3A5, is found in only approximately 25% of livers; the genetic basis of this finding has not been elucidated. The importance of CYP3A4 or CYP3A5 variability as a factor in genetic determinants of toxic response has not yet been established (Nebert and McKinnon 1994).
Non-P450 Polymorphisms
Numerous polymorphisms also exist within other xenobiotic-metabolizing enzyme superfamilies (e.g., glutathione transferases, UDP glucuronosyltransferases, para-oxonases, dehydrogenases, N-acetyltransferases and flavin-containing mono-oxygenases). Because the ultimate toxicity of any P450-generated intermediate is dependent on the efficiency of subsequent Phase II detoxification reactions, the combined role of multiple enzyme polymorphisms is important in determining susceptibility to chemically induced diseases. The metabolic balance between Phase I and Phase II reactions (figure 3) is therefore likely to be a major factor in chemically induced human diseases and genetic determinants of toxic response.
The GSTM1 gene polymorphism
A well studied example of a polymorphism in a Phase II enzyme is that involving a member of the glutathione S-transferase enzyme superfamily, designated GST mu or GSTM1. This particular enzyme is of considerable toxicological interest because it appears to be involved in the subsequent detoxification of toxic metabolites produced from chemicals in cigarette smoke by the CYP1A1 enzyme. The identified polymorphism in this glutathione transferase gene involves a total absence of functional enzyme in as many as half of all Caucasians studied. This lack of a Phase II enzyme appears to be associated with increased susceptibility to lung cancer. By grouping individuals on the basis of both variant CYP1A1 genes and the deletion or presence of a functional GSTM1 gene, it has been demonstrated that the risk of developing smoking-induced lung cancer varies significantly (Kawajiri, Watanabe and Hayashi 1994). In particular, individuals displaying one rare CYP1A1 gene alteration, in combination with an absence of the GSTM1 gene, were at higher risk (as much as ninefold) of developing lung cancer when exposed to a relatively low level of cigarette smoke. Interestingly, there appear to be interethnic differences in the significance of variant genes which necessitate further study in order to elucidate the precise role of such alterations in susceptibility to disease (Kalow 1962; Nebert and McKinnon 1994; Kawajiri, Watanabe and Hayashi 1994).
Synergistic effect of two or more polymorphisms on the toxic response
A toxic response to an environmental agent may be greatly exaggerated by the combination of two pharmacogenetic defects in the same individual, for example, the combined effects of the N-acetyltransferase (NAT2) polymorphism and the glucose-6-phosphate dehydrogenase (G6PD) polymorphism.
Occupational exposure to arylamines constitutes a grave risk of urinary bladder cancer. Since the elegant studies of Cartwright in 1954, it has become clear that N-acetylator status is a determinant of azo-dye-induced bladder cancer. There is a highly significant correlation between the slow-acetylator phenotype and the occurrence of bladder cancer, as well as the degree of invasiveness of this cancer in the bladder wall. By contrast, there is a significant association between the rapid-acetylator phenotype and the incidence of colorectal carcinoma. The N-acetyltransferase (NAT1, NAT2) genes have been cloned and sequenced, and DNA-based assays are now able to detect the more than a dozen allelic variants which account for the slow-acetylator phenotype. The NAT2 gene is polymorphic and responsible for most of the variability in toxic response to environmental chemicals (Weber 1987; Grant 1993).
Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme critical in the generation and maintenance of NADPH. Low or absent G6PD activity can lead to severe drug- or xenobiotic-induced haemolysis, due to the absence of normal levels of reduced glutathione (GSH) in the red blood cell. G6PD deficiency affects at least 300 million people worldwide. More than 10% of African-American males exhibit the less severe phenotype, while certain Sardinian communities exhibit the more severe “Mediterranean type” at frequencies as high as one in every three persons. The G6PD gene has been cloned and localized to the X chromosome, and numerous diverse point mutations account for the large degree of phenotypic heterogeneity seen in G6PD-deficient individuals (Beutler 1992).
Thiazolsulphone, an arylamine sulpha drug, was found to cause a bimodal distribution of haemolytic anaemia in the treated population. When treated with certain drugs, individuals with the combination of G6PD deficiency plus the slow-acetylator phenotype are more affected than those with G6PD deficiency alone or the slow-acetylator phenotype alone. G6PD-deficient slow acetylators are at least 40 times more susceptible than normal-G6PD rapid acetylators to thiazolsulphone-induced haemolysis.
Effect of genetic polymorphisms on exposure assessment
Exposure assessment and biomonitoring (figure 1) also require information on the genetic make-up of each individual. Given identical exposure to a hazardous chemical, the level of haemoglobin adducts (or other biomarkers) might vary by two or three orders of magnitude among individuals, depending upon each person’s metabolic fingerprint.
This combination of pharmacogenetic phenotypes has been studied in chemical factory workers in Germany (table 1). Haemoglobin adducts among workers exposed to aniline and acetanilide are by far the highest in G6PD-deficient slow acetylators, as compared with the other possible combined pharmacogenetic phenotypes. This finding has important implications for exposure assessment: although two individuals might be exposed to the same ambient level of a hazardous chemical in the workplace, the internal dose estimated from biomarkers such as haemoglobin adducts might differ by two or more orders of magnitude, owing to each individual’s underlying genetic predisposition. Likewise, the resulting risk of an adverse health effect may vary by two or more orders of magnitude.
Table 1: Haemoglobin adducts in workers exposed to aniline and acetanilide
| Acetylator status | G6PD deficiency | Hgb adducts |
|-------------------|-----------------|-------------|
| Fast              | No              | 2           |
| Slow              | No              | 30          |
| Fast              | Yes             | 20          |
| Slow              | Yes             | 100         |
Source: Adapted from Lewalter and Korallus 1985.
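For illustration, the fold-differences implied by table 1 can be computed relative to the least susceptible phenotype (fast acetylator with normal G6PD); the data layout below is only a sketch of the tabulated values.

```python
# Minimal sketch: expressing the haemoglobin-adduct levels from table 1 as
# fold-differences relative to the fast-acetylator, normal-G6PD reference group.
# The numbers are those shown in the table; the dictionary layout is illustrative.

adducts = {
    ("fast", "normal G6PD"):    2,
    ("slow", "normal G6PD"):   30,
    ("fast", "G6PD deficient"): 20,
    ("slow", "G6PD deficient"): 100,
}

baseline = adducts[("fast", "normal G6PD")]
for (acetylator, g6pd), level in adducts.items():
    print(f"{acetylator} acetylator, {g6pd}: adduct level {level} "
          f"({level / baseline:.0f}-fold the reference)")

# The slow-acetylator, G6PD-deficient group comes out roughly 50-fold above the
# reference, in line with the "at least 40 times more susceptible" statement above.
```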
Genetic differences in binding as well as metabolism
It should be emphasized that the same case made here for metabolism can also be made for binding. Heritable differences in the binding of environmental agents will greatly affect the toxic response. For example, differences in the mouse cdm gene can profoundly affect individual sensitivity to cadmium-induced testicular necrosis (Taylor, Heiniger and Meier 1973). Differences in the binding affinity of the Ah receptor are likely to affect dioxin-induced toxicity and cancer (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Figure 5 summarizes the role of metabolism and binding in toxicity and cancer. Toxic agents, as they exist in the environment or following metabolism or binding, elicit their effects by either a genotoxic pathway (in which damage to DNA occurs) or a non-genotoxic pathway (in which DNA damage and mutagenesis need not occur). Interestingly, it has recently become clear that “classical” DNA-damaging agents can operate via a reduced glutathione (GSH)-dependent nongenotoxic signal transduction pathway, which is initiated on or near the cell surface in the absence of DNA and outside the cell nucleus (Devary et al. 1993). Genetic differences in metabolism and binding remain, however, as the major determinants in controlling different individual toxic responses.
Figure 5. The general means by which toxicity occurs
Role of Drug-metabolizing Enzymes in Cellular Function
Genetically based variation in drug-metabolizing enzyme function is of major importance in determining individual response to chemicals. These enzymes are pivotal in determining the fate and time course of a foreign chemical following exposure.
As illustrated in figure 5, the importance of drug-metabolizing enzymes in individual susceptibility to chemical exposure may in fact present a far more complex issue than is evident from this simple discussion of xenobiotic metabolism. During the past two decades, genotoxic mechanisms (measured, for example, as DNA adducts and protein adducts) have been greatly emphasized. However, what if nongenotoxic mechanisms are at least as important as genotoxic mechanisms in causing toxic responses?
As mentioned earlier, the physiological roles of many drug-metabolizing enzymes involved in xenobiotic metabolism have not been accurately defined. Nebert (1994) has proposed that, because of their presence on this planet for more than 3.5 billion years, drug-metabolizing enzymes were originally (and are now still primarily) responsible for regulating the cellular levels of many nonpeptide ligands important in the transcriptional activation of genes affecting growth, differentiation, apoptosis, homeostasis and neuroendocrine functions. Furthermore, the toxicity of most, if not all, environmental agents occurs by means of agonist or antagonist action on these signal transduction pathways (Nebert 1994). Based on this hypothesis, genetic variability in drug-metabolizing enzymes may have quite dramatic effects on many critical biochemical processes within the cell, thereby leading to important differences in toxic response. It is indeed possible that such a scenario may also underlie many idiosyncratic adverse reactions encountered in patients using commonly prescribed drugs.
Conclusions
The past decade has seen remarkable progress in our understanding of the genetic basis of differential response to chemicals in drugs, foods and environmental pollutants. Drug-metabolizing enzymes have a profound influence on the way humans respond to chemicals. As our awareness of drug-metabolizing enzyme multiplicity continues to evolve, we are increasingly able to make improved assessments of toxic risk for many drugs and environmental chemicals. This is perhaps most clearly illustrated in the case of the CYP2D6 cytochrome P450 enzyme. Using relatively simple DNA-based tests, it is possible to predict the likely response to any drug predominantly metabolized by this enzyme; such predictions will help ensure the safer use of valuable, yet potentially toxic, medication.
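As a minimal, simplified sketch of how such a DNA-based test result might be translated into a predicted CYP2D6 metabolizer phenotype, the following code assigns an activity value to each allele and sums the pair; the activity values and cut-offs are illustrative assumptions rather than a clinical algorithm.

```python
# Simplified sketch of translating a CYP2D6 genotype into a predicted
# metabolizer phenotype. Allele activity values and thresholds are
# illustrative assumptions; real genotype-to-phenotype translation relies
# on curated allele tables and more nuanced rules.

ALLELE_ACTIVITY = {
    "*1": 1.0,   # functional (assumed activity contribution)
    "*2": 1.0,   # functional
    "*10": 0.5,  # reduced function
    "*41": 0.5,  # reduced function
    "*3": 0.0,   # non-functional
    "*4": 0.0,   # non-functional
    "*5": 0.0,   # gene deletion
}

def predicted_phenotype(allele_1: str, allele_2: str) -> str:
    """Predict a metabolizer phenotype from two star-allele calls (simplified)."""
    score = ALLELE_ACTIVITY[allele_1] + ALLELE_ACTIVITY[allele_2]
    # Gene duplications, which can produce ultrarapid metabolizers, are not modelled here.
    if score == 0:
        return "poor metabolizer"
    if score <= 1.0:
        return "intermediate metabolizer"
    return "normal (extensive) metabolizer"

print(predicted_phenotype("*4", "*4"))   # poor metabolizer
print(predicted_phenotype("*1", "*4"))   # intermediate metabolizer
print(predicted_phenotype("*1", "*2"))   # normal (extensive) metabolizer
```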
The future will no doubt see an explosion in the identification of further polymorphisms (phenotypes) involving drug-metabolizing enzymes. This information will be accompanied by improved, minimally invasive DNA-based tests to identify genotypes in human populations.
Such studies should be particularly informative in evaluating the role of chemicals in the many environmental diseases of presently unknown origin. The consideration of multiple drug-metabolizing enzyme polymorphisms in combination (e.g., table 1) is also likely to represent a particularly fertile research area. Such studies will help clarify the role of chemicals in the causation of cancers. Collectively, this information should enable the formulation of increasingly individualized advice on avoidance of chemicals likely to be of individual concern. This is the field of preventive toxicology. Such advice will no doubt greatly assist all individuals in coping with the ever-increasing chemical burden to which we are exposed.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."