28. Epidemiology and Statistics

Chapter Editors:  Franco Merletti, Colin L. Soskolne and Paolo Vineis


Table of Contents


Epidemiological Method Applied to Occupational Health and Safety
Franco Merletti, Colin L. Soskolne and Paolo Vineis

Exposure Assessment
M. Gerald Ott

Summary Worklife Exposure Measures
Colin L. Soskolne

Measuring Effects of Exposures
Shelia Hoar Zahm

     Case Study: Measures
     Franco Merletti, Colin L. Soskolne and Paolo Vineis

Options in Study Design
Sven Hernberg

Validity Issues in Study Design
Annie J. Sasco

Impact of Random Measurement Error
Paolo Vineis and Colin L. Soskolne

Statistical Methods
Annibale Biggeri and Mario Braga

Causality Assessment and Ethics in Epidemiological Research
Paolo Vineis

Case Studies Illustrating Methodological Issues in the Surveillance of Occupational Diseases
Jung-Der Wang

Questionnaires in Epidemiological Research
Steven D. Stellman and Colin L. Soskolne

Asbestos Historical Perspective
Lawrence Garfinkel

Tables


1. Five selected summary measures of worklife exposure

2. Measures of disease occurrence

3. Measures of association for a cohort study

4. Measures of association for case-control studies

5. General frequency table layout for cohort data

6. Sample layout of case-control data

7. Layout case-control data - one control per case

8. Hypothetical cohort of 1950 individuals to T2

9. Indices of central tendency & dispersion

10. A binomial experiment & probabilities

11. Possible outcomes of a binomial experiment

12. Binomial distribution, 15 successes/30 trials

13. Binomial distribution, p = 0.25; 30 trials

14. Type II error & power; x = 12, n = 30, α = 0.05

15. Type II error & power; x = 12, n = 40, α = 0.05

16. 632 workers exposed to asbestos 20 years or longer

17. O/E number of deaths among 632 asbestos workers

29. Ergonomics

Chapter Editors:  Wolfgang Laurig and Joachim Vedder


Table of Contents


Overview
Wolfgang Laurig and Joachim Vedder

Goals, Principles and Methods

The Nature and Aims of Ergonomics
William T. Singleton

Analysis of Activities, Tasks and Work Systems
Véronique De Keyser

Ergonomics and Standardization
Friedhelm Nachreiner

Checklists
Pranab Kumar Nag

Physical and Physiological Aspects

Anthropometry
Melchiorre Masali

Muscular Work
Juhani Smolander and Veikko Louhevaara

Postures at Work
Ilkka Kuorinka

Biomechanics
Frank Darby

General Fatigue
Étienne Grandjean

Fatigue and Recovery
Rolf Helbig and Walter Rohmert

Psychological Aspects

Mental Workload
Winfried Hacker

Vigilance
Herbert Heuer

Mental Fatigue
Peter Richter

Organizational Aspects of Work

Work Organization
Eberhard Ulich and Gudela Grote

Sleep Deprivation
Kazutaka Kogi

Work Systems Design

Workstations
Roland Kadefors

Tools
T.M. Fraser

Controls, Indicators and Panels
Karl H. E. Kroemer

Information Processing and Design
Andries F. Sanders

Designing for Everyone

Designing for Specific Groups
Joke H. Grady-van den Nieuwboer

     Case Study: The International Classification of Functional Limitation in People

Cultural Differences
Houshang Shahnavaz

Elderly Workers
Antoine Laville and Serge Volkoff

Workers with Special Needs
Joke H. Grady-van den Nieuwboer

Diversity and Importance of Ergonomics: Two Examples

System Design in Diamond Manufacturing
Issachar Gilad

Disregarding Ergonomic Design Principles: Chernobyl
Vladimir M. Munipov 

Tables


1. Basic anthropometric core list

2. Fatigue & recovery dependent on activity levels

3. Rules for combining the effects of two stress factors on strain

4. Differentiating among several negative consequences of mental strain

5. Work-oriented principles for production structuring

6. Participation in organizational context

7. User participation in the technology process

8. Irregular working hours & sleep deprivation

9. Aspects of advance, anchor & retard sleeps

10. Control movements & expected effects

11. Control-effect relations of common hand controls

12. Rules for arrangement of controls

13. Guidelines for labels

32. Record Systems and Surveillance

Chapter Editor:  Steven D. Stellman


Table of Contents


Occupational Disease Surveillance and Reporting Systems
Steven B. Markowitz

Occupational Hazard Surveillance
David H. Wegman and Steven D. Stellman

Surveillance in Developing Countries
David Koh and Kee-Seng Chia

Development and Application of an Occupational Injury and Illness Classification System
Elyce Biddle

Risk Analysis of Nonfatal Workplace Injuries and Illnesses
John W. Ruser

Case Study: Worker Protection and Statistics on Accidents and Occupational Diseases - HVBG, Germany
Martin Butz and Burkhard Hoffmann

Case Study: Wismut - A Uranium Exposure Revisited
Heinz Otten and Horst Schulz

Measurement Strategies and Techniques for Occupational Exposure Assessment in Epidemiology
Frank Bochmann and Helmut Blome

Case Study: Occupational Health Surveys in China

Tables


1. Angiosarcoma of the liver - world register

2. Occupational illness, US, 1986 versus 1992

3. US Deaths from pneumoconiosis & pleural mesothelioma

4. Sample list of notifiable occupational diseases

5. Illness & injury reporting code structure, US

6. Nonfatal occupational injuries & illnesses, US 1993

7. Risk of occupational injuries & illnesses

8. Relative risk for repetitive motion conditions

9. Workplace accidents, Germany, 1981-93

10. Grinders in metalworking accidents, Germany, 1984-93

11. Occupational disease, Germany, 1980-93

12. Infectious diseases, Germany, 1980-93

13. Radiation exposure in the Wismut mines

14. Occupational diseases in Wismut uranium mines 1952-90

33. Toxicology

Chapter Editor: Ellen K. Silbergeld


Table of Contents


Introduction
Ellen K. Silbergeld, Chapter Editor

General Principles of Toxicology

Definitions and Concepts
Bo Holmberg, Johan Hogberg and Gunnar Johanson

Toxicokinetics
Dušan Djuríc

Target Organ and Critical Effects
Marek Jakubowski

Effects of Age, Sex and Other Factors
Spomenka Telišman

Genetic Determinants of Toxic Response
Daniel W. Nebert and Ross A. McKinnon

Mechanisms of Toxicity

Introduction and Concepts
Philip G. Watanabe

Cellular Injury and Cellular Death
Benjamin F. Trump and Irene K. Berezesky

Genetic Toxicology
R. Rita Misra and Michael P. Waalkes

Immunotoxicology
Joseph G. Vos and Henk van Loveren

Target Organ Toxicology
Ellen K. Silbergeld

Toxicology Test Methods

Biomarkers
Philippe Grandjean

Genetic Toxicity Assessment
David M. DeMarini and James Huff

In Vitro Toxicity Testing
Joanne Zurlo

Structure-Activity Relationships
Ellen K. Silbergeld

Regulatory Toxicology

Toxicology in Health and Safety Regulation
Ellen K. Silbergeld

Principles of Hazard Identification - The Japanese Approach
Masayuki Ikeda

The United States Approach to Risk Assessment of Reproductive Toxicants and Neurotoxic Agents
Ellen K. Silbergeld

Approaches to Hazard Identification - IARC
Harri Vainio and Julian Wilbourn

Appendix - Overall Evaluations of Carcinogenicity to Humans: IARC Monographs Volumes 1-69 (836)

Carcinogen Risk Assessment: Other Approaches
Cees A. van der Heijden

Tables 


  1. Examples of critical organs & critical effects
  2. Basic effects of possible multiple interactions of metals
  3. Haemoglobin adducts in workers exposed to aniline & acetanilide
  4. Hereditary, cancer-prone disorders & defects in DNA repair
  5. Examples of chemicals that exhibit genotoxicity in human cells
  6. Classification of tests for immune markers
  7. Examples of biomarkers of exposure
  8. Pros & cons of methods for identifying human cancer risks
  9. Comparison of in vitro systems for hepatotoxicity studies
  10. Comparison of SAR & test data: OECD/NTP analyses
  11. Regulation of chemical substances by laws, Japan
  12. Test items under the Chemical Substance Control Law, Japan
  13. Chemical substances & the Chemical Substances Control Law
  14. Selected major neurotoxicity incidents
  15. Examples of specialized tests to measure neurotoxicity
  16. Endpoints in reproductive toxicology
  17. Comparison of low-dose extrapolation procedures
  18. Frequently cited models in carcinogen risk characterization


Pesticides

Introduction

Human exposure to pesticides has different characteristics according to whether it occurs during industrial production or use (table 1). The formulation of commercial products (by mixing active ingredients with other coformulants) has some exposure characteristics in common with pesticide use in agriculture. In fact, since formulation is typically performed by small industries which manufacture many different products in successive operations, the workers are exposed to each of several pesticides for a short time. In public health and agriculture, the use of a variety of compounds is generally the rule, although in some specific applications (for example, cotton defoliation or malaria control programmes) a single product may be used.

Table 1. Comparison of exposure characteristics during production and use of pesticides

| | Exposure on production | Exposure on use |
| --- | --- | --- |
| Duration of exposure | Continuous and prolonged | Variable and intermittent |
| Degree of exposure | Fairly constant | Extremely variable |
| Type of exposure | To one or few compounds | To numerous compounds, either in sequence or concomitantly |
| Skin absorption | Easy to control | Variable according to work procedures |
| Ambient monitoring | Useful | Seldom informative |
| Biological monitoring | Complementary to ambient monitoring | Very useful when available |

Source: WHO 1982a, modified.

The measurement of biological indicators of exposure is particularly useful for pesticide users, for whom the conventional techniques of exposure assessment through ambient air monitoring are seldom applicable. Most pesticides are lipid-soluble substances that penetrate the skin, and this percutaneous (skin) absorption makes biological indicators especially important for assessing the level of exposure in these circumstances.

Organophosphate Insecticides

Biological indicators of effect.

Cholinesterases are the target enzymes that account for organophosphate (OP) toxicity in insect and mammalian species. There are two principal types of cholinesterases in the human organism: acetylcholinesterase (ACHE) and plasma cholinesterase (PCHE). OPs cause toxic effects in humans through the inhibition of synaptic acetylcholinesterase in the nervous system. Acetylcholinesterase is also present in red blood cells, where its function is unknown. Plasma cholinesterase is a generic term covering an inhomogeneous group of enzymes present in glial cells, plasma, liver and some other organs. PCHE is inhibited by OPs, but its inhibition does not produce known functional derangements.

Inhibition of blood ACHE and PCHE activity is highly correlated with the intensity and duration of OP exposure. Because blood ACHE involves the same molecular target as that responsible for acute OP toxicity in the nervous system, it is a more specific indicator than PCHE. However, the sensitivity of blood ACHE and PCHE to OP inhibition varies among individual OP compounds: at the same blood concentration, some inhibit more ACHE and others more PCHE.

A reasonable correlation exists between blood ACHE activity and the clinical signs of acute toxicity (table 2). The correlation tends to be better when the rate of inhibition is faster. When inhibition occurs slowly, as with chronic low-level exposures, the correlation with illness may be low or totally non-existent. It must be noted that blood ACHE inhibition is not predictive of chronic or delayed effects.

Table 2. Severity and prognosis of acute OP toxicity at different levels of ACHE inhibition

| ACHE inhibition (%) | Level of poisoning | Clinical symptoms | Prognosis |
| --- | --- | --- | --- |
| 50–60 | Mild | Weakness, headache, dizziness, nausea, salivation, lacrimation, miosis, moderate bronchial spasm | Convalescence in 1–3 days |
| 60–90 | Moderate | Abrupt weakness, visual disturbance, excess salivation, sweating, vomiting, diarrhoea, bradycardia, hypertonia, tremors of hands and head, disturbed gait, miosis, pain in the chest, cyanosis of the mucous membranes | Convalescence in 1–2 weeks |
| 90–100 | Severe | Abrupt tremor, generalized convulsions, psychic disturbance, intensive cyanosis, lung oedema, coma | Death from respiratory or cardiac failure |

Variations of ACHE and PCHE activities have been observed in healthy people and in specific physiopathological conditions (table 3). Thus, the sensitivity of these tests in monitoring OP exposure can be increased by adopting individual pre-exposure values as a reference. Cholinesterase activities after exposure are then compared with the individual baseline values. One should make use of population cholinesterase activity reference values only when pre-exposure cholinesterase levels are not known (table 4).
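
To make the baseline comparison concrete, the short Python sketch below (an illustrative addition, not part of the original recommendations; the function names are hypothetical) computes the percentage inhibition of ACHE relative to a worker's pre-exposure baseline and assigns the severity band from table 2.

```python
def ache_inhibition_percent(baseline: float, measured: float) -> float:
    """Percentage inhibition of ACHE relative to the worker's own
    pre-exposure baseline (both activities in the same units)."""
    return 100.0 * (baseline - measured) / baseline


def severity_band(inhibition: float) -> str:
    """Severity bands for acute OP toxicity, taken from table 2."""
    if inhibition >= 90:
        return "severe"
    if inhibition >= 60:
        return "moderate"
    if inhibition >= 50:
        return "mild"
    return "below the range associated with acute symptoms"


# Example: baseline 4.0 IU/ml, post-exposure 1.4 IU/ml -> 65% inhibition, "moderate"
print(severity_band(ache_inhibition_percent(4.0, 1.4)))
```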

Table 3. Variations of ACHE and PCHE activities in healthy people and in selected physiopathological conditions

| Condition | ACHE activity | PCHE activity |
| --- | --- | --- |
| Healthy people | | |
| Interindividual variation1 | 10–18% | 15–25% |
| Intraindividual variation1 | 3–7% | 6% |
| Sex differences | No | 10–15% higher in males |
| Age | Reduced up to 6 months old | |
| Body mass | | Positive correlation |
| Serum cholesterol | | Positive correlation |
| Seasonal variation | No | No |
| Circadian variation | No | No |
| Menstruation | | Decreased |
| Pregnancy | | Decreased |
| Pathological conditions | | |
| Reduced activity | Leukaemia, neoplasm | Liver disease; uraemia; cancer; heart failure; allergic reactions |
| Increased activity | Polycythaemia; thalassaemia; other congenital blood dyscrasias | Hyperthyroidism; other conditions of high metabolic rate |

1 Source: Augustinsson 1955 and Gage 1967.

Table 4. Cholinesterase activities of healthy people without exposure to OP, measured with selected methods

| Method | Sex | ACHE* | PCHE* |
| --- | --- | --- | --- |
| Michel1 (ΔpH/h) | male | 0.77 ± 0.08 | 0.95 ± 0.19 |
| | female | 0.75 ± 0.08 | 0.82 ± 0.19 |
| Titrimetric1 (μmol/min/ml) | male/female | 13.2 ± 0.31 | 4.90 ± 0.02 |
| Ellman’s modified2 (IU/ml) | male | 4.01 ± 0.65 | 3.03 ± 0.66 |
| | female | 3.45 ± 0.61 | 3.03 ± 0.68 |

* Mean result ± standard deviation.
Source: 1 Laws 1991. 2 Alcini et al. 1988.

Blood should preferably be sampled within two hours after exposure. Venipuncture is preferred to extracting capillary blood from a finger or earlobe, because in exposed subjects the sampling point can be contaminated with pesticide residing on the skin. Three sequential samples are recommended to establish a normal baseline for each worker before exposure (WHO 1982b).

Several analytical methods are available for the determination of blood ACHE and PCHE. According to WHO, the Ellman spectrophotometric method (Ellman et al. 1961) should serve as a reference method.
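
As an illustration of the principle (the calculation below is a sketch added here, not a protocol from the WHO document): the Ellman method follows the increase in absorbance at 412 nm produced by the coloured reaction product, and enzyme activity is obtained from the absorbance slope through the Beer-Lambert law. The molar absorptivity of about 13,600 l/(mol·cm) and the 1:200 dilution in the example are assumptions.

```python
EPSILON_412 = 13600.0  # molar absorptivity of the coloured product, l/(mol*cm); assumed
PATH_CM = 1.0          # cuvette light path, cm


def ellman_activity_iu_per_ml(delta_a_per_min: float, dilution: float) -> float:
    """Convert an absorbance slope at 412 nm (delta A/min) into enzyme
    activity in IU/ml (micromol of substrate hydrolysed per min per ml),
    correcting for the dilution of the blood sample in the cuvette."""
    mol_per_l_per_min = delta_a_per_min / (EPSILON_412 * PATH_CM)
    return mol_per_l_per_min * 1000.0 * dilution  # 1 mol/l = 1,000 micromol/ml


# Example: a slope of 0.25/min on blood diluted 1:200 gives about 3.7 IU/ml,
# of the same order as the reference values in table 4.
print(round(ellman_activity_iu_per_ml(0.25, 200.0), 2))
```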

Biological indicators of exposure.

The determination in urine of metabolites that are derived from the alkyl phosphate moiety of the OP molecule or of the residues generated by the hydrolysis of the P–X bond (figure 1) has been used to monitor OP exposure.

Figure 1. Hydrolysis of OP insecticides


Alkyl phosphate metabolites.

The alkyl phosphate metabolites detectable in urine and the main parent compound from which they can originate are listed in table 5. Urinary alkyl phosphates are sensitive indicators of exposure to OP compounds: the excretion of these metabolites in urine is usually detectable at an exposure level at which plasma or erythrocyte cholinesterase inhibition cannot be detected. The urinary excretion of alkyl phosphates has been measured for different conditions of exposure and for various OP compounds (table 6). The existence of a relationship between external doses of OP and alkyl phosphate urinary concentrations has been established in a few studies. In some studies a significant relationship between cholinesterase activity and levels of alkyl phosphates in urine has also been demonstrated.

Table 5. Alkyl phosphates detectable in urine as metabolites of OP pesticides

| Metabolite | Abbreviation | Principal parent compounds |
| --- | --- | --- |
| Monomethylphosphate | MMP | Malathion, parathion |
| Dimethylphosphate | DMP | Dichlorvos, trichlorfon, mevinphos, malaoxon, dimethoate, fenchlorphos |
| Diethylphosphate | DEP | Paraoxon, demeton-oxon, diazinon-oxon, dichlorfenthion |
| Dimethylthiophosphate | DMTP | Fenitrothion, fenchlorphos, malathion, dimethoate |
| Diethylthiophosphate | DETP | Diazinon, demeton, parathion, fenchlorphos |
| Dimethyldithiophosphate | DMDTP | Malathion, dimethoate, azinphos-methyl |
| Diethyldithiophosphate | DEDTP | Disulfoton, phorate |
| Phenylphosphoric acid | | Leptophos, EPN |

Table 6. Examples of levels of urinary alkyl phosphates measured in various conditions of exposure to OP

| Compound | Condition of exposure | Route of exposure | Metabolite concentrations1 (mg/l) |
| --- | --- | --- | --- |
| Parathion2 | Nonfatal poisoning | Oral | DEP = 0.5; DETP = 3.9 |
| Disulfoton2 | Formulators | Dermal/inhalation | DEP = 0.01–4.40; DETP = 0.01–1.57; DEDTP = <0.01–0.05 |
| Phorate2 | Formulators | Dermal/inhalation | DEP = 0.02–5.14; DETP = 0.08–4.08; DEDTP = <0.01–0.43 |
| Malathion3 | Sprayers | Dermal | DMDTP = <0.01 |
| Fenitrothion3 | Sprayers | Dermal | DMP = 0.01–0.42; DMTP = 0.02–0.49 |
| Monocrotophos4 | Sprayers | Dermal/inhalation | DMP = <0.04–6.3/24 h |

1 For abbreviations see table 5.
2 Dillon and Ho 1987.
3 Richter 1993.
4 van Sittert and Dumas 1990.

 Alkyl phosphates are usually excreted in urine within a short time. Samples collected soon after the end of the workday are suitable for metabolite determination.

The measurement of alkyl phosphates in urine requires a rather sophisticated analytical method, based on derivatization of the compounds and detection by gas-liquid chromatography (Shafik et al. 1973a; Reid and Watts 1981).

Hydrolytic residues.

p-Nitrophenol (PNP) is the phenolic metabolite of parathion (ethyl parathion), methyl parathion and EPN. The measurement of PNP in urine (Cranmer 1970) has been widely used and has proven successful in evaluating exposure to parathion. Urinary PNP correlates well with the absorbed dose of parathion. With PNP urinary levels up to 2 mg/l, the absorption of parathion does not cause symptoms, and little or no reduction of cholinesterase activities is observed. PNP excretion is rapid, and urinary levels of PNP become insignificant 48 hours after exposure. Thus, urine samples should be collected soon after exposure.

Carbamates

Biological indicators of effect.

Carbamate pesticides include insecticides, fungicides and herbicides. Insecticidal carbamate toxicity is due to the inhibition of synaptic ACHE, while other mechanisms of toxicity are involved for herbicidal and fungicidal carbamates. Thus, only exposure to carbamate insecticides can be monitored through the assay of cholinesterase activity in red blood cells (ACHE) or plasma (PCHE). ACHE is usually more sensitive to carbamate inhibitors than PCHE. Cholinergic symptoms have usually been observed in carbamate-exposed workers with a blood ACHE activity lower than 70% of the individual baseline level (WHO 1982a).

Inhibition of cholinesterases by carbamates is rapidly reversible. Therefore, false negative results can be obtained if too much time elapses between exposure and biological sampling or between sampling and analysis. In order to avoid such problems, it is recommended that blood samples be collected and analysed within four hours after exposure. Preference should be given to the analytical methods that allow the determination of cholinesterase activity immediately after blood sampling, as discussed for organophosphates.

Biological indicators of exposure.

The measurement of urinary excretion of carbamate metabolites as a method to monitor human exposure has so far been applied only to a few compounds and in limited studies. Table 7 summarizes the relevant data. Since carbamates are promptly excreted in the urine, samples collected soon after the end of exposure are suitable for metabolite determination. Analytical methods for the measurement of carbamate metabolites in urine have been reported by Dawson et al. (1964), DeBernardinis and Wargin (1982) and Verberk et al. (1990).

Table 7. Levels of urinary carbamate metabolites measured in field studies

| Compound | Biological index | Condition of exposure | Environmental concentrations | Results | References |
| --- | --- | --- | --- | --- | --- |
| Carbaryl | α-naphthol | formulators | 0.23–0.31 mg/m3 | x = 18.5 mg/l1, max. excretion rate = 80 mg/day | WHO 1982a |
| | α-naphthol | mixer/applicators | | x = 8.9 mg/l, range = 0.2–65 mg/l | WHO 1982a |
| | α-naphthol | unexposed population | | range = 1.5–4 mg/l | WHO 1982a |
| Pirimicarb | metabolites I2 and V3 | applicators | | range = 1–100 mg/l | Verberk et al. 1990 |

1 Systemic poisonings have been occasionally reported.
2 2-dimethylamino-4-hydroxy-5,6-dimethylpyrimidine.
3 2-methylamino-4-hydroxy-5,6-dimethylpyrimidine.
x = mean.

Dithiocarbamates

Biological indicators of exposure.

Dithiocarbamates (DTC) are widely used fungicides, chemically grouped in three classes: thiurams, dimethyldithiocarbamates and ethylene-bis-dithiocarbamates.

Carbon disulphide (CS2) and its main metabolite 2-thiothiazolidine-4-carboxylic acid (TTCA) are metabolites common to almost all DTC. A significant increase in urinary concentrations of these compounds has been observed for different conditions of exposure and for various DTC pesticides. Ethylene thiourea (ETU) is an important urinary metabolite of ethylene-bis-dithiocarbamates. It may also be present as an impurity in market formulations. Since ETU has been determined to be a teratogen and a carcinogen in rats and in other species and has been associated with thyroid toxicity, it has been widely applied to monitor ethylene-bis-dithiocarbamate exposure. ETU is not compound-specific, as it may be derived from maneb, mancozeb or zineb.

Measurement of the metals present in the DTC has been proposed as an alternative approach in monitoring DTC exposure. Increased urinary excretion of manganese has been observed in workers exposed to mancozeb (table 8).

Table 8. Levels of urinary dithiocarbamate metabolites measured in field studies

| Compound | Biological index | Condition of exposure | Environmental concentrations* ± standard deviation | Results ± standard deviation | References |
| --- | --- | --- | --- | --- | --- |
| Ziram | Carbon disulphide (CS2) | formulators | 1.03 ± 0.62 mg/m3 | 3.80 ± 3.70 mg/l | Maroni et al. 1992 |
| | TTCA1 | formulators | | 0.45 ± 0.37 mg/l | Maroni et al. 1992 |
| Maneb/Mancozeb | ETU2 | applicators | | range = <0.2–11.8 mg/l | Kurttio et al. 1990 |
| Mancozeb | Manganese | applicators | 57.2 mg/m3 | pre-exposure: 0.32 ± 0.23 mg/g creatinine; post-exposure: 0.53 ± 0.34 mg/g creatinine | Canossa et al. 1993 |

* Mean result according to Maroni et al. 1992.
1 TTCA = 2-thiothiazolidine-4-carboxylic acid.
2 ETU = ethylene thiourea.

 CS2, TTCA, and manganese are commonly found in urine of non-exposed subjects. Thus, the measurement of urinary levels of these compounds prior to exposure is recommended. Urine samples should be collected in the morning following the cessation of exposure. Analytical methods for the measurements of CS2, TTCA and ETU have been reported by Maroni et al. (1992).
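
Results such as those in table 8 are often normalized to urinary creatinine to correct spot samples for dilution; the conversion is a simple division, as in this minimal sketch (the helper name and the example figures are illustrative only).

```python
def per_g_creatinine(analyte_mg_per_l: float, creatinine_g_per_l: float) -> float:
    """Normalize a urinary analyte concentration (mg/l) to the urinary
    creatinine concentration (g/l), giving mg of analyte per g creatinine."""
    return analyte_mg_per_l / creatinine_g_per_l


# Example: 0.48 mg/l of analyte in a spot sample with 1.5 g/l creatinine
print(round(per_g_creatinine(0.48, 1.5), 2))  # -> 0.32 mg/g creatinine
```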

Synthetic Pyrethroids

Biological indicators of exposure.

Synthetic pyrethroids are insecticides similar to natural pyrethrins. Urinary metabolites suitable for the biological monitoring of exposure have been identified through studies with human volunteers. The acidic metabolite 3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropane carboxylic acid (Cl2CA) is excreted by subjects orally dosed with permethrin or cypermethrin, and its bromo-analogue (Br2CA) by subjects treated with deltamethrin. In volunteers treated with cypermethrin, a phenoxy metabolite, 4-hydroxyphenoxybenzoic acid (4-HPBA), has also been identified. These tests, however, have not often been applied in monitoring occupational exposures because of the complex analytical techniques required (Eadsforth, Bragt and van Sittert 1988; Kolmodin-Hedman, Swensson and Akerblom 1982). In applicators exposed to cypermethrin, urinary levels of Cl2CA have been found to range from 0.05 to 0.18 mg/l, while in formulators exposed to α-cypermethrin, urinary levels of 4-HPBA have been found to be lower than 0.02 mg/l.

A 24-hour urine collection period started after the end of exposure is recommended for metabolite determinations.

Organochlorines

Biological indicators of exposure.

Organochlorine (OC) insecticides were widely used in the 1950s and 1960s. Subsequently, the use of many of these compounds was discontinued in many countries because of their persistence and consequent contamination of the environment.

Biological monitoring of OC exposure can be carried out through the determination of intact pesticides or their metabolites in blood or serum (Dale, Curley and Cueto 1966; Barquet, Morgade and Pfaffenberger 1981). After absorption, aldrin is rapidly metabolized to dieldrin and can be measured as dieldrin in blood. Endrin has a very short half-life in blood; its blood concentration is therefore of use only in determining recent exposure. The determination of the urinary metabolite anti-12-hydroxy-endrin has also proven useful in monitoring endrin exposure (van Sittert and Tordoir 1987).

Significant correlations between the concentration of biological indicators and the onset of toxic effects have been demonstrated for some OC compounds. Instances of toxicity due to aldrin and dieldrin exposure have been related to levels of dieldrin in blood above 200 μg/l. A blood lindane concentration of 20 μg/l has been indicated as the upper critical level as far as neurological signs and symptoms are concerned. No acute adverse effects have been reported in workers with blood endrin concentrations below 50 μg/l. Absence of early adverse effects (induction of liver microsomal enzymes) has been shown on repeated exposures to endrin at urinary anti-12-hydroxy-endrin concentrations below 130 μg/g creatinine and on repeated exposures to DDT at DDT or DDE serum concentrations below 250 μg/l.

OC may be found in low concentrations in the blood or urine of the general population. Examples of observed values are as follows: lindane blood concentrations up to 1 μg/l, dieldrin up to 10 μg/l, DDT or DDE up to 100 μg/l, and anti-12-hydroxy-endrin up to 1 μg/g creatinine. Thus, a baseline assessment prior to exposure is recommended.

For exposed subjects, blood samples should be taken immediately after the end of a single exposure. For conditions of long-term exposure, the time of collection of the blood sample is not critical. Urine spot samples for urinary metabolite determination should be collected at the end of exposure.

Triazines

Biological indicators of exposure.

The measurement of urinary excretion of triazinic metabolites and of the unmodified parent compound has been applied to subjects exposed to atrazine in limited studies. Figure 2 shows the urinary excretion profiles of atrazine metabolites in a manufacturing worker with dermal exposure to atrazine ranging from 174 to 275 μmol per workshift (Catenacci et al. 1993). Since other chlorotriazines (simazine, propazine, terbuthylazine) follow the same biotransformation pathway as atrazine, levels of dealkylated triazinic metabolites may be determined to monitor exposure to all chlorotriazine herbicides.

Figure 2. Urinary excretion profiles of atrazine metabolites


The determination of unmodified compounds in urine may be useful as a qualitative confirmation of the nature of the compound that has generated the exposure. A 24-hour urine collection period started at the beginning of exposure is recommended for metabolite determination.

Recently, by using an enzyme-linked immunosorbent assay (ELISA test), a mercapturic acid conjugate of atrazine has been identified as its major urinary metabolite in exposed workers. This compound has been found in concentrations at least 10 times higher than those of any dealkylated products. A relationship between cumulative dermal and inhalation exposure and total amount of the mercapturic acid conjugate excreted over a 10-day period has been observed (Lucas et al. 1993).

Coumarin Derivatives

Biological indicators of effect.

Coumarin rodenticides inhibit the activity of the enzymes of the vitamin K cycle in the liver of mammals, humans included (figure 3), thus causing a dose-related reduction in the synthesis of the vitamin K-dependent clotting factors, namely factors II (prothrombin), VII, IX and X. Anticoagulant effects appear when plasma levels of clotting factors have dropped below approximately 20% of normal.

Figure 3. Vitamin K cycle


These vitamin K antagonists have been grouped into so-called “first generation” (e.g., warfarin) and “second generation” compounds (e.g., brodifacoum, difenacoum), the latter characterized by a very long biological half-life (100 to 200 days).

The determination of prothrombin time is widely used in monitoring exposure to coumarins. However, this test is sensitive only to a decrease in clotting factors to approximately 20% of normal plasma levels, and it is therefore not suitable for detecting early effects of exposure. For this purpose, the determination of the prothrombin concentration in plasma is recommended.

In the future, these tests might be replaced by the determination of coagulation factor precursors (PIVKA, proteins induced by vitamin K absence), which are substances detectable in blood only in the case of blockage of the vitamin K cycle by coumarins.

With conditions of prolonged exposure, the time of blood collection is not critical. In cases of acute overexposure, biological monitoring should be carried out for at least five days after the event, in view of the latency of the anticoagulant effect. To increase the sensitivity of these tests, the measurement of baseline values prior to exposure is recommended.

Biological indicators of exposure.

The measurement of unmodified coumarins in blood has been proposed as a test to monitor human exposure. However, experience in applying these indices is very limited mainly because the analytical techniques are much more complex (and less standardized) in comparison with those required to monitor the effects on the coagulation system (Chalermchaikit, Felice and Murphy 1993).

Phenoxy Herbicides

Biological indicators of exposure.

Phenoxy herbicides are scarcely biotransformed in mammals. In humans, more than 95% of a 2,4-dichlorophenoxyacetic acid (2,4-D) dose is excreted unchanged in urine within five days, and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) and 4-chloro-2-methylphenoxyacetic acid (MCPA) are also excreted mostly unchanged via urine within a few days after oral absorption. The measurement of unchanged compounds in urine has been applied in monitoring occupational exposure to these herbicides. In field studies, urinary levels of exposed workers have been found to range from 0.10 to 8 mg/l for 2,4-D, from 0.05 to 4.5 mg/l for 2,4,5-T and from below 0.1 mg/l to 15 mg/l for MCPA. A 24-hour period of urine collection starting at the end of exposure is recommended for the determination of unchanged compounds. Analytical methods for the measurement of phenoxy herbicides in urine have been reported by Draper (1982).

Quaternary Ammonium Compounds

Biological indicators of exposure.

Diquat and paraquat are herbicides scarcely biotransformed by the human organism. Because of their high water solubility, they are readily excreted unchanged in urine. Urine concentrations below the analytical detection limit (0.01 μg/l) have often been observed in paraquat-exposed workers, while in tropical countries concentrations up to 0.73 μg/l have been measured after improper paraquat handling. Urinary diquat concentrations lower than the analytical detection limit (0.047 μg/l) have been reported for subjects with dermal exposures from 0.17 to 1.82 μg/h and inhalation exposures lower than 0.01 μg/h. Ideally, a 24-hour urine sample collected at the end of exposure should be used for analysis. When this is impractical, a spot sample collected at the end of the workday can be used.

Determination of paraquat levels in serum is useful for prognostic purposes in cases of acute poisoning: patients with serum paraquat levels up to 0.1 mg/l twenty-four hours after ingestion are likely to survive.

The analytical methods for paraquat and diquat determination have been reviewed by Summers (1980).

Miscellaneous Pesticides

4,6-Dinitro-o-cresol (DNOC).

DNOC is a herbicide introduced in 1925, but its use has progressively declined because of its high toxicity to plants and to humans. Since blood DNOC concentrations correlate to a certain extent with the severity of adverse health effects, the measurement of unchanged DNOC in blood has been proposed for monitoring occupational exposures and for evaluating the clinical course of poisonings.

Pentachlorophenol.

Pentachlorophenol (PCP) is a wide-spectrum biocide with pesticidal action against weeds, insects and fungi. Measurements of unchanged PCP in blood or urine have been recommended as suitable indices for monitoring occupational exposures (Colosio et al. 1993), because these parameters are significantly correlated with PCP body burden. In workers with prolonged exposure to PCP, the time of blood collection is not critical, while urine spot samples should be collected on the morning after exposure.

A multiresidue method for the measurement of halogenated and nitrophenolic pesticides has been described by Shafik et al. (1973b).

Other tests proposed for the biological monitoring of pesticide exposure are listed in table 9.

Table 9. Other indices proposed in the literature for the biological monitoring of pesticide exposure

| Compound | Biological index in urine | Biological index in blood |
| --- | --- | --- |
| Bromophos | Bromophos | Bromophos |
| Captan | Tetrahydrophthalimide | |
| Carbofuran | 3-Hydroxycarbofuran | |
| Chlordimeform | 4-Chloro-o-toluidine derivatives | |
| Chlorobenzilate | p,p′-Dichlorobenzophenone | |
| Dichloropropene | Mercapturic acid metabolites | |
| Fenitrothion | p-Nitrocresol | |
| Ferbam | | Thiram |
| Fluazifop-butyl | Fluazifop | |
| Flufenoxuron | | Flufenoxuron |
| Glyphosate | Glyphosate | |
| Malathion | Malathion | Malathion |
| Organotin compounds | Tin | Tin |
| Trifenomorph | Morpholine, triphenylcarbinol | |
| Ziram | | Thiram |

Conclusions

Biological indicators for monitoring pesticide exposure have been applied in a number of experimental and field studies.

Some tests, such as those for cholinesterase in blood or for selected unmodified pesticides in urine or blood, have been validated by extensive experience. Biological exposure limits have been proposed for these tests (table 10). Other tests, in particular those for blood or urinary metabolites, suffer from greater limitations because of analytical difficulties or because of limitations in interpretation of results.

Table 10. Recommended biological limit values (as of 1996)

| Compound | Biological index | BEI1 | BAT2 | HBBL3 | BLV4 |
| --- | --- | --- | --- | --- | --- |
| ACHE inhibitors | ACHE in blood | 70% | 70% | 70% | |
| DNOC | DNOC in blood | | | 20 mg/l | |
| Lindane | Lindane in blood | | 0.02 mg/l | 0.02 mg/l | |
| Parathion | PNP in urine | 0.5 mg/l | 0.5 mg/l | | |
| Pentachlorophenol (PCP) | PCP in urine | 2 mg/l | 0.3 mg/l | | |
| | PCP in plasma | 5 mg/l | 1 mg/l | | |
| Dieldrin/Aldrin | Dieldrin in blood | | | | 100 μg/l |
| Endrin | Anti-12-hydroxy-endrin in urine | | | | 130 μg/g creatinine |
| DDT | DDT and DDE in serum | | | | 250 μg/l |
| Coumarins | Prothrombin time in plasma | | | | 10% above baseline |
| | Prothrombin concentration in plasma | | | | 60% of baseline |
| MCPA | MCPA in urine | | | | 0.5 mg/l |
| 2,4-D | 2,4-D in urine | | | | 0.5 mg/l |

1 Biological exposure indices (BEIs) are recommended by the American Conference of Governmental Industrial Hygienists (ACGIH 1995).
2 Biological tolerance values (BATs) are recommended by the German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (DFG 1992).
3 Health-based biological limits (HBBLs) are recommended by a WHO Study Group (WHO 1982a).
4 Biological limit values (BLVs) are proposed by a Study Group of the Scientific Committee on Pesticides of the International Commission on Occupational Health (Tordoir et al. 1994). Assessment of working conditions is called for if this value is exceeded.

This field is in rapid development and, given the enormous importance of using biological indicators to assess exposure to these substances, new tests will be continuously developed and validated.

Epidemiology

Epidemiology is recognized both as the science basic to preventive medicine and one that informs the public health policy process. Several operational definitions of epidemiology have been suggested. The simplest is that epidemiology is the study of the occurrence of disease or other health-related characteristics in human and in animal populations. Epidemiologists study not only the frequency of disease, but whether the frequency differs across groups of people; i.e., they study the cause-effect relationship between exposure and illness. Diseases do not occur at random; they have causes—quite often man-made causes—which are avoidable. Thus, many diseases could be prevented if the causes were known. The methods of epidemiology have been crucial to identifying many causative factors which, in turn, have led to health policies designed to prevent disease, injury and premature death.

What is the task of epidemiology and what are its strengths and weaknesses when definitions and concepts of epidemiology are applied to occupational health? This chapter addresses these questions and the ways in which occupational health hazards can be investigated using epidemiological techniques. This article introduces the ideas found in successive articles in this chapter.

Occupational Epidemiology

Occupational epidemiology has been defined as the study of the effects of workplace exposures on the frequency and distribution of diseases and injuries in the population. Thus it is an exposure-oriented discipline with links to both epidemiology and occupational health (Checkoway et al. 1989). As such, it uses methods similar to those employed by epidemiology in general.

The main objective of occupational epidemiology is prevention through identifying the consequences of workplace exposures on health. This underscores the preventive focus of occupational epidemiology. Indeed, all research in the field of occupational health and safety should serve preventive purposes. Hence, epidemiological knowledge can and should be readily implementable. While the public health interest always should be the primary concern of epidemiological research, vested interests can exercise influence, and care must be taken to minimize such influence in the formulation, conduct and/or interpretation of studies (Soskolne 1985; Soskolne 1989).

A second objective of occupational epidemiology is to use results from specific settings to reduce or to eliminate hazards in the population at large. Thus, apart from providing information on the health effects of exposures in the workplace, the results from occupational epidemiology studies also play a role in the estimation of risk associated with the same exposures but at the lower levels generally experienced by the general population. Environmental contamination from industrial processes and products usually would result in lower levels of exposure than those experienced in the workplace.

The levels of application of occupational epidemiology are:

  • surveillance to describe the occurrence of illness in different categories of workers and so provide early warning signals of unrecognized occupational hazards
  • generation and testing of an hypothesis that a given exposure may be harmful, and the quantification of an effect
  • evaluation of an intervention (for example, a preventive action such as reduction in exposure levels) by measuring changes in the health status of a population over time.

 

The causal role that occupational exposures can play in the development of disease, injury and premature death had been identified long ago and is part of the history of epidemiology. Reference has to be made to Bernardino Ramazzini, founder of occupational medicine and one of the first to revive and add to the Hippocratic tradition of the dependence of health on identifiable natural external factors. In the year 1700, he wrote in his “De Morbis Artificum Diatriba” (Ramazzini 1705; Saracci 1995):

The physician has to ask many questions of the patients. Hippocrates states in De Affectionibus: “When you face a sick person you should ask him from what he is suffering, for what reason, for how many days, what he eats, and what are his bowel movements. To all these questions one should be added: ‘What work does he do?’.”

This reawakening of clinical observation and of the attention to the circumstances surrounding the occurrence of disease, brought Ramazzini to identify and describe many of the occupational diseases that were later studied by occupational physicians and epidemiologists.

Using this approach, Pott was the first to report, in 1775 (Pott 1775), the possible connection between cancer and occupation (Clayson 1962). His observations on cancer of the scrotum among chimney-sweeps began with a description of the disease and continued:

The fate of these people seems singularly hard: in their early infancy, they are most frequently treated with great brutality, and almost starved with cold and hunger; they are thrust up narrow, and sometimes hot chimneys, where they are bruised, burned and almost suffocated; and when they get to puberty, become peculiarly liable to a most noisome, painful, and fatal disease.

Of this last circumstance there is not the least doubt, though perhaps it may not have been sufficiently attended to, to make it generally known. Other people have cancer of the same parts; and so have others, besides lead-workers, the Poitou colic, and the consequent paralysis; but it is nevertheless a disease to which they are peculiarly liable; and so are chimney-sweeps to cancer of the scrotum and testicles.

The disease, in these people, seems to derive its origin from a lodgement of soot in the rugae of the scrotum, and at first not to be a disease of the habit … but here the subjects are young, in general good health, at least at first; the disease brought on them by their occupation, and in all probability local; which last circumstance may, I think, be fairly presumed from its always seizing the same parts; all this makes it (at first) a very different case from a cancer which appears in an elderly man.

This first account of an occupational cancer still remains a model of lucidity. The nature of the disease, the occupation concerned and the probable causal agent are all clearly defined. An increased incidence of scrotal cancer among chimney-sweeps is noted although no quantitative data are given to substantiate the claim.

Another fifty years passed before Ayrton-Paris noticed, in 1822, the frequent development of scrotal cancers among the copper and tin smelters of Cornwall and surmised that arsenic fumes might be the causal agent (Ayrton-Paris 1822). Von Volkmann reported skin tumours in paraffin workers in Saxony in 1874, and shortly afterwards, in 1876, Bell suggested that shale oil was responsible for cutaneous cancer (Von Volkmann 1874; Bell 1876). Reports of the occupational origin of cancer then became relatively more frequent (Clayson 1962).

Among the early observations of occupational diseases was the increased occurrence of lung cancer among Schneeberg miners (Harting and Hesse 1879). It is noteworthy (and tragic) that a recent case study shows that the epidemic of lung cancer in Schneeberg is still a huge public health problem, more than a century after the first observation in 1879. An approach to identifying an “increase” in disease, and even quantifying it, had long been present in the history of occupational medicine. For example, as Axelson (1994) has pointed out, W.A. Guy in 1843 studied “pulmonary consumption” in letter press printers and found a higher risk among compositors than among pressmen; this was done by applying a design similar to the case-control approach (Lilienfeld and Lilienfeld 1979). Nevertheless, it was not until perhaps the early 1950s that modern occupational epidemiology and its methodology began to develop. Major contributions marking this development were the studies on bladder cancer in dye workers (Case and Hosker 1954) and lung cancer among gas workers (Doll 1952).

Issues in Occupational Epidemiology

The articles in this chapter introduce both the philosophy and the tools of epidemiological investigation. They focus on assessing the exposure experience of workers and on the diseases that arise in these populations. Issues in drawing valid conclusions about possible causative links in the pathway from exposures to hazardous substances to the development of diseases are addressed in this chapter.

Ascertainment of an individual’s work life exposure experience constitutes the core of occupational epidemiology. The informativeness of an epidemiological study depends, in the first instance, on the quality and extent of available exposure data. Secondly, the health effects (or, the diseases) of concern to the occupational epidemiologist must be accurately determinable among a well-defined and accessible group of workers. Finally, data about other potential influences on the disease of interest should be available to the epidemiologist so that any occupational exposure effects that are established from the study can be attributed to the occupational exposure per se rather than to other known causes of the disease in question. For example, in a group of workers who may work with a chemical that is suspected of causing lung cancer, some workers may also have a history of tobacco smoking, a further cause of lung cancer. In the latter situation, occupational epidemiologists must determine which exposure (or, which risk factor—the chemical or the tobacco, or, indeed, the two in combination) is responsible for any increase in the risk of lung cancer in the group of workers being studied.

Exposure assessment

If a study has access only to the fact that a worker was employed in a particular industry, then the results from such a study can link health effects only to that industry. Likewise, if knowledge about exposure exists for the occupations of the workers, conclusions can be drawn directly only in so far as occupations are concerned. Indirect inferences on chemical exposures can be made, but their reliability has to be evaluated situation by situation. If a study has access, however, to information about the department and/or job title of each worker, then conclusions can be drawn at that finer level of workplace experience. Where information about the actual substances with which a person works is known to the epidemiologist (in collaboration with an industrial hygienist), this would be the finest level of exposure information available in the absence of rarely available dosimetry. Furthermore, the findings from such studies can provide more useful information to industry for creating safer workplaces.

Epidemiology has been a sort of “black box” discipline until now, because it has studied the relationship between exposure and disease (the two extremes of the causal chain), without considering the intermediate mechanistic steps. This approach, despite its apparent lack of refinement, has been extremely useful: in fact, all the known causes of cancer in humans, for instance, have been discovered with the tools of epidemiology.

The epidemiological method is based on available records—questionnaires, job titles or other “proxies” of exposure; this makes the conduct of epidemiological studies and the interpretation of their findings relatively simple.

Limitations of the more crude approach to exposure assessment, however, have become evident in recent years, with epidemiologists facing more complex problems. Limiting our consideration to occupational cancer epidemiology, most well-known risk factors have been discovered because of high levels of exposure in the past; a limited number of exposures for each job; large populations of exposed workers; and a clear-cut correspondence between “proxy” information and chemical exposures (e.g., shoe workers and benzene, shipyards and asbestos, and so on). Nowadays, the situation is substantially different: levels of exposure are considerably lower in Western countries (this qualification should always be stressed); workers are exposed to many different chemicals and mixtures in the same job title (e.g., agricultural workers); homogeneous populations of exposed workers are more difficult to find and are usually small in number; and, the correspondence between “proxy” information and actual exposure grows progressively weaker. In this context, the tools of epidemiology have reduced sensitivity owing to the misclassification of exposure.

In addition, epidemiology has relied on “hard” end points, such as death in most cohort studies. However, workers might prefer to see something different from “body counts” when the potential health effects of occupational exposures are studied. Therefore, the use of more direct indicators of both exposure and early response would have some advantages. Biological markers may provide just such a tool.

Biological markers

The use of biological markers, such as lead levels in blood or liver function tests, is not new in occupational epidemiology. However, the utilization of molecular techniques in epidemiological studies has made possible the use of biomarkers for assessing target organ exposures, for determining susceptibility and for establishing early disease.

Potential uses of biomarkers in the context of occupational epidemiology are:

  • exposure assessment in cases in which traditional epidemiological tools are insufficient (particularly for low doses and low risks)
  • disentangling the causative role of single chemical agents or substances in multiple exposures or mixtures
  • estimation of the total burden of exposure to chemicals having the same mechanistic target
  • investigation of pathogenetic mechanisms
  • study of individual susceptibility (e.g., metabolic polymorphisms, DNA repair) (Vineis 1992)
  • more accurate classification of exposure and/or disease, thereby increasing statistical power.

 

Great enthusiasm has arisen in the scientific community about these uses but, as noted above, the methodological complexity of these new “molecular tools” should serve to caution against excessive optimism. Biomarkers of chemical exposures (such as DNA adducts) have several shortcomings:

  1. They usually reflect recent exposures and, therefore, are of limited use in case-control studies, whereas they require repeated samplings over prolonged periods for utilization in cohort investigations.
  2. While they can be highly specific and thus reduce exposure misclassification, findings often remain difficult to interpret.
  3. When complex chemical exposures are investigated (e.g., air pollution or environmental tobacco smoke) it is possible that the biomarker would reflect one particular component of the mixture, whereas the biological effect could be due to another.
  4. In many situations, it is not clear whether a biomarker reflects a relevant exposure, a correlate of the relevant exposure, individual susceptibility, or an early disease stage, thus limiting causal inference.
  5. The determination of most biomarkers requires an expensive test or an invasive procedure or both, thus creating constraints for adequate study size and statistical power.
  6. A biomarker of exposure is no more than a proxy for the real objective of an epidemiological investigation, which, as a rule, focuses on an avoidable environmental exposure (Trichopoulos 1995; Pearce et al. 1995).

 

Even more important than the methodological shortcomings is the consideration that molecular techniques might cause us to redirect our focus from identifying risks in the exogenous environment, to identifying high-risk individuals and then making personalized risk assessments by measuring phenotype, adduct load and acquired mutations. This would direct our focus, as noted by McMichael, to a form of clinical evaluation, rather than one of public health epidemiology. Focusing on individuals could distract us from the important public health goal of creating a less hazardous environment (McMichael 1994).

Two further important issues emerge regarding the use of biomarkers:

  1. The use of biomarkers in occupational epidemiology must be accompanied by a clear policy as far as informed consent is concerned. The worker may have several reasons to refuse cooperation. One very practical reason is that the identification of, say, an alteration in an early response marker such as sister chromatid exchange implies the possibility of discrimination by health and life insurers and by employers who might shun the worker because he or she may be more prone to disease. A second reason concerns genetic screening: since the distributions of genotypes and phenotypes vary according to ethnic group, occupational opportunities for minorities might be hampered by genetic screening. Third, doubts can be raised about the predictability of genetic tests: since the predictive value depends on the prevalence of the condition which the test aims to identify, if the latter is rare, the predictive value will be low and the practical use of the screening test will be questionable. Until now, none of the genetic screening tests have been judged applicable in the field (Ashford et al. 1990).
  2. Ethical principles must be applied prior to the use of biomarkers. These principles have been evaluated for biomarkers used for identifying individual susceptibility to disease by an interdisciplinary Working Group of the Technical Office of the European Trade Unions, with the support of the Commission of the European Communities (Van Damme et al. 1995); their report has reinforced the view that tests can be conducted only with the objective of preventing disease in a workforce. Among other considerations, use of tests must never:

 

  • serve as a means for “selection of the fittest”
  • be used to avoid implementing effective preventive measures, such as the identification and substitution of risk factors or improvements in conditions in the workplace
  • create, confirm or reinforce social inequality
  • create a gap between the ethical principles followed in the workplace and the ethical principles that must be upheld in a democratic society
  • oblige a person seeking employment to disclose personal details other than those strictly necessary for obtaining the job.

 

Finally, evidence is accumulating that the metabolic activation or inactivation of hazardous substances (and of carcinogens in particular) varies considerably in human populations, and is partly genetically determined. Furthermore, inter-individual variability in the susceptibility to carcinogens may be particularly important at low levels of occupational and environmental exposure (Vineis et al. 1994). Such findings may strongly affect regulatory decisions that focus the risk assessment process on the most susceptible (Vineis and Martone 1995).

Study design and validity

Hernberg’s article on epidemiological study designs and their applications in occupational medicine concentrates on the concept of “study base”, defined as the morbidity experience (in relation to some exposure) of a population while it is followed over time. Thus, the study base is not only a population (i.e., a group of people), but the experience of disease occurrence of this population during a certain period of time (Miettinen 1985; Hernberg 1992). If this unifying concept of a study base is adopted, then it is important to recognize that the different study designs (e.g., case-control and cohort designs) are simply different ways of “harvesting” information on both exposure and disease from the same study base; they are not diametrically different approaches.

The article on validity in study design by Sasco addresses definitions and the importance of confounding. Study investigators must always consider the possibility of confounding in occupational studies, and it can never be sufficiently stressed that the identification of potentially confounding variables is an integral part of any study design and analysis. Two aspects of confounding must be addressed in occupational epidemiology:

  1. Negative confounding should be explored: for example, some industrial populations have low exposure to lifestyle-associated risk factors because of a smoke-free workplace; glass blowers tend to smoke less than the general population.
  2. When confounding is considered, an estimate of its direction and potential impact ought to be made. This is particularly true when data to control confounding are scanty. For example, smoking is an important confounder in occupational epidemiology and should always be considered. Nevertheless, when data on smoking are not available (as is often the case in cohort studies), smoking is unlikely to explain a large excess of risk found in an occupational group. This is nicely described in a paper by Axelson (1978) and further discussed by Greenland (1987). When detailed data on both occupation and smoking have been available in the literature, confounding did not seem to heavily distort the estimates concerning the association between lung cancer and occupation (Vineis and Simonato 1991). Furthermore, suspected confounding does not always invalidate an association. Since investigators are also at risk of being led astray by other undetected observation and selection biases, these should receive as much emphasis as the issue of confounding in designing a study (Stellman 1987).

 

Time and time-related variables such as age at risk, calendar period, time since hire, time since first exposure, duration of exposure and their treatment at the analysis stage, are among the most complex methodological issues in occupational epidemiology. They are not covered in this chapter, but two relevant and recent methodological references are noted (Pearce 1992; Robins et al. 1992).

Statistics

The article on statistics by Biggeri and Braga, as well as the title of this chapter, indicate that statistical methods cannot be separated from epidemiological research. This is because: (a) a sound understanding of statistics may provide valuable insights into the proper design of an investigation, and (b) statistics and epidemiology share a common heritage, and the entire quantitative basis of epidemiology is grounded in the notion of probability (Clayton 1992; Clayton and Hills 1993). In many of the articles that follow, empirical evidence and proof of hypothesized causal relationships are evaluated using probabilistic arguments and appropriate study designs. For example, emphasis is placed on estimating the risk measure of interest, such as rates or relative risks, and on the construction of confidence intervals around these estimates, rather than on the execution of statistical tests of probability (Poole 1987; Gardner and Altman 1989; Greenland 1990). A brief introduction to statistical reasoning using the binomial distribution is provided. Statistics should be a companion to scientific reasoning, but it is worthless in the absence of properly designed and conducted research. Statisticians and epidemiologists are aware that the choice of methods determines what we observe and the extent to which we observe it. The thoughtful choice of design options is therefore of fundamental importance in order to ensure valid observations.
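
To make this estimation-oriented approach concrete, the sketch below (with entirely hypothetical cohort figures) computes a rate ratio and its approximate 95% confidence interval using the standard log-scale method:

```python
import math

# Hypothetical cohort data: cases and person-years in exposed and referent groups
cases_exp, py_exp = 30, 2500.0
cases_ref, py_ref = 20, 5000.0

rate_ratio = (cases_exp / py_exp) / (cases_ref / py_ref)

# Approximate standard error of the log rate ratio, then a 95% interval
se_log_rr = math.sqrt(1 / cases_exp + 1 / cases_ref)
lower = math.exp(math.log(rate_ratio) - 1.96 * se_log_rr)
upper = math.exp(math.log(rate_ratio) + 1.96 * se_log_rr)

print(f"Rate ratio = {rate_ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# Rate ratio = 3.00, 95% CI (1.70, 5.28)
```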

Ethics

The last article, by Vineis, addresses ethical issues in epidemiological research. Points to be mentioned in this introduction refer to epidemiology as a discipline that implies preventive action by definition. Specific ethical aspects with regard to the protection of workers and of the population at large require recognition that:

  • Epidemiological studies in occupational settings should in no way delay preventive measures in the workplace.
  • Occupational epidemiology does not refer to lifestyle factors, but to situations where usually little or no personal role is played in the choice of exposure. This implies a particular commitment to effective prevention and to the immediate transmission of information to workers and the public.
  • Research uncovers health hazards and provides the knowledge for preventive action. The ethical problems of not carrying out research, when it is feasible, should be considered.
  • Notification to workers of the results of epidemiological studies is both an ethical and methodological issue in risk communication. Research in evaluating the potential impact and effectiveness of notification should be given high priority (Schulte et al. 1993).

 

Training in occupational epidemiology

People with a diverse range of backgrounds can find their way into the specialization of occupational epidemiology. Medicine, nursing and statistics are some of the more likely backgrounds seen among those specializing in this area. In North America, about half of all trained epidemiologists have science backgrounds, while the other half will have proceeded along the doctor of medicine path. In countries outside North America, most specialists in occupational epidemiology will have advanced through the doctor of medicine ranks. In North America, those with medical training tend to be considered “content experts”, while those who are trained through the science route are deemed “methodological experts”. It is often advantageous for a content expert to team up with a methodological expert in order to design and conduct the best possible study.

Not only is knowledge of epidemiological methods, statistics and computers needed for the occupational epidemiology speciality, but so is knowledge of toxicology, industrial hygiene and disease registries (Merletti and Comba 1992). Because large studies can require linkage to disease registries, knowledge of sources of population data is useful. Knowledge of labour and corporate organization also is important. Theses at the masters level and dissertations at the doctoral level of training equip students with the knowledge needed for conducting large record-based and interview-based studies among workers.

Proportion of disease attributable to occupation

The proportion of disease which is attributable to occupational exposures either in a group of exposed workers or in the general population is covered at least with respect to cancer in another part of this Encyclopaedia. Here we should remember that if an estimate is computed, it should be for a specific disease (and a specific site in the case of cancer), a specific time period and a specific geographic area. Furthermore, it should be based on accurate measures of the proportion of exposed people and the degree of exposure. This implies that the proportion of disease attributable to occupation may vary from very low or zero in certain populations to very high in others located in industrial areas where, for example, as much as 40% of lung cancer can be attributable to occupational exposures (Vineis and Simonato 1991). Estimates which are not based on a detailed review of well-designed epidemiological studies can, at the very best, be considered as informed guesses, and are of limited value.
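
As an illustration of how such an estimate is computed once the proportion exposed and the relative risk are known, the sketch below applies Levin’s classical formula for the population attributable fraction; the input values are hypothetical:

```python
def population_attributable_fraction(p_exposed: float, relative_risk: float) -> float:
    """Levin's formula: PAF = p(RR - 1) / [1 + p(RR - 1)]."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: 20% of the population exposed, relative risk of 3.5
print(f"{population_attributable_fraction(0.20, 3.5):.1%}")  # 33.3%
```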

Transfer of hazardous industries

Most epidemiological research is carried out in the developed world, where regulation and control of known occupational hazards have reduced the risk of disease over the past several decades. At the same time, however, there has been a large transfer of hazardous industries to the developing world (Jeyaratnam 1994). Chemicals previously banned in the United States or Europe now are produced in developing countries. For example, asbestos milling has been transferred from the US to Mexico, and benzidine production from European countries to the former Yugoslavia and Korea (Simonato 1986; LaDou 1991; Pearce et al. 1994).

An indirect sign of the level of occupational risk and of the working conditions in the developing world is the epidemic of acute poisoning taking place in some of these countries. According to one assessment, there are about 20,000 deaths each year in the world from acute pesticide intoxication, but this is likely to be a substantial underestimate (Kogevinas et al. 1994). It has been estimated that 99% of all deaths from acute pesticide poisoning occur in developing countries, where only 20% of the world’s agrochemicals are used (Kogevinas et al. 1994). That is to say, even though epidemiological research seems to point to a reduction of occupational hazards, this might simply reflect the fact that most of this research is being conducted in the developed world. The occupational hazards may simply have been transferred to the developing world, and the total world burden of occupational exposure might have increased (Vineis et al. 1995).

Veterinary epidemiology

For obvious reasons, veterinary epidemiology is not directly pertinent to occupational health and occupational epidemiology. Nevertheless, clues to environmental and occupational causes of diseases may come from epidemiological studies on animals for several reasons:

  1. The life span of animals is relatively short compared with that of humans, and the latency period for diseases (e.g., most cancers) is shorter in animals than in humans. This implies that a disease that occurs in a wild or pet animal can serve as a sentinel event to alert us to the presence of a potential environmental toxicant or carcinogen for humans before it would have been identified by other means (Glickman 1993).
  2. Markers of exposures, such as haemoglobin adducts or levels of absorption and excretion of toxins, may be measured in wild and pet animals to assess environmental contamination from industrial sources (Blondin and Viau 1992; Reynolds et al. 1994; Hungerford et al. 1995).
  3. Animals are not exposed to some factors which may act as confounders in human studies, and investigations in animal populations therefore can be conducted without regard to these potential confounders. For example, a study of lung cancer in pet dogs might detect significant associations between the disease and exposure to asbestos (e.g., via owners’ asbestos-related occupations and proximity to industrial sources of asbestos). Clearly, such a study would remove the effect of active smoking as a confounder.

 

Veterinarians talk about an epidemiological revolution in veterinary medicine (Schwabe 1993) and textbooks about the discipline have appeared (Thrusfield 1986; Martin et al. 1987). Certainly, clues to environmental and occupational hazards have come from the joint efforts of human and animal epidemiologists. Among others, the effect of phenoxyherbicides in sheep and dogs (Newell et al. 1984; Hayes et al. 1990), of magnetic fields (Reif et al. 1995) and pesticides (notably flea preparations) contaminated with asbestos-like compounds in dogs (Glickman et al. 1983) are notable contributions.

Participatory research, communicating results and prevention

It is important to recognize that many epidemiological studies in the field of occupational health are initiated through the experience and concern of workers themselves (Olsen et al. 1991). Often, the workers—those historically and/or presently exposed—believed that something was wrong long before this was confirmed by research. Occupational epidemiology can be thought of as a way of “making sense” of the workers’ experience, of collecting and grouping the data in a systematic way, and allowing inferences to be made about the occupational causes of their ill health. Furthermore, the workers themselves, their representatives and the people in charge of workers’ health are the most appropriate persons to interpret the data which are collected. They therefore should always be active participants in any investigation conducted in the workplace. Only their direct involvement will guarantee that the workplace will remain safe after the researchers have left. The aim of any study is the use of the results in the prevention of disease and disability, and the success of this depends to a large extent on ensuring that the exposed participate in obtaining and interpreting the results of the study. The role and use of research findings in the litigation process as workers seek compensation for damages caused through workplace exposure is beyond the scope of this chapter. For some insight on this, the reader is referred elsewhere (Soskolne, Lilienfeld and Black 1994).

Participatory approaches to ensuring the conduct of occupational epidemiological research have in some places become standard practice in the form of steering committees established to oversee the research initiative from its inception to its completion. These committees are multipartite in their structure, including labour, science, management and/or government. With representatives of all stakeholder groups in the research process, the communication of results will be made more effective by virtue of their enhanced credibility because “one of their own” would have been overseeing the research and would be communicating the findings to his or her respective constituency. In this way, the greatest level of effective prevention is likely.

These and other participatory approaches in occupational health research are undertaken with the involvement of those who experience or are otherwise affected by the exposure-related problem of concern. This should be seen more commonly in all epidemiological research (Laurell et al. 1992). It is relevant to remember that while in epidemiological work the objective of analysis is estimation of the magnitude and distribution of risk, in participatory research, the preventability of the risk is also an objective (Loewenson and Biocca 1995). This complementarity of epidemiology and effective prevention is part of the message of this Encyclopaedia and of this chapter.

Maintaining public health relevance

Although new developments in epidemiological methodology, in data analysis and in exposure assessment and measurement (such as new molecular biological techniques) are welcome and important, they can also contribute to a reductionist approach focusing on individuals, rather than on populations. It has been said that:

… epidemiology has largely ceased to function as part of a multidisciplinary approach to understanding the causation of disease in populations and has become a set of generic methods for measuring associations of exposure and disease in individuals. … There is current neglect of social, economic, cultural, historical, political and other population factors as major causes of diseases. … Epidemiology must reintegrate itself into public health, and must rediscover the population perspective (Pearce 1996).

Occupational and environmental epidemiologists have an important role to play, not only in developing new epidemiological methods and applications for these methods, but also in ensuring that these methods are always integrated in the proper population perspective.

 


Anthropometry

 

This article is adapted from the 3rd edition of the Encyclopaedia of Occupational Health and Safety.

Anthropometry is a fundamental branch of physical anthropology, representing its quantitative aspect. A wide system of theories and practices is devoted to defining methods and variables that relate measurements to the aims of the various fields of application. In the fields of occupational health, safety and ergonomics, anthropometric systems are mainly concerned with body build, composition and constitution, and with the dimensions of the human body in relation to workplace dimensions, machines, the industrial environment and clothing.

Anthropometric variables

An anthropometric variable is a measurable characteristic of the body that can be defined, standardized and referred to a unit of measurement. Linear variables are generally defined by landmarks that can be precisely traced on the body. Landmarks are generally of two types: skeletal-anatomical landmarks, which may be found and traced by feeling bony prominences through the skin, and virtual landmarks, which are simply found as maximum or minimum distances using the branches of a caliper.

Anthropometric variables have both genetic and environmental components and may be used to define individual and population variability. The choice of variables must be related to the specific research purpose and standardized with other research in the same field, as the number of variables described in the literature is extremely large, up to 2,200 having been described for the human body.

Anthropometric variables are mainly linear measures, such as heights, distances from landmarks with subject standing or seated in standardized posture; diameters, such as distances between bilateral landmarks; lengths, such as distances between two different landmarks; curved measures, namely arcs, such as distances on the body surface between two landmarks; and girths, such as closed all-around measures on body surfaces, generally positioned at at least one landmark or at a defined height.

Other variables may require special methods and instruments. For instance, skinfold thickness is measured by means of special constant-pressure calipers. Volumes are measured by calculation or by immersion in water. To obtain full information on body surface characteristics, a computer matrix of surface points may be plotted using biostereometric techniques.

Instruments

Although sophisticated anthropometric instruments have been described and used with a view to automated data collection, basic anthropometric instruments are quite simple and easy to use. Much care must be taken to avoid common errors resulting from misinterpretation of landmarks and incorrect postures of subjects.

The standard anthropometric instrument is the anthropometer—a rigid rod 2 metres long, with two counter-reading scales, with which vertical body dimensions, such as heights of landmarks from floor or seat, and transverse dimensions, such as diameters, can be taken.

Commonly the rod can be split into 3 or 4 sections which fit into one another. A sliding branch with a straight or curved claw makes it possible to measure distances from the floor for heights, or from a fixed branch for diameters. More elaborate anthropometers have a single scale for heights and diameters to avoid scale errors, or are fitted with digital mechanical or electronic reading devices (figure 1).

Figure 1. An anthropometer

ERG070F1

A stadiometer is a fixed anthropometer, generally used only for stature and frequently associated with a weight beam scale.

For transverse diameters a series of calipers may be used: the pelvimeter for measures up to 600 mm and the cephalometer up to 300 mm. The latter is particularly suitable for head measurements when used together with a sliding compass (figure 2).

Figure 2. A cephalometer together with a sliding compass

ERG070F2

The foot-board is used for measuring the feet, and the head-board provides Cartesian co-ordinates of the head when oriented in the “Frankfort plane” (a horizontal plane passing through the porion and orbitale landmarks of the head). The hand may be measured with a caliper, or with a special device composed of five sliding rulers.

Skinfold thickness is measured with a constant-pressure skinfold caliper, generally exerting a pressure of 9.81 × 10⁴ Pa (the pressure imposed by a weight of 10 g on an area of 1 mm²).

For arcs and girths a narrow, flexible steel tape with flat section is used. Self-straightening steel tapes must be avoided.

Systems of variables

A system of anthropometric variables is a coherent set of body measurements chosen to solve specific problems.

In the field of ergonomics and safety, the main problem is fitting equipment and workspace to humans and tailoring clothes to the right size.

Equipment and workspace require mainly linear measures of limbs and body segments that can easily be calculated from landmark heights and diameters, whereas tailoring sizes are based mainly on arcs, girths and flexible tape lengths. Both systems may be combined according to need.

In any case, it is absolutely necessary to have a precise space reference for each measurement. The landmarks must, therefore, be linked by heights and diameters and every arc or girth must have a defined landmark reference. Heights and slopes must be indicated.

In a particular survey, the number of variables has to be limited to the minimum so as to avoid undue stress on the subject and operator.

A basic set of variables for workspace has been reduced to 33 measured variables (figure 3) plus 20 derived by a simple calculation. For a general-purpose military survey, Hertzberg and co-workers use 146 variables. For clothes and general biological purposes the Italian Fashion Board (Ente Italiano della Moda) uses a set of 32 general purpose variables and 28 technical ones. The German norm (DIN 61 516) of control body dimensions for clothes includes 12 variables. The recommendation of the International Organization for Standardization (ISO) for anthropometry includes a core list of 36 variables (see table 1). The International Data on Anthropometry tables published by the ILO list 19 body dimensions for the populations of 20 different regions of the world (Jürgens, Aune and Pieper 1990).

Figure 3. Basic set of anthropometric variables

ERG070F3


Table 1. Basic anthropometric core list

 

1.1            Forward reach (to hand grip with subject standing upright against a wall)

1.2            Stature (vertical distance from floor to head vertex)

1.3            Eye height (from floor to inner eye corner)

1.4            Shoulder height (from floor to acromion)

1.5            Elbow height (from floor to radial depression of elbow)

1.6            Crotch height (from floor to pubic bone)

1.7            Finger tip height (from floor to grip axis of fist)

1.8            Shoulder breadth (biacromial diameter)

1.9            Hip breadth, standing (the maximum distance across hips)

2.1            Sitting height (from seat to head vertex)

2.2            Eye height, sitting (from seat to inner corner of the eye)

2.3            Shoulder height, sitting (from seat to acromion)

2.4            Elbow height, sitting (from seat to lowest point of bent elbow)

2.5            Knee height (from foot-rest to the upper surface of thigh)

2.6            Lower leg length (height of sitting surface)

2.7            Forearm-hand length (from back of bent elbow to grip axis)

2.8            Body depth, sitting (seat depth)

2.9            Buttock-knee length (from knee-cap to rearmost point of buttock)

2.10            Elbow to elbow breadth (distance between lateral surface of the elbows)

2.11            Hip breadth, sitting (seat breadth)

3.1            Index finger breadth, proximal (at the joint between medial and proximal phalanges)

3.2            Index finger breadth, distal (at the joint between distal and medial phalanges)

3.3            Index finger length

3.4            Hand length (from tip of middle finger to styloid)

3.5            Handbreadth (at metacarpals)

3.6            Wrist circumference

4.1            Foot breadth

4.2            Foot length

5.1            Head circumference (at glabella)

5.2            Sagittal arc (from glabella to inion)

5.3            Head length (from glabella to opisthocranion)

5.4            Head breadth (maximum above the ear)

5.5            Bitragion arc (over the head between the ears)

6.1            Waist circumference (at the umbilicus)

6.2            Tibial height (from the floor to the highest point on the antero-medial margin of the glenoid of the tibia)

6.3            Cervical height sitting (to the tip of the spinous process of the 7th cervical vertebra).

Source: Adapted from ISO/DP 7250 (1980).


 

Precision and errors

The precision of living body dimensions must be considered in a stochastic manner because the human body is highly unpredictable, both as a static and as a dynamic structure.

A single individual may grow or change in muscularity and fatness; undergo skeletal changes as a consequence of aging, disease or accidents; or modify behavior or posture. Different subjects differ by proportions, not only by general dimensions. Tall stature subjects are not mere enlargements of short ones; constitutional types and somatotypes probably vary more than general dimensions.

The use of mannequins, particularly those representing the standard 5th, 50th and 95th percentiles, for fitting trials may be highly misleading if variations in body proportions are not taken into consideration.

Errors result from misinterpretation of landmarks and incorrect use of instruments (personal error), imprecise or inexact instruments (instrumental error), or changes in subject posture (subject error—this latter may be due to difficulties of communication if the cultural or linguistic background of the subject differs from that of the operator).

Statistical treatment

Anthropometric data must be treated by statistical procedures, mainly descriptive and inferential methods applying univariate (mean, mode, percentiles, histograms, analysis of variance, etc.), bivariate (correlation, regression) and multivariate (multiple correlation and regression, factor analysis, etc.) techniques. Various graphical methods based on statistical applications have been devised to classify human types (anthropometrograms, morphosomatograms).
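
As a minimal illustration of the univariate treatment, the following sketch computes the mean, standard deviation and the design-relevant 5th, 50th and 95th percentiles from a small set of hypothetical stature measurements:

```python
import statistics

# Hypothetical stature measurements (mm) from an anthropometric survey
stature = [1623, 1655, 1678, 1681, 1690, 1702, 1711, 1725, 1748, 1779]

mean = statistics.mean(stature)
sd = statistics.stdev(stature)

# quantiles(n=20) yields 19 cut points: the 1st, 10th and 19th are the
# 5th, 50th and 95th percentiles
q = statistics.quantiles(stature, n=20)
p5, p50, p95 = q[0], q[9], q[18]

print(f"mean={mean:.0f} mm, sd={sd:.0f} mm, P5={p5:.0f}, P50={p50:.0f}, P95={p95:.0f}")
```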

Sampling and survey

As anthropometric data cannot be collected for the whole population (except in the rare case of a particularly small population), sampling is generally necessary. A basically random sample should be the starting point of any anthropometric survey. To keep the number of measured subjects to a reasonable level it is generally necessary to have recourse to multiple-stage stratified sampling. This allows the most homogeneous subdivision of the population into a number of classes or strata.

The population may be subdivided by sex, age group, geographical area, social variables, physical activity and so on.
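
A minimal sketch of proportionate stratified sampling along these lines, using a hypothetical roster stratified by sex and age group:

```python
import random

# Hypothetical employee roster: (id, sex, age_group)
roster = [(i, random.choice("MF"), random.choice(["<30", "30-49", "50+"]))
          for i in range(1000)]

# Group the roster into strata keyed by (sex, age_group)
strata = {}
for person in roster:
    strata.setdefault(person[1:], []).append(person)

# Proportionate allocation: sample the same fraction from every stratum
fraction = 0.10
sample = []
for members in strata.values():
    k = max(1, round(fraction * len(members)))
    sample.extend(random.sample(members, k))

print(f"{len(strata)} strata, {len(sample)} subjects sampled")
```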

Survey forms have to be designed keeping in mind both measuring procedure and data treatment. An accurate ergonomic study of the measuring procedure should be made in order to reduce the operator’s fatigue and possible errors. For this reason, variables must be grouped according to the instrument used and ordered in sequence so as to reduce the number of body flexions the operator has to make.

To reduce the effect of personal error, the survey should be carried out by one operator. If more than one operator has to be used, training is necessary to assure the replicability of measurements.

Population anthropometrics

Disregarding the highly criticized concept of “race”, human populations are nevertheless highly variable in the size of individuals and in size distribution. Generally human populations are not strictly Mendelian; they are commonly the result of admixture. Sometimes two or more populations, with different origins and adaptations, live together in the same area without interbreeding. This complicates the theoretical distribution of traits. From the anthropometric viewpoint, the sexes are different populations. Populations of employees may not correspond exactly to the biological population of the same area as a consequence of possible aptitudinal selection or self-selection due to job choice.

Populations from different areas may differ as a consequence of different adaptation conditions or biological and genetic structures.

When close fitting is important, a survey of a random sample is necessary.

Fitting trials and regulation

The adaptation of workspace or equipment to the user may depend not only on bodily dimensions, but also on such variables as tolerance of discomfort, the nature of activities, clothing, tools and environmental conditions. A combination of a checklist of relevant factors, a simulator and a series of fitting trials, using a sample of subjects chosen to represent the range of body sizes of the expected user population, can be used.

The aim is to find tolerance ranges for all subjects. If the ranges overlap it is possible to select a narrower final range that is not outside the tolerance limits of any subject. If there is no overlap it will be necessary to make the structure adjustable or to provide it in different sizes. If more than two dimensions are adjustable a subject may not be able to decide which of the possible adjustments will fit him best.
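
The overlap test described above amounts to a simple interval intersection, as in this sketch with hypothetical seat-height tolerance ranges:

```python
# Hypothetical tolerance ranges (min, max acceptable seat height, mm)
# obtained from fitting trials with five subjects
ranges = [(400, 520), (430, 540), (415, 500), (440, 530), (425, 510)]

low = max(r[0] for r in ranges)   # highest lower bound
high = min(r[1] for r in ranges)  # lowest upper bound

if low <= high:
    print(f"A fixed setting between {low} and {high} mm suits all subjects")
else:
    print("No common range: make the structure adjustable or provide sizes")
```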

Adjustability can be a complicated matter, especially when uncomfortable postures result in fatigue. Precise indications must, therefore, be given to the user, who frequently knows little or nothing about his own anthropometric characteristics. In general, an accurate design should reduce the need for adjustment to a minimum. In any case, it should constantly be kept in mind that what is involved is anthropometrics, not merely engineering.

Dynamic anthropometrics

Static anthropometrics may provide extensive information about movement if an adequate set of variables has been chosen. Nevertheless, when movements are complicated and a close fit with the industrial environment is desirable, as in most user-machine and human-vehicle interfaces, an exact survey of postures and movements is necessary. This may be done with suitable mock-ups that allow the tracing of reach lines, or by photography. In this case, a camera fitted with a telephoto lens and an anthropometric rod, placed in the sagittal plane of the subject, allows standardized photographs with little distortion of the image. Small labels on the subjects’ articulations make the exact tracing of movements possible.

Another way of studying movements is to formalize postural changes according to a series of horizontal and vertical planes passing through the articulations. Again, using computerized human models with computer-aided design (CAD) systems is a feasible way to include dynamic anthropometrics in ergonomic workplace design.

 


Introduction and Concepts

Mechanistic toxicology is the study of how chemical or physical agents interact with living organisms to cause toxicity. Knowledge of the mechanism of toxicity of a substance enhances the ability to prevent toxicity and design more desirable chemicals; it constitutes the basis for therapy upon overexposure, and frequently enables a further understanding of fundamental biological processes. For the purposes of this Encyclopaedia, the emphasis will be placed on the use of animal studies to predict human toxicity. Different areas of toxicology include mechanistic, descriptive, regulatory, forensic and environmental toxicology (Klaassen, Amdur and Doull 1991). All of these benefit from understanding the fundamental mechanisms of toxicity.

Why Understand Mechanisms of Toxicity?

Understanding the mechanism by which a substance causes toxicity enhances different areas of toxicology in different ways. Mechanistic understanding helps the governmental regulator to establish legally binding safe limits for human exposure. It helps toxicologists in recommending courses of action regarding clean-up or remediation of contaminated sites and, along with physical and chemical properties of the substance or mixture, can be used to select the degree of protective equipment required. Mechanistic knowledge is also useful in forming the basis for therapy and the design of new drugs for treatment of human disease. For the forensic toxicologist the mechanism of toxicity often provides insight as to how a chemical or physical agent can cause death or incapacitation.

If the mechanism of toxicity is understood, descriptive toxicology becomes useful in predicting the toxic effects of related chemicals. It is important to understand, however, that a lack of mechanistic information does not deter health professionals from protecting human health. Prudent decisions based on animal studies and human experience are used to establish safe exposure levels. Traditionally, a margin of safety was established by using the “no adverse effect level” or the “lowest adverse effect level” from animal studies (using repeated-exposure designs) and dividing that level by a factor of 100 for occupational exposure or 1,000 for other human environmental exposure. The success of this process is evident from the few incidents of adverse health effects attributed to chemical exposures in workplaces where appropriate exposure limits had been set and adhered to. In addition, the human lifespan continues to increase, as does the quality of life. Overall, the use of toxicity data has led to effective regulatory and voluntary control. Detailed knowledge of toxic mechanisms will enhance the predictability of the newer risk models currently being developed and will result in continuous improvement.
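
The traditional margin-of-safety calculation described above is simple arithmetic; here is a sketch using a hypothetical animal no-adverse-effect level:

```python
def safe_exposure_level(no_adverse_effect_level: float, occupational: bool) -> float:
    """Divide the animal no-adverse-effect level by the traditional factor:
    100 for occupational exposure, 1,000 for other environmental exposure."""
    factor = 100.0 if occupational else 1000.0
    return no_adverse_effect_level / factor

# Hypothetical no-adverse-effect level of 50 mg/kg/day from a repeated-exposure study
print(safe_exposure_level(50.0, occupational=True))   # 0.5 mg/kg/day
print(safe_exposure_level(50.0, occupational=False))  # 0.05 mg/kg/day
```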

Understanding environmental mechanisms is complex and presumes a knowledge of ecosystem disruption and homeostasis (balance). While not discussed in this article, an enhanced understanding of toxic mechanisms and their ultimate consequences in an ecosystem would help scientists to make prudent decisions regarding the handling of municipal and industrial waste material. Waste management is a growing area of research and will continue to be very important in the future.

Techniques for Studying Mechanisms of Toxicity

The majority of mechanistic studies start with a descriptive toxicological study in animals or clinical observations in humans. Ideally, animal studies include careful behavioural and clinical observations, careful biochemical examination of elements of the blood and urine for signs of adverse function of major biological systems in the body, and a post-mortem evaluation of all organ systems by microscopic examination to check for injury (see OECD test guidelines; EC directives on chemical evaluation; US EPA test rules; Japan chemicals regulations). This is analogous to a thorough human physical examination that would take place in a hospital over a two- to three-day time period except for the post-mortem examination.

Understanding mechanisms of toxicity is the art and science of observation, creativity in the selection of techniques to test various hypotheses, and innovative integration of signs and symptoms into a causal relationship. Mechanistic studies start with exposure, follow the time-related distribution and fate in the body (pharmacokinetics), and measure the resulting toxic effect at some level of the system and at some dose level. Different substances can act at different levels of the biological system in causing toxicity.

Exposure

The route of exposure in mechanistic studies is usually the same as for human exposure. Route is important because there can be effects that occur locally at the site of exposure in addition to systemic effects after the chemical has been absorbed into the blood and distributed throughout the body. A simple yet cogent example of a local effect would be irritation and eventual corrosion of the skin following application of strong acid or alkaline solutions designed for cleaning hard surfaces. Similarly, irritation and cellular death can occur in cells lining the nose and/or lungs following exposure to irritant vapours or gases such as oxides of nitrogen or ozone. (Both are constituents of air pollution, or smog). Following absorption of a chemical into blood through the skin, lungs or gastrointestinal tract, the concentration in any organ or tissue is controlled by many factors which determine the pharmacokinetics of the chemical in the body. The body has the ability to activate as well as detoxify various chemicals as noted below.

Role of Pharmacokinetics in Toxicity

Pharmacokinetics describes the time relationships for chemical absorption, distribution, metabolism (biochemical alterations in the body) and elimination or excretion from the body. Relative to mechanisms of toxicity, these pharmacokinetic variables can be very important, and in some instances determine whether toxicity will or will not occur. For instance, if a material is not absorbed in a sufficient amount, systemic toxicity (inside the body) will not occur. Conversely, a highly reactive chemical that is detoxified quickly (in seconds or minutes) by digestive or liver enzymes may not have the time to cause toxicity. Some polycyclic halogenated substances and mixtures, as well as certain metals like lead, would not cause significant toxicity if excretion were rapid; it is their accumulation to sufficiently high levels that determines their toxicity, since excretion is not rapid (sometimes measured in years). Fortunately, most chemicals do not have such long retention in the body. Accumulation of an innocuous material still would not induce toxicity. The rate of elimination from the body and detoxication is frequently expressed as the half-life of the chemical: the time required for 50% of the chemical to be excreted or altered to a non-toxic form.
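
A constant half-life implies first-order (exponential) elimination. Under that assumption, the fraction of a chemical remaining after a given time can be sketched as follows (the 6-hour half-life is hypothetical):

```python
import math

def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """First-order elimination: C(t)/C0 = exp(-k*t), with k = ln(2)/half-life."""
    k = math.log(2) / half_life_hours
    return math.exp(-k * hours_elapsed)

for t in (6, 12, 24):
    print(f"after {t} h: {fraction_remaining(t, 6.0):.1%} remains")
# after 6 h: 50.0%; after 12 h: 25.0%; after 24 h: 6.2%
```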

However, if a chemical accumulates in a particular cell or organ, that may signal a reason to further examine its potential toxicity in that organ. More recently, mathematical models have been developed to extrapolate pharmacokinetic variables from animals to humans. These pharmacokinetic models are extremely useful in generating hypotheses and testing whether the experimental animal may be a good representation for humans. Numerous chapters and texts have been written on this subject (Gehring et al. 1976; Reitz et al. 1987; Nolan et al. 1995). A simplified example of a physiological model is depicted in figure 1.

Figure 1. A simplified pharmacokinetic model

TOX210F1

Different Levels and Systems Can Be Adversely Affected

Toxicity can be described at different biological levels. Injury can be evaluated in the whole person (or animal), the organ system, the cell or the molecule. Organ systems include the immune, respiratory, cardiovascular, renal, endocrine, digestive, musculoskeletal, blood, reproductive and central nervous systems. Some key organs include the liver, kidney, lung, brain, skin, eyes, heart, testes or ovaries, and other major organs. At the cellular/biochemical level, adverse effects include interference with normal protein function, endocrine receptor function, metabolic energy inhibition, or xenobiotic (foreign substance) enzyme inhibition or induction. Adverse effects at the molecular level include alteration of the normal function of DNA-RNA transcription, of specific cytoplasmic and nuclear receptor binding, and of genes or gene products. Ultimately, dysfunction in a major organ system is likely caused by a molecular alteration in a particular target cell within that organ. However, it is not always possible to trace a mechanism back to a molecular origin of causation, nor is it necessary. Intervention and therapy can be designed without a complete understanding of the molecular target. However, knowledge about the specific mechanism of toxicity increases the predictive value and accuracy of extrapolation to other chemicals. Figure 2 is a diagrammatic representation of the various levels where interference with normal physiological processes can be detected. The arrows indicate that the consequences to an individual can be determined from the top down (exposure, pharmacokinetics to system/organ toxicity) or from the bottom up (molecular change, cellular/biochemical effect to system/organ toxicity).

Figure 2. Representation of mechanisms of toxicity

TOX210F2

Examples of Mechanisms of Toxicity

Mechanisms of toxicity can be straightforward or very complex. Frequently, there is a difference among the type of toxicity, the mechanism of toxicity, and the level of effect, related to whether the adverse effects are due to a single, acute high dose (like an accidental poisoning), or a lower-dose repeated exposure (from occupational or environmental exposure). Classically, for testing purposes, an acute, single high dose is given by direct intubation into the stomach of a rodent or exposure to an atmosphere of a gas or vapour for two to four hours, whichever best resembles the human exposure. The animals are observed over a two-week period following exposure and then the major external and internal organs are examined for injury. Repeated-dose testing ranges from months to years. For rodent species, two years is considered a chronic (lifetime) study sufficient to evaluate toxicity and carcinogenicity, whereas for non-human primates, two years would be considered a subchronic (less than lifetime) study to evaluate repeated dose toxicity. Following exposure a complete examination of all tissues, organs and fluids is conducted to determine any adverse effects.

Acute Toxicity Mechanisms

The following examples are specific to high-dose, acute effects which can lead to death or severe incapacitation. However, in some cases, intervention will result in transient and fully reversible effects. The dose or severity of exposure will determine the result.

Simple asphyxiants. The mechanism of toxicity for inert gases and some other non-reactive substances is lack of oxygen (anoxia). These chemicals, which cause deprivation of oxygen to the central nervous system (CNS), are termed simple asphyxiants. If a person enters a closed space that contains nitrogen without sufficient oxygen, immediate oxygen depletion occurs in the brain and leads to unconsciousness and eventual death if the person is not rapidly removed. In extreme cases (near zero oxygen) unconsciousness can occur in a few seconds. Rescue depends on rapid removal to an oxygenated environment. Survival with irreversible brain damage can occur from delayed rescue, due to the death of neurons, which cannot regenerate.

Chemical asphyxiants. Carbon monoxide (CO) competes with oxygen for binding to haemoglobin (in red blood cells) and therefore deprives tissues of oxygen for energy metabolism; cellular death can result. Intervention includes removal from the source of CO and treatment with oxygen. The direct use of oxygen is based on the toxic action of CO. Another potent chemical asphyxiant is cyanide. The cyanide ion interferes with cellular metabolism and utilization of oxygen for energy. Treatment with sodium nitrite causes a change in haemoglobin in red blood cells to methaemoglobin. Methaemoglobin has a greater binding affinity to the cyanide ion than does the cellular target of cyanide. Consequently, the methaemoglobin binds the cyanide and keeps the cyanide away from the target cells. This forms the basis for antidotal therapy.

Central nervous system (CNS) depressants. Acute toxicity is characterized by sedation or unconsciousness for a number of materials like solvents which are not reactive or which are transformed to reactive intermediates. It is hypothesized that sedation/anaesthesia is due to an interaction of the solvent with the membranes of cells in the CNS, which impairs their ability to transmit electrical and chemical signals. While sedation may seem a mild form of toxicity and was the basis for development of the early anaesthetics, “the dose still makes the poison”. If sufficient dose is administered by ingestion or inhalation the animal can die due to respiratory arrest. If anaesthetic death does not occur, this type of toxicity is usually readily reversible when the subject is removed from the environment or the chemical is redistributed or eliminated from the body.

Skin effects. Adverse effects to the skin can range from irritation to corrosion, depending on the substance encountered. Strong acids and alkaline solutions are incompatible with living tissue and are corrosive, causing chemical burns and possible scarring. Scarring is due to death of the dermal, deep skin cells responsible for regeneration. Lower concentrations may just cause irritation of the first layer of skin.

Another specific toxic mechanism of skin is that of chemical sensitization. As an example, sensitization occurs when 2,4-dinitrochlorobenzene binds with natural proteins in the skin and the immune system recognizes the altered protein-bound complex as a foreign material. In responding to this foreign material, the immune system activates special cells to eliminate the foreign substance by release of mediators (cytokines) which cause a rash or dermatitis (see “Immunotoxicology”). This is the same immune system reaction that occurs upon exposure to poison ivy. Immune sensitization is very specific to the particular chemical and takes at least two exposures before a response is elicited. The first exposure sensitizes (sets up the cells to recognize the chemical), and subsequent exposures trigger the immune system response. Removal from contact and symptomatic therapy with steroid-containing anti-inflammatory creams are usually effective in treating sensitized individuals. In serious or refractory cases a systemically acting immunosuppressant like prednisone is used in conjunction with topical treatment.

Lung sensitization. An immune sensitization response is elicited by toluene diisocyanate (TDI), but the target site is the lungs. TDI over-exposure in susceptible individuals causes lung oedema (fluid build-up), bronchial constriction and impaired breathing. This is a serious condition and requires removing the individual from potential subsequent exposures. Treatment is primarily symptomatic. Skin and lung sensitization follow a dose response. Exceeding the level set for occupational exposure can cause adverse effects.

Eye effects. Injury to the eye ranges from reddening of the outer layer (swimming-pool redness) to cataract formation to damage to the iris (the coloured part of the eye). Eye irritation tests are conducted when it is believed serious injury will not occur. Many of the mechanisms causing skin corrosion can also cause injury to the eyes. Materials corrosive to the skin, like strong acids (pH less than 2) and alkali (pH greater than 11.5), are not tested in the eyes of animals because most will cause corrosion and blindness due to a mechanism similar to that which causes skin corrosion. In addition, surface active agents like detergents and surfactants can cause eye injury ranging from irritation to corrosion. A group of materials that requires caution is the positively charged (cationic) surfactants, which can cause burns, permanent opacity of the cornea and vascularization (formation of blood vessels). Another chemical, dinitrophenol, has a specific effect of cataract formation. This appears to be related to the concentration of this chemical in the eye, which is an example of pharmacokinetic distributional specificity.

While the listing above is far from exhaustive, it is designed to give the reader an appreciation for various acute toxicity mechanisms.

Subchronic and Chronic Toxicity Mechanisms

When given as a single high dose, some chemicals do not have the same mechanism of toxicity as when given repeatedly as a lower but still toxic dose. When a single high dose is given, there is always the possibility of exceeding the person’s ability to detoxify or excrete the chemical, and this can lead to a different toxic response than when lower repetitive doses are given. Alcohol is a good example. High doses of alcohol lead to primary central nervous system effects, while lower repetitive doses result in liver injury.

Anticholinesterase inhibition. Most organophosphate pesticides, for example, have little mammalian toxicity until they are metabolically activated, primarily in the liver. The primary mechanism of action of organophosphates is the inhibition of acetylcholinesterase (AChE) in the brain and peripheral nervous system. AChE is the normal enzyme that terminates the stimulation of the neurotransmitter acetylcholine. Slight inhibition of AChE over an extended period has not been associated with adverse effects. At high levels of exposure, inability to terminate this neuronal stimulation results in overstimulation of the cholinergic nervous system. Cholinergic overstimulation ultimately results in a host of symptoms, including respiratory arrest, followed by death if not treated. The primary treatment is the administration of atropine, which blocks the effects of acetylcholine, and the administration of pralidoxime chloride, which reactivates the inhibited AChE. Therefore, both the cause and the treatment of organophosphate toxicity are addressed by understanding the biochemical basis of toxicity.

Metabolic activation. Many chemicals, including carbon tetrachloride, chloroform, acetylaminofluorene, nitrosamines, and paraquat are metabolically activated to free radicals or other reactive intermediates which inhibit and interfere with normal cellular function. At high levels of exposure this results in cell death (see “Cellular injury and cellular death”). While the specific interactions and cellular targets remain unknown, the organ systems which have the capability to activate these chemicals, like the liver, kidney and lung, are all potential targets for injury. Specifically, particular cells within an organ have a greater or lesser capacity to activate or detoxify these intermediates, and this capacity determines the intracellular susceptibility within an organ. Metabolism is one reason why an understanding of pharmacokinetics, which describes these types of transformations and the distribution and elimination of these intermediates, is important in recognizing the mechanism of action of these chemicals.

Cancer mechanisms. Cancer is a multiplicity of diseases, and while the understanding of certain types of cancer is increasing rapidly due to the many molecular biological techniques that have been developed since 1980, there is still much to learn. However, it is clear that cancer development is a multi-stage process, and critical genes are key to different types of cancer. Alterations in DNA (somatic mutations) in a number of these critical genes can cause increased susceptibility or cancerous lesions (see “Genetic toxicology”). Exposure to natural chemicals (in cooked foods like beef and fish) or synthetic chemicals (like benzidine, used as a dye) or physical agents (ultraviolet light from the sun, radon from soil, gamma radiation from medical procedures or industrial activity) are all contributors to somatic gene mutations. However, there are natural and synthetic substances (such as anti-oxidants) and DNA repair processes which are protective and maintain homeostasis. It is clear that genetics is an important factor in cancer, since genetic disease syndromes such as xeroderma pigmentosum, where there is a lack of normal DNA repair, dramatically increase susceptibility to skin cancer from exposure to ultraviolet light from the sun.

Reproductive mechanisms. Similar to cancer, many mechanisms of reproductive and/or developmental toxicity are known, but much remains to be learned. It is known that certain viruses (such as rubella), bacterial infections and drugs (such as thalidomide and vitamin A) will adversely affect development. Recently, work by Khera (1991), reviewed by Carney (1994), shows good evidence that the abnormal developmental effects seen in animal tests with ethylene glycol are attributable to acidic maternal metabolites: ethylene glycol is metabolized to acid metabolites, including glycolic and oxalic acid, and the subsequent effects on the placenta and foetus appear to be due to this metabolic toxication process.

Conclusion

The intent of this article is to give a perspective on several known mechanisms of toxicity and the need for future study. It is important to understand that mechanistic knowledge is not absolutely necessary to protect human or environmental health. This knowledge will enhance the professional’s ability to better predict and manage toxicity. The actual techniques used in elucidating any particular mechanism depend upon the collective knowledge of the scientists and the thinking of those who make decisions regarding human health.

 


Exposure Assessment

The assessment of exposures is a critical step in identifying workplace hazards through epidemiological investigation. The exposure assessment process may be subdivided into a series of activities. These include:

  1. compiling an inventory of potentially toxic agents and mixtures present in the targeted work environment
  2. determining how exposures occur and how likely they are to vary among employees
  3. selecting appropriate measures or indices for quantifying exposures
  4. collecting data that will enable study participants to be assigned qualitative or quantitative exposure values for each measure.

 

Whenever possible, these activities should be carried out under the guidance of a qualified industrial hygienist.

Occupational health studies are often criticized because of inadequacies in the assessment of exposures. Inadequacies may lead to differential or non-differential misclassification of exposure and subsequent bias or loss of precision in the exposure-effect analyses. Efforts to improve the situation are evidenced by several recent international conferences and texts devoted to this topic (ACGIH 1991; Armstrong et al. 1992; Proceedings of the Conference on Retrospective Assessment of Occupational Exposures in Epidemiology 1995). Clearly, technical developments are providing new opportunities for advancing exposure assessment. These developments include improvements in analytical instrumentation, a better understanding of pharmacokinetic processes, and the discovery of new biomarkers of exposure. Because occupational health studies often depend on historic exposure information for which no specific monitoring would have been undertaken, the need for retrospective exposure assessment adds an additional dimension of complexity to these studies. However, improved standards for assessment and for ensuring the reliability of such assessments continue to be developed (Siemiatycki et al. 1986). Prospective exposure assessments, of course, can be more readily validated.

The term exposure refers to the concentration of an agent at the boundary between individual and environment. Exposure is normally presumed when an agent is known to be present in a work environment and there is a reasonable expectation of employee contact with that agent. Exposures may be expressed as an 8-hour time-weighted average (TWA) concentration, which is a measure of exposure intensity averaged over an 8-hour work shift. Peak concentrations are intensities averaged over shorter time periods, such as 15 minutes. Cumulative exposure is a measure of the product of average intensity and duration (e.g., a mean 8-hour TWA concentration multiplied by years worked at that mean concentration). Depending on the nature of the study and the health outcomes of interest, evaluation of peak, average intensity, cumulative or lagged exposures may be desirable.
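
These measures reduce to simple arithmetic on sampling data, as in this sketch with hypothetical personal-sampling results:

```python
# Hypothetical personal-sampling results over one 8-hour shift:
# (concentration in mg/m3, duration in hours)
samples = [(2.0, 3.0), (6.5, 1.0), (1.2, 4.0)]

shift_hours = sum(t for _, t in samples)            # totals 8 h here
twa = sum(c * t for c, t in samples) / shift_hours  # 8-hour time-weighted average

# Cumulative exposure: mean 8-hour TWA multiplied by years worked at that level
years_worked = 12
cumulative = twa * years_worked                     # mg/m3-years

print(f"8-h TWA = {twa:.2f} mg/m3, cumulative = {cumulative:.1f} mg/m3-years")
# 8-h TWA = 2.16 mg/m3, cumulative = 25.9 mg/m3-years
```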

By way of contrast, dose refers to the deposition or absorption of an agent per unit time. Dose or daily intake of an agent may be estimated by combining environmental measurement data with standard assumptions regarding, among other factors, breathing rates and dermal penetration. Alternatively, intake may be estimated based on biomonitoring data. Dose ideally would be measured at the target organ of interest.
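
As an illustration of combining an air concentration with standard assumptions, the sketch below estimates an inhaled daily intake; the breathing rate, exposure duration, absorbed fraction and body weight used here are illustrative defaults, not prescribed values:

```python
def inhaled_daily_intake(air_conc_mg_m3: float,
                         breathing_rate_m3_h: float = 1.25,
                         hours_exposed: float = 8.0,
                         absorbed_fraction: float = 1.0,
                         body_weight_kg: float = 70.0) -> float:
    """Estimated intake (mg/kg/day) = concentration x air volume breathed
    x fraction absorbed / body weight; all defaults are illustrative."""
    return (air_conc_mg_m3 * breathing_rate_m3_h * hours_exposed
            * absorbed_fraction) / body_weight_kg

print(f"{inhaled_daily_intake(2.0):.3f} mg/kg/day")  # 0.286 mg/kg/day
```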

Important exposure assessment factors include:

  1. identification of the relevant agents
  2. determination of their presence and concentrations in appropriate environmental media (e.g., air, contact surfaces)
  3. assessment of the likely routes of entry (inhalation, skin absorption, ingestion), the time course of exposure (daily variation), and cumulative duration of exposure expressed in weeks, months or years
  4. evaluation of the effectiveness of engineering and personal controls (e.g., use of protective clothing and respiratory protection may mitigate exposures) and, finally
  5. host and other considerations, such as the physical level of work activity and the prior health status of individuals, that may modulate target organ concentrations.

Special care should be taken in assessing exposure to agents that are persistent or tend to bioaccumulate (e.g., certain metals, radionuclides or stable organic compounds). With these materials, internal body burdens may increase insidiously even when environmental concentrations appear to be low.

While the situation can be quite complex, often it is not. Certainly, many valuable contributions to identifying occupational hazards have come from studies using common-sense approaches to exposure assessment. Sources of information that can be helpful in identifying and categorizing exposures include:

  1. employee interviews
  2. employer personnel and production records (these include work records, job descriptions, facility and process histories, and chemical inventories)
  3. expert judgement
  4. industrial hygiene records (area, personal, and compliance monitoring, and surface wipe samples, together with health hazard or comprehensive survey reports)
  5. interviews with long-term or retired employees and
  6. biomonitoring data.

There are several advantages to categorizing individual exposures in as much detail as possible. First, the informativeness of a study will be enhanced to the extent that the relevant exposures have been adequately described. Second, the credibility of the findings may be increased because the potential for confounding can be addressed more satisfactorily. Referents and exposed individuals will, by definition, differ as to exposure status, but they may also differ with respect to other measured and unmeasured explanatory factors for the disease of interest. However, if an exposure gradient can be established within the study population, it is less likely that the same degree of confounding will persist within exposure subgroups, thus strengthening the overall study findings.

Job Exposure Matrices

One of the more practical and frequently used approaches to exposure assessment has been to estimate exposures indirectly on the basis of job titles. The use of job exposure matrices can be effective when complete work histories are available and there is a reasonable constancy in both the tasks and exposures associated with the jobs under study. On the broadest scale, standard industry and job title groupings have been devised from routinely collected census data or occupational data provided on death certificates. Unfortunately, the information maintained in these large record systems is often limited to the “current” or “usual” occupation. Furthermore, because the standard groupings do not take into account the conditions present in specific workplaces, they must usually be regarded as crude exposure surrogates.

For community- and registry-based case-control studies, a more detailed exposure assessment has been achieved by utilizing expert opinion to translate job history data obtained through personal interview into semi-quantitative evaluations of likely exposures to specific agents (Siemiatycki et al. 1986). Experts, such as chemists and industrial hygienists, are chosen to assist in the exposure evaluation because of their knowledge and familiarity with various industrial processes. By combining the detailed questionnaire data with knowledge of industrial processes, this approach has been helpful in characterizing exposure differences across work facilities.

The job-exposure matrix approach has also been employed successfully in industry- and company-specific studies (Gamble and Spirtas 1976). Individual job histories (a chronological listing of past department and job assignments for each employee) are often retained in company personnel files and, when available, provide a complete job history for the employees while they are working at that facility. These data may be expanded upon through personal interviews of the study participants. The next step is to inventory all job titles and department or work area designations used during the study period. These may easily range into the hundreds or even thousands within large, multi-process facilities or across companies within an industry, when production, maintenance, research, engineering, plant support services and administrative jobs are all considered over time (often several decades), allowing for changes in industrial processes. Data consolidation can be facilitated by creating a computer file of all work history records and then using edit routines to standardize job title terminology. Those jobs involving relatively homogeneous exposures can be combined to simplify the process of linking exposures to individual jobs. However, the grouping of jobs and work locations should be supported wherever possible by measurement data collected according to a sound sampling strategy.
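
Computationally, a job-exposure matrix is a lookup table keyed on job title (and, where the records support it, calendar period) that is joined against each person's work history. The sketch below illustrates that linkage; the job codes, periods and grades are invented.

```python
# Minimal job-exposure-matrix linkage (job titles, periods and grades invented).

# Matrix: (job title, calendar period) -> exposure grade for one agent.
jem = {
    ("furnace operator", "1960-1974"): 3,
    ("furnace operator", "1975-1989"): 2,
    ("maintenance",      "1960-1974"): 2,
    ("clerk",            "1960-1989"): 0,
}

# One worker's history: (job title, calendar period, years in that job).
history = [("maintenance", "1960-1974", 5), ("furnace operator", "1975-1989", 10)]

# Cumulative exposure index = sum of grade x years over the whole history.
cei = sum(jem[(job, period)] * years for job, period, years in history)
print(f"CEI = {cei} grade-years")    # 2*5 + 2*10 = 30
```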

Even with computerized work histories, retrospective linkage of exposure data to individuals can be a difficult task. Certainly, workplace conditions will be altered as technologies change, product demand shifts, and new regulations are put in place. There may also be changes in product formulations and seasonal production patterns in many industries. Permanent records may be kept regarding some changes. However, it is less likely that records will be retained regarding seasonal and other marginal process and production changes. Employees also may be trained to perform multiple jobs and then be rotated among jobs as production demands change. All of these circumstances add complexity to the exposure profiles of employees. Nevertheless, there are also work settings that have remained relatively unchanged for many years. In the final analysis, each work setting must be evaluated in its own right.

Ultimately, it will be necessary to summarize the worklife exposure history of each person in a study. The choice of summary measure has been shown to exert considerable influence on the final exposure-effect measures of risk (Suarez-Almazor et al. 1992), and hence great care has to be exercised in selecting the most appropriate summary measure of exposure.

Industrial Hygiene—Environmental Measurement

Monitoring of work exposures is a fundamental ongoing activity in protecting employee health. Thus, industrial hygiene records may already exist at the time an epidemiological study is being planned. If so, these data should be reviewed to determine how well the target population has been covered, how many years of data are represented in the files, and how easily the measurements can be linked to jobs, work areas and individuals. These determinations will be helpful both in assessing the feasibility of the epidemiological study and in identifying data gaps that could be remedied with additional exposure sampling.

The issue of how best to link measurement data to specific jobs and individuals is a particularly important one. Area and breathing zone sampling may be helpful to industrial hygienists in identifying emission sources for corrective actions, but could be less useful in characterizing actual employee exposures unless careful time studies of employee work activities have been performed. For example, continuous area monitoring may identify excursion exposures at certain times in the day, but the question remains as to whether or not employees were in the work area at that time.

Personal sampling data generally provide more accurate estimates of employee exposure as long as the sampling is carried out under representative conditions, the use of personal protective gear is properly taken into account, and the job tasks and process conditions are relatively constant from day to day. Personal samples may be readily linked to the individual employee through the use of personal identifiers. These data may be generalized to other employees in the same jobs and to other time periods as warranted. However, based on their own experience, Rappaport et al. (1993) have cautioned that exposure concentrations may be highly variable even among employees assigned to what are considered homogeneous exposure groups. Again, expert judgement is needed in deciding whether or not homogeneous exposure groups can be presumed.

Researchers have successfully combined a job-exposure matrix approach with utilization of environmental measurement data to estimate exposures within the cells of the matrix. When measurement data are found to be lacking, it may be possible to fill in data gaps through the use of exposure modelling. Generally, this involves developing a model for relating environmental concentrations to more easily assessed determinants of exposure concentrations (e.g., production volumes, physical characteristics of the facility including the use of exhaust ventilation systems, agent volatility and nature of the work activity). The model is constructed for work settings with known environmental concentrations and then used to estimate concentrations in similar work settings lacking measurement data but having information on such parameters as constituent ingredients and production volumes. This approach may be particularly helpful for the retrospective estimation of exposures.
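
In its simplest form, such a model is a regression of measured concentrations on determinants of exposure, fitted where measurements exist and then applied to settings that lack them. The determinants and data below are invented, and a real application would usually log-transform the roughly lognormal concentration data; the sketch shows only the fit-then-predict pattern.

```python
# Fit a simple determinants-of-exposure model, then predict an unmeasured setting.
# All data are invented for illustration.
import numpy as np

# Determinants: production volume (t/day), local exhaust ventilation (1 = present).
X = np.array([[10, 1], [20, 1], [15, 0], [30, 0], [25, 1]], dtype=float)
y = np.array([0.8, 1.4, 2.0, 3.9, 1.7])     # measured concentrations, mg/m3

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Work setting with no measurements: 18 t/day, no exhaust ventilation.
unmeasured = np.array([1.0, 18.0, 0.0])
print(f"Predicted concentration: {unmeasured @ coef:.2f} mg/m3")
```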

Another important assessment issue is the handling of exposure to mixtures. First, from an analytic viewpoint, separate detection of chemically related compounds and elimination of interferences from other substances present in the sample may not be within the capability of the analytic procedure. The limitations of the analytic procedures used to provide measurement data need to be evaluated and the study objectives modified accordingly. Secondly, certain agents may almost always be used together and hence occur in approximately the same relative proportions throughout the work environment under study. In this situation, internal statistical analyses per se will not be useful in distinguishing whether effects are due to one agent, to the other, or to a combination of the agents. Such judgements would only be possible based on review of external studies in which the same agent combinations had not occurred. Finally, in situations where different materials are used interchangeably depending on product specifications (e.g., the use of different colourants to obtain desired colour contrasts), it may be impossible to attribute effects to any specific agent.

Biological Monitoring

Biomarkers are molecular, biochemical or cellular alterations that can be measured in biological media such as human tissue, cells or fluids. A primary reason for developing biomarkers of exposure is to provide an estimate of internal dose for a particular agent. This approach is especially useful when multiple routes of exposure are likely (e.g., inhalation and skin absorption), when protective gear is worn intermittently, or when the conditions of exposure are unpredictable. Biomonitoring can be especially advantageous when the agents of interest are known to have relatively long biological half-lives. From a statistical perspective, an advantage of biological monitoring over air monitoring may be seen with agents having a half-life as short as ten hours, depending upon the degree of environmental variability (Droz and Wu 1991). The exceedingly long half-lives of materials such as chlorinated dioxins (measured in years) make these compounds ideal candidates for biological monitoring. As with analytical methods for measuring air concentrations, one must be aware of potential interferences. For example, before utilizing a particular metabolite as a biomarker, it should be determined whether or not other common substances, such as those contained in certain medications and in cigarette smoke, could be metabolized to the same end point. In general, basic knowledge of the pharmacokinetics of an agent is needed before biological monitoring is utilized as a basis for exposure assessment.
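
The role of the biological half-life can be illustrated with a standard one-compartment accumulation model: under constant daily intake, body burden rises toward a plateau whose height and time-to-plateau both scale with the half-life. This simplification and all parameter values below are assumptions for illustration, not figures from the text.

```python
# One-compartment body-burden model (standard simplification; values invented).
import math

def body_burden(daily_intake, half_life_days, days):
    """Burden after `days` of constant intake: (I/k)(1 - exp(-k*t)), k = ln2 / t_half."""
    k = math.log(2) / half_life_days
    return (daily_intake / k) * (1 - math.exp(-k * days))

# Unit daily intake for one year, from a short-lived solvent to a dioxin-like compound:
for t_half in (0.5, 30, 3650):                 # half-life in days
    print(f"t1/2 = {t_half:>6} d -> burden = {body_burden(1.0, t_half, 365):.1f} units")
# Short half-lives reflect only recent exposure; long half-lives integrate years of intake.
```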

The most frequent points of measurement include alveolar air, urine and blood. Alveolar air samples may be helpful in characterizing high short-term solvent exposures that have occurred within minutes or hours of when the sample was collected. Urinary samples are typically collected to determine excretion rates for metabolites of the compound of interest. Blood samples may be collected for direct measurement of the compound, for measurement of metabolites, or for determination of protein or DNA adducts (e.g., albumin or haemoglobin adducts, and DNA adducts in circulating lymphocytes). Accessible tissue cells, such as epithelial cells from the buccal area of the mouth, may also be sampled for identification of DNA adducts.

Determination of cholinesterase activity in red blood cells and plasma exemplifies the use of biochemical alterations as a measure of exposure. Organophosphorus pesticides inhibit cholinesterase activity and hence measurement of that activity before and after likely exposure to these compounds can be a useful indicator of exposure intensity. However, as one progresses along the spectrum of biological alterations, it becomes more difficult to distinguish between biomarkers of exposure and those of effect. In general, effect measures tend to be non-specific for the substance of interest and, therefore, other potential explanations of the effect may need to be assessed in order to support using that parameter as an exposure measure. Exposure measures should either be directly tied to the agent of interest or there should be a sound basis for linking any indirect measure to the agent. Despite these qualifications, biological monitoring holds much promise as a means for improving exposure assessment in support of epidemiological studies.

Conclusions

Comparisons in occupational epidemiology studies require, at a minimum, a group of workers with exposure to compare against a group of workers without exposure. Such distinctions are crude, but can be helpful in identifying problem areas. Clearly, however, the more refined the measure of exposure, the more useful the study will be, specifically in terms of its ability to identify and develop appropriately targeted intervention programmes.

Muscular Work

Muscular Work in Occupational Activities

In industrialized countries around 20% of workers are still employed in jobs requiring muscular effort (Rutenfranz et al. 1990). The number of conventional heavy physical jobs has decreased, but, on the other hand, many jobs have become more static, asymmetrical and stationary. In developing countries, muscular work of all forms is still very common.

Muscular work in occupational activities can be roughly divided into four groups: heavy dynamic muscle work, manual materials handling, static work and repetitive work. Heavy dynamic work tasks are found in forestry, agriculture and the construction industry, for example. Materials handling is common, for example, in nursing, transportation and warehousing, while static loads exist in office work, the electronics industry and in repair and maintenance tasks. Repetitive work tasks can be found in the food and wood-processing industries, for example.

It is important to note that manual materials handling and repetitive work are basically either dynamic or static muscular work, or a combination of these two.

Physiology of Muscular Work

Dynamic muscular work

In dynamic work, active skeletal muscles contract and relax rhythmically. The blood flow to the muscles is increased to match metabolic needs. The increased blood flow is achieved through increased pumping of the heart (cardiac output), decreased blood flow to inactive areas, such as kidneys and liver, and increased number of open blood vessels in the working musculature. Heart rate, blood pressure, and oxygen extraction in the muscles increase linearly in relation to working intensity. Also, pulmonary ventilation is heightened owing to deeper breathing and increased breathing frequency. The purpose of activating the whole cardio-respiratory system is to enhance oxygen delivery to the active muscles. The level of oxygen consumption measured during heavy dynamic muscle work indicates the intensity of the work. The maximum oxygen consumption (VO2max) indicates the person’s maximum capacity for aerobic work. Oxygen consumption values can be translated to energy expenditure (1 litre of oxygen consumption per minute corresponds to approximately 5 kcal/min or 21 kJ/min).
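
That conversion factor makes energy expenditure a one-line calculation from a measured oxygen uptake; the sketch below applies it, with the measured value invented for illustration.

```python
# Energy expenditure from oxygen consumption, using the conversion in the text:
# 1 litre O2/min ~ 5 kcal/min ~ 21 kJ/min.
vo2 = 1.5    # measured oxygen uptake during work, litres/min (example value)
print(f"{vo2} l O2/min ~ {vo2 * 5:.1f} kcal/min ({vo2 * 21:.0f} kJ/min)")
```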

In the case of dynamic work, when the active muscle mass is smaller (as in the arms), maximum working capacity and peak oxygen consumption are smaller than in dynamic work with large muscles. At the same external work output, dynamic work with small muscles elicits higher cardio-respiratory responses (e.g., heart rate, blood pressure) than work with large muscles (figure 1).

Figure 1. Static versus dynamic work    


Static muscle work

In static work, muscle contraction does not produce visible movement, as, for example, in a limb. Static work increases the pressure inside the muscle, which together with the mechanical compression occludes blood circulation partially or totally. The delivery of nutrients and oxygen to the muscle and the removal of metabolic end-products from the muscle are hampered. Thus, in static work, muscles become fatigued more easily than in dynamic work.

The most prominent circulatory feature of static work is a rise in blood pressure. Heart rate and cardiac output do not change much. Above a certain intensity of effort, blood pressure increases in direct relation to the intensity and the duration of the effort. Furthermore, at the same relative intensity of effort, static work with large muscle groups produces a greater blood pressure response than does work with smaller muscles. (See figure 2)

Figure 2. The expanded stress-strain model modified from Rohmert (1984)


In principle, the regulation of ventilation and circulation in static work is similar to that in dynamic work, but the metabolic signals from the muscles are stronger, and induce a different response pattern.

Consequences of Muscular Overload in Occupational Activities

The degree of physical strain a worker experiences in muscular work depends on the size of the working muscle mass, the type of muscular contractions (static, dynamic), the intensity of contractions, and individual characteristics.

When muscular workload does not exceed the worker’s physical capacities, the body will adapt to the load, and recovery is quick when the work is stopped. If the muscular load is too high, fatigue will ensue, working capacity will be reduced and recovery will slow down. Peak loads or prolonged overload may result in organ damage (in the form of occupational or work-related diseases). On the other hand, muscular work of a certain intensity, frequency and duration may also produce training effects, whereas excessively low muscular demands may cause detraining effects. These relationships are represented by the so-called expanded stress-strain concept developed by Rohmert (1984) (figure 3).

Figure 3. Analysis of acceptable workloads


In general, there is little epidemiological evidence that muscular overload is a risk factor for diseases. However, poor health, disability and subjective overload at work converge in physically demanding jobs, especially with older workers. Furthermore, many risk factors for work-related musculoskeletal diseases are connected to different aspects of muscular workload, such as the exertion of strength, poor working postures, lifting and sudden peak loads.

One of the aims of ergonomics has been to determine acceptable limits for muscular workloads which could be applied for the prevention of fatigue and disorders. Whereas the prevention of chronic effects is the focus of epidemiology, work physiology deals mostly with short-term effects, that is, fatigue in work tasks or during a work day.

Acceptable Workload in Heavy Dynamic Muscular Work

The assessment of acceptable workload in dynamic work tasks has traditionally been based on measurements of oxygen consumption (or, correspondingly, energy expenditure). Oxygen consumption can be measured with relative ease in the field with portable devices (e.g., Douglas bag, Max Planck respirometer, Oxylog, Cosmed), or it can be estimated from heart rate recordings, which can be made reliably at the workplace, for example, with the SportTester device. The use of heart rate in the estimation of oxygen consumption requires that it be individually calibrated against measured oxygen consumption in a standard work mode in the laboratory; that is, the investigator must know the oxygen consumption of the individual subject at a given heart rate. Heart rate recordings should be treated with caution because they are also affected by such factors as physical fitness, environmental temperature, psychological factors and the size of the active muscle mass. Thus heart rate measurements can lead to overestimates of oxygen consumption; oxygen consumption values, for their part, can underestimate global physiological strain, since they reflect only energy requirements.

Relative aerobic strain (RAS) is defined as the fraction (expressed as a percentage) of a worker’s oxygen consumption measured on the job relative to his or her VO2max measured in the laboratory. If only heart rate measurements are available, a close approximation to RAS can be made by calculating a value for percentage heart rate range (% HR range) with the so-called Karvonen formula as in figure 3.
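
The Karvonen calculation expresses the working heart rate as a fraction of the range between resting and maximal heart rate. The sketch below implements it alongside the direct RAS definition; the heart rates and oxygen-uptake values are invented example readings, chosen to land near the roughly 31% strain reported later in the text for parcel sorting.

```python
# Relative aerobic strain (RAS) and the Karvonen %HR-range approximation.
# The readings below are invented examples.

def ras(vo2_work, vo2_max):
    """RAS: oxygen uptake on the job as a percentage of maximal oxygen uptake."""
    return 100.0 * vo2_work / vo2_max

def hr_range_percent(hr_work, hr_rest, hr_max):
    """Karvonen formula: %HR range = (HRwork - HRrest) / (HRmax - HRrest) x 100."""
    return 100.0 * (hr_work - hr_rest) / (hr_max - hr_rest)

print(f"RAS       = {ras(1.0, 3.2):.0f}% of VO2max")         # 1.0 vs. 3.2 l/min -> ~31%
print(f"%HR range = {hr_range_percent(101, 65, 180):.0f}%")  # ~31% with these readings
```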

VO2max is usually measured on a bicycle ergometer or treadmill, for which the mechanical efficiency is high (20-25%). When the active muscle mass is smaller or the static component is higher, VO2max and mechanical efficiency will be smaller than in the case of exercise with large muscle groups. For example, it has been found that in the sorting of postal parcels the VO2max of workers was only 65% of the maximum measured on a bicycle ergometer, and the mechanical efficiency of the task was less than 1%. When guidelines are based on oxygen consumption, the test mode in the maximal test should be as close as possible to the real task. This goal, however, is difficult to achieve.

According to Åstrand’s (1960) classical study, RAS should not exceed 50% during an eight-hour working day. In her experiments, at a 50% workload, body weight decreased, heart rate did not reach steady state and subjective discomfort increased during the day. She recommended a 50% RAS limit for both men and women. Later on she found that construction workers spontaneously chose an average RAS level of 40% (range 25-55%) during a working day. Several more recent studies have indicated that the acceptable RAS is lower than 50%. Most authors recommend 30-35% as an acceptable RAS level for the entire working day.

Originally, the acceptable RAS levels were developed for pure dynamic muscle work, which rarely occurs in real working life. It may happen that acceptable RAS levels are not exceeded, for example, in a lifting task, but the local load on the back may greatly exceed acceptable levels. Despite its limitations, RAS determination has been widely used in the assessment of physical strain in different jobs.

In addition to the measurement or estimation of oxygen consumption, other useful physiological field methods are also available for the quantification of physical stress or strain in heavy dynamic work. Observational techniques can be used in the estimation of energy expenditure (e.g., with the aid of the Edholm scale) (Edholm 1966). Rating of perceived exertion (RPE) indicates the subjective accumulation of fatigue. New ambulatory blood pressure monitoring systems allow more detailed analyses of circulatory responses.

Acceptable Workload in Manual Materials Handling

Manual materials handling includes such work tasks as lifting, carrying, pushing and pulling of various external loads. Most of the research in this area has focused on low back problems in lifting tasks, especially from the biomechanical point of view.

A RAS level of 20-35% has been recommended for lifting tasks, when the task is compared to an individual maximum oxygen consumption obtained from a bicycle ergometer test.

Recommendations for a maximum permissible heart rate are either absolute or related to the resting heart rate. The absolute values for men and women are 90 to 112 beats per minute in continuous manual materials handling. These values are about the same as the recommended values for the increase in heart rate above resting levels, that is, 30 to 35 beats per minute. These recommendations are also valid for heavy dynamic muscle work by young and healthy men and women. However, as mentioned previously, heart rate data should be treated with caution, because they are also affected by factors other than muscle work.

The guidelines for acceptable workload for manual materials handling based on biomechanical analyses comprise several factors, such as weight of the load, handling frequency, lifting height, distance of the load from the body and physical characteristics of the person.

In one large-scale field study (Louhevaara, Hakola and Ollila 1990) it was found that healthy male workers could handle postal parcels weighing 4 to 5 kilograms during a shift without any signs of objective or subjective fatigue. Most of the handling occurred below shoulder level, the average handling frequency was less than 8 parcels per minute and the total number of parcels was less than 1,500 per shift. The mean heart rate of the workers was 101 beats per minute and their mean oxygen consumption 1.0 l/min, which corresponded to 31% RAS as related to bicycle maximum.

Observations of working postures and of the use of force, carried out for example according to the OWAS method (Karhu, Kansi and Kuorinka 1977), ratings of perceived exertion and ambulatory blood pressure recordings are also suitable methods for stress and strain assessments in manual materials handling. Electromyography can be used to assess local strain responses, for example in arm and back muscles.

Acceptable Workload for Static Muscular Work

Static muscular work is required chiefly in maintaining working postures. The endurance time of static contraction is exponentially dependent on the relative force of contraction. This means, for example, that when the static contraction requires 20% of the maximum force, the endurance time is 5 to 7 minutes, and when the relative force is 50%, the endurance time is about 1 minute.
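
One classical fit to this exponential-type relation is Rohmert's static endurance equation; treating it as the curve intended here is an assumption, but it reproduces the figures just quoted (roughly 6 minutes at 20% of maximum force, about 1 minute at 50%).

```python
# Rohmert-style static endurance curve (one classical fit, assumed here).

def endurance_minutes(f):
    """Approximate holding time for a static contraction at fraction f of
    maximum voluntary force (meaningful roughly for 0.15 < f <= 1.0)."""
    return -1.5 + 2.1 / f - 0.6 / f**2 + 0.1 / f**3

for f in (0.2, 0.5):
    print(f"{f:.0%} of max force -> about {endurance_minutes(f):.1f} min")
# 20% -> ~6.5 min and 50% -> ~1.1 min, consistent with the values in the text.
```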

Older studies indicated that no fatigue develops when the relative force is below 15% of the maximum force. However, more recent studies have indicated that the acceptable relative force is specific to the muscle or muscle group, and is 2 to 5% of the maximum static strength. These force limits are, however, difficult to apply in practical work situations because they require electromyographic recordings.

For the practitioner, fewer field methods are available for the quantification of strain in static work. Some observational methods (e.g., the OWAS method) exist to analyse the proportion of poor working postures, that is, postures deviating from normal middle positions of the main joints. Blood pressure measurements and ratings of perceived exertion may be useful, whereas heart rate is not so applicable.

Acceptable Workload in Repetitive Work

Repetitive work with small muscle groups resembles static muscle work from the point of view of circulatory and metabolic responses. Typically, in repetitive work, muscles contract over 30 times per minute. When the relative force of contraction exceeds 10% of the maximum force, endurance time and muscle force start to decrease. However, there is wide individual variation in endurance times. For example, endurance time varies from two to fifty minutes when the muscle contracts 90 to 110 times per minute at a relative force level of 10 to 20% (Laurig 1974).

It is very difficult to set any definitive criteria for repetitive work, because even very light levels of work (as with the use of a microcomputer mouse) may cause increases in intramuscular pressure, which may sometimes lead to swelling of muscle fibres, pain and reduction in muscle strength.

Repetitive and static muscle work will cause fatigue and reduced work capacity at very low relative force levels. Therefore, ergonomic interventions should aim to minimize the number of repetitive movements and static contractions as far as possible. Very few field methods are available for strain assessment in repetitive work.

Prevention of Muscular Overload

Relatively little epidemiological evidence exists to show that muscular load is harmful to health. However, work physiological and ergonomic studies indicate that muscular overload results in fatigue (i.e., decrease in work capacity) and may reduce productivity and quality of work.

The prevention of muscular overload may be directed to the work content, the work environment and the worker. The load can be adjusted by technical means, which focus on the work environment, tools, and/or the working methods. The fastest way to regulate muscular workload is to increase the flexibility of working time on an individual basis. This means designing work-rest regimens which take into account the workload and the needs and capacities of the individual worker.

Static and repetitive muscular work should be kept at a minimum. Occasional heavy dynamic work phases may be useful for the maintenance of endurance type physical fitness. Probably, the most useful form of physical activity that can be incorporated into a working day is brisk walking or stair climbing.

Prevention of muscular overload, however, is very difficult if a worker’s physical fitness or working skills are poor. Appropriate training will improve working skills and may reduce muscular loads at work. Also, regular physical exercise during work or leisure time will increase the muscular and cardio-respiratory capacities of the worker.

Cellular Injury and Cellular Death

Virtually all of medicine is devoted to either preventing cell death, in diseases such as myocardial infarction, stroke, trauma and shock, or causing it, as in the case of infectious diseases and cancer. It is, therefore, essential to understand the nature and mechanisms involved. Cell death has been classified as “accidental”, that is, caused by toxic agents, ischaemia and so on, or “programmed”, as occurs during embryological development, including formation of digits, and resorption of the tadpole tail.

Cell injury and cell death are, therefore, important both in physiology and in pathophysiology. Physiological cell death is extremely important during embryogenesis and embryonic development. The study of cell death during development has led to important new information on the molecular genetics involved, especially through the study of development in invertebrate animals. In these animals, the precise location and the significance of cells that are destined to undergo cell death have been carefully studied and, with the use of classic mutagenesis techniques, several involved genes have now been identified. In adult organs, the balance between cell death and cell proliferation controls organ size. In some organs, such as the skin and the intestine, there is a continual turnover of cells. In the skin, for example, cells differentiate as they reach the surface, and finally undergo terminal differentiation and cell death as keratinization proceeds with the formation of crosslinked envelopes.

Many classes of toxic chemicals are capable of inducing acute cell injury followed by death. These include anoxia and ischaemia and their chemical analogues such as potassium cyanide; chemical carcinogens, which form electrophiles that covalently bind to proteins and nucleic acids; oxidant chemicals, resulting in free radical formation and oxidant injury; activation of complement; and a variety of calcium ionophores. Cell death is also an important component of chemical carcinogenesis; many complete chemical carcinogens, at carcinogenic doses, produce acute necrosis and inflammation followed by regeneration and preneoplasia.

Definitions

Cell injury

Cell injury is defined as an event or stimulus, such as a toxic chemical, that perturbs the normal homeostasis of the cell, thus causing a number of events to occur (figure 1). The principal targets of lethal injury illustrated are inhibition of ATP synthesis, disruption of plasma membrane integrity or withdrawal of essential growth factors.

Figure 1. Cell injury


Lethal injuries result in the death of a cell after a variable period of time, depending on temperature, cell type and the stimulus; or they can be sublethal or chronic—that is, the injury results in an altered homeostatic state which, though abnormal, does not result in cell death (Trump and Arstila 1971; Trump and Berezesky 1992; Trump and Berezesky 1995; Trump, Berezesky and Osornio-Vargas 1981). In the case of a lethal injury, there is a phase prior to the time of cell death during which the cell will recover if the injury is removed; after a particular point in time (the “point of no return” or point of cell death), however, removal of the injury does not result in recovery, and the cell instead undergoes degradation and hydrolysis, ultimately reaching physical-chemical equilibrium with the environment. This is the phase known as necrosis. During the prelethal phase, several principal types of change occur, depending on the cell and the type of injury. These are known as apoptosis and oncosis.

Apoptosis

Apoptosis is derived from the Greek words apo, meaning away from, and ptosis, meaning to fall. The term, meaning a “falling away from”, is derived from the fact that, during this type of prelethal change, the cells shrink and undergo marked blebbing at the periphery. The blebs then detach and float away. Apoptosis occurs in a variety of cell types following various types of toxic injury (Wyllie, Kerr and Currie 1980). It is especially prominent in lymphocytes, where it is the predominant mechanism for turnover of lymphocyte clones. The resulting fragments appear as the basophilic bodies seen within macrophages in lymph nodes. In other organs, apoptosis typically occurs in single cells which are rapidly cleared away before and following death by phagocytosis of the fragments by adjacent parenchymal cells or by macrophages. Apoptosis occurring in single cells with subsequent phagocytosis typically does not result in inflammation. Prior to death, apoptotic cells show a very dense cytosol with normal or condensed mitochondria. The endoplasmic reticulum (ER) is normal or only slightly dilated. The nuclear chromatin is markedly clumped along the nuclear envelope and around the nucleolus. The nuclear contour is also irregular and nuclear fragmentation occurs. The chromatin condensation is associated with DNA fragmentation which, in many instances, occurs between nucleosomes, giving a characteristic ladder appearance on electrophoresis.

In apoptosis, increased [Ca2+]i may stimulate K+ efflux, resulting in cell shrinkage, which probably requires ATP. Injuries that totally inhibit ATP synthesis are therefore more likely to result in oncosis. A sustained increase of [Ca2+]i has a number of deleterious effects, including activation of proteases, endonucleases and phospholipases. Endonuclease activation results in single and double DNA strand breaks which, in turn, stimulate increased levels of p53 and poly-ADP ribosylation of nuclear proteins which are essential in DNA repair. Activation of proteases modifies a number of substrates, including actin and related proteins, leading to bleb formation. Another important substrate is poly(ADP-ribose) polymerase (PARP), whose cleavage inhibits DNA repair. Increased [Ca2+]i is also associated with activation of a number of protein kinases, such as MAP kinase, calmodulin kinase and others. Such kinases are involved in the activation of transcription factors which initiate transcription of immediate-early genes, for example, c-fos, c-jun and c-myc, and in the activation of phospholipase A2, which results in permeabilization of the plasma membrane and of intracellular membranes such as the inner membrane of mitochondria.

Oncosis

Oncosis, derived from the Greek word onkos, meaning to swell, is so named because in this type of prelethal change the cell begins to swell almost immediately following the injury (Majno and Joris 1995). The reason for the swelling is an increase in cations in the water within the cell. The principal cation responsible is sodium, which is normally regulated to maintain cell volume. However, in the absence of ATP, or if the Na-ATPase of the plasmalemma is inhibited, volume control is lost because of the osmotic effect of intracellular protein, and sodium and water within the cell continue to increase. Among the early events in oncosis are, therefore, increased [Na+]i, which leads to cellular swelling, and increased [Ca2+]i, resulting either from influx from the extracellular space or release from intracellular stores. This results in swelling of the cytosol, swelling of the endoplasmic reticulum and Golgi apparatus, and the formation of watery blebs around the cell surface. The mitochondria initially undergo condensation, but later they too show high-amplitude swelling because of damage to the inner mitochondrial membrane. In this type of prelethal change, the chromatin undergoes condensation and ultimately degradation; however, the characteristic ladder pattern of apoptosis is not seen.

Necrosis

Necrosis refers to the series of changes that occur following cell death, when the cell is converted to debris which is typically removed by the inflammatory response. Two types can be distinguished: oncotic necrosis and apoptotic necrosis. Oncotic necrosis typically occurs in large zones, for example, in a myocardial infarct or regionally in an organ after chemical toxicity, such as the renal proximal tubule following administration of HgCl2. Broad zones of an organ are involved and the necrotic cells rapidly incite an inflammatory reaction, first acute and then chronic. In the event that the organism survives, in many organs necrosis is followed by clearing away of the dead cells and regeneration, for example, in the liver or kidney following chemical toxicity. In contrast, apoptotic necrosis typically occurs on a single-cell basis and the necrotic debris forms within the phagocytes, whether macrophages or adjacent parenchymal cells. The earliest characteristics of necrotic cells include interruptions in plasma membrane continuity and the appearance of flocculent densities, representing denatured proteins, within the mitochondrial matrix. In some forms of injury that do not initially interfere with mitochondrial calcium accumulation, calcium phosphate deposits can be seen within the mitochondria. Other membrane systems, such as the ER, the lysosomes and the Golgi apparatus, fragment similarly. Ultimately, the nuclear chromatin undergoes lysis, resulting from attack by lysosomal hydrolases. Following cell death, lysosomal hydrolases—cathepsins, nucleases and lipases—play an important part in clearing away debris, since they have an acid pH optimum and can survive the low pH of necrotic cells while other cellular enzymes are denatured and inactivated.

Mechanisms

Initial stimulus

In the case of lethal injuries, the most common initial interactions resulting in injury leading to cell death are interference with energy metabolism, such as anoxia, ischaemia or inhibitors of respiration and glycolysis (potassium cyanide, carbon monoxide, iodoacetate, and so on). As mentioned above, high doses of compounds that inhibit energy metabolism typically result in oncosis. The other common type of initial injury resulting in acute cell death is modification of the function of the plasma membrane (Trump and Arstila 1971; Trump, Berezesky and Osornio-Vargas 1981). This can be direct damage and permeabilization, as in the case of trauma or activation of the C5b-C9 complex of complement; mechanical damage to the cell membrane; or inhibition of the sodium-potassium (Na+-K+) pump with glycosides such as ouabain. Calcium ionophores such as ionomycin or A23187, which rapidly carry [Ca2+] down its gradient into the cell, also cause acute lethal injury. In some cases, the pattern of the prelethal change is apoptosis; in others, it is oncosis.

Signalling pathways

With many types of injury, mitochondrial respiration and oxidative phosphorylation are rapidly affected. In some cells this stimulates anaerobic glycolysis, which is capable of maintaining ATP, but with many injuries this too is inhibited. The lack of ATP results in failure to energize a number of important homeostatic processes, in particular, control of intracellular ion homeostasis (Trump and Berezesky 1992; Trump, Berezesky and Osornio-Vargas 1981). This results in rapid increases of [Ca2+]i, while increased [Na+] and [Cl-] result in cell swelling. Increases in [Ca2+]i result in the activation of a number of other signalling mechanisms discussed below, including a series of kinases, which can result in increased immediate-early gene transcription. Increased [Ca2+]i also modifies cytoskeletal function, in part resulting in bleb formation, and activates endonucleases, proteases and phospholipases. These seem to trigger many of the important effects discussed above, such as membrane damage through protease and lipase activation, direct degradation of DNA from endonuclease activation, and activation of kinases such as MAP kinase and calmodulin kinase, which in turn activate transcription factors.

Through extensive work on development in the invertebrates C. elegans and Drosophila, as well as in human and animal cells, a series of pro-death genes have been identified. Some of these invertebrate genes have been found to have mammalian counterparts. For example, the ced-3 gene, which is essential for programmed cell death in C. elegans, has protease activity and a strong homology with the mammalian interleukin converting enzyme (ICE). A closely related gene called apopain or prICE has recently been identified with even closer homology (Nicholson et al. 1995). In Drosophila, the reaper gene seems to be involved in a signal that leads to programmed cell death. Other pro-death genes include the Fas membrane protein and the important tumour-suppressor gene p53, which is widely conserved. p53 is induced at the protein level following DNA damage and, when phosphorylated, acts as a transcription factor for other genes such as gadd45 and waf-1, which are involved in cell death signalling. Other immediate-early genes such as c-fos, c-jun and c-myc also seem to be involved in some systems.

At the same time, there are anti-death genes which appear to counteract the pro-death genes. The first of these to be identified was ced-9 from C. elegans, which is homologous to bcl-2 in humans. These genes act in an as yet unknown way to prevent cell killing by either genetic or chemical toxins. Some recent evidence indicates that bcl-2 may act as an antioxidant. Currently, there is much effort underway to develop an understanding of the genes involved and to develop ways to activate or inhibit these genes, depending on the situation.

Summary Worklife Exposure Measures

Researchers are fortunate when they have at their disposal a detailed chronology of the worklife experience of workers that provides an historic review of jobs they have held over time. For these workers a job exposure matrix can then be set up that allows each and every job change that a worker has gone through to be associated with specific exposure information.

Detailed exposure histories must be summarized for analysis purposes in order to determine whether patterns are evident that could be related to health and safety issues in the workplace. We can visualize a list of, say, 20 job changes that a worker had experienced in his or her working lifetime. There are then several alternative ways in which the exposure details (for each of the 20 job changes in this example) can be summarized, taking duration and/or concentration/dose/grade of exposure into account.

It is important to note, however, that a study could reach different conclusions depending on the method selected (Suarez-Almazor et al. 1992). Five selected summary worklife exposure measures are shown in table 1.

Table 1. Formulae and dimensions or units of the five selected summary measures of worklife exposure

Exposure measure                     Formula                                          Dimensions/units
Cumulative exposure index (CEI)      Σ (grade x time exposed)                         grade and time
Mean grade (MG)                      Σ (grade x time exposed)/total time exposed      grade
Highest grade ever (HG)              highest grade to which exposed for ≥ 7 days      grade
Time-weighted average (TWA) grade    Σ (grade x time exposed)/total time employed     grade
Total time exposed (TTE)             Σ time exposed                                   time

Adapted from Suarez-Almazor et al. 1992.

Cumulative exposure index. The cumulative exposure index (CEI) is equivalent to “dose” in toxicological studies and represents the sum, over a working lifetime, of the products of exposure grade and exposure duration for each successive job title. It includes time in its units.

Mean grade. The mean grade (MG) cumulates the products of exposure grade and exposure duration for each successive job title (i.e., the CEI) and divides by the total time exposed at any grade greater than zero. MG is independent of time in its units; the summary measure for a person exposed for a long period at a high concentration will be similar to that for a person exposed for a short period at a high concentration. Within any matched set in a case-control design, MG is an average grade of exposure per unit of time exposed. It is an average grade for the time actually exposed to the agent under consideration.

Highest grade ever. The highest grade ever (HG) is determined from scanning the work history for the highest grade assignment in the period of observation to which the worker was exposed for at least seven days. The HG could misrepresent a person’s worklife exposure because, by its very formulation, it is based on a maximizing rather than on an averaging procedure and is therefore independent of duration of exposure in its units.

Time-weighted average grade. The time-weighted average (TWA) grade is the cumulative exposure index (CEI) divided by the total time employed. Within any matched set in a case-control design, the TWA grade averages over total time employed. It differs from MG, which averages only over the total time actually exposed. Thus, TWA grade can be viewed as an average exposure per unit of time in the full term of employment regardless of exposure per se.

Total time exposed. The total time exposed (TTE) accumulates all time periods associated with exposure in units of time. TTE has appeal for its simplicity. However, it is well accepted that health effects must be related not only to duration of chemical exposure, but also to the intensity of that exposure (i.e., the concentration or grade).
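
Given a work history coded as (grade, duration) pairs, all five measures reduce to a few lines of arithmetic. The sketch below computes them for an invented three-job history; durations are in years, so the seven-day threshold for HG appears as its year equivalent.

```python
# The five summary worklife exposure measures of table 1 (history invented).

history = [(2, 5.0), (0, 3.0), (4, 1.0)]    # (exposure grade, years); grade 0 = unexposed
SEVEN_DAYS = 7 / 365.0                       # HG duration threshold, in years

cei = sum(g * t for g, t in history)                  # grade-years
tte = sum(t for g, t in history if g > 0)             # years actually exposed
total_employed = sum(t for _, t in history)           # years employed
mg = cei / tte                                        # grade, averaged over exposed time
twa = cei / total_employed                            # grade, averaged over employment
hg = max((g for g, t in history if g > 0 and t >= SEVEN_DAYS), default=0)

print(f"CEI = {cei} grade-years; TTE = {tte} years")
print(f"MG = {mg:.2f}; TWA grade = {twa:.2f}; HG = {hg}")
# CEI = 14.0; TTE = 6.0; MG = 2.33; TWA = 1.56; HG = 4
```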

Clearly, the utility of a summary exposure measure is determined by the respective weight it attributes to either duration or concentration of exposure or both. Thus different measures may produce different results (Walker and Blettner 1985). Ideally, the summary measure selected should be based on a set of defensible assumptions regarding the postulated biological mechanism for the agent or disease association under study (Smith 1987). This procedure is not, however, always possible. Very often, the biological effect of the duration of exposure or the concentration of the agent under study is unknown. In this context, the use of different exposure measures may be useful to suggest a mechanism by which exposure exerts its effect.

It is recommended that, in the absence of proved models for assessing exposure, a variety of summary worklife exposure measures be used to estimate risk. This approach would facilitate the comparison of findings across studies.

Postures at Work

A person’s posture at work—the mutual organization of the trunk, head and extremities—can be analysed and understood from several points of view. Postures aim at advancing the work; thus, they have a finality which influences their nature, their time relation and their cost (physiological or otherwise) to the person in question. There is a close interaction between the body’s physiological capacities and characteristics and the requirement of the work.

Musculoskeletal load is a necessary element in body functions and indispensable in well-being. From the standpoint of the design of the work, the question is to find the optimal balance between the necessary and the excessive.

Postures have interested researchers and practitioners for at least the following reasons:

    1. A posture is the source of musculoskeletal load. Except for relaxed standing, sitting and lying horizontally, muscles have to create forces to balance the posture and/or control movements. In classical heavy tasks, for example in the construction industry or in the manual handling of heavy materials, external forces, both dynamic and static, add to the internal forces in the body, sometimes creating high loads which may exceed the capacity of the tissues. (See figure 1) Even in relaxed postures, when muscle work approaches zero, tendons and joints may be loaded and show signs of fatigue. A job with low apparent loading—an example being that of a microscopist—may become tedious and strenuous when it is carried out over a long period of time.
    2. Posture is closely related to balance and stability. In fact, posture is controlled by several neural reflexes where input from tactile sensations and visual cues from the surroundings play an important role. Some postures, like reaching objects from a distance, are inherently unstable. Loss of balance is a common immediate cause of work accidents. Some work tasks are performed in an environment where stability cannot always be guaranteed, for example, in the construction industry.
    3. Posture is the basis of skilled movements and visual observation. Many tasks require fine, skilled hand movements and close observation of the object of the work. In such cases, posture becomes the platform of these actions. Attention is directed to the task, and the postural elements are enlisted to support the tasks: the posture becomes motionless, the muscular load increases and becomes more static. A French research group showed in their classical study that immobility and musculoskeletal load increased when the rate of work increased (Teiger, Laville and Duraffourg 1974).
    4. Posture is a source of information on the events taking place at work. Observing posture may be intentional or unconscious. Skilful supervisors and workers are known to use postural observations as indicators of the work process. Often, observing postural information is not conscious. For example, on an oil drilling derrick, postural cues have been used to communicate messages between team members during different phases of a task. This takes place under conditions where other means of communication are not possible.

Figure 1. Too-high hand positions or forward bending are among the most common ways of creating “static” load

Safety, Health and Working Postures

From a safety and health point of view, all the aspects of posture described above may be important. However, postures as a source of musculoskeletal illnesses such as low back diseases have attracted the most attention. Musculoskeletal problems related to repetitive work are also connected to postures.

Low back pain (LBP) is a generic term for various low back diseases. It has many causes and posture is one possible causal element. Epidemiological studies have shown that physically heavy work is conducive to LBP and that postures are one element in this process. There are several possible mechanisms which explain why certain postures may cause LBP. Forward bending postures increase the load on the spine and ligaments, which are especially vulnerable to loads in a twisted posture. External loads, especially dynamic ones, such as those imposed by jerks and slipping, may increase the loads on the back by a large factor.

From a safety and health standpoint, it is important to identify bad postures and other postural elements as part of the safety and health analysis of work in general.

Recording and Measuring Working Postures

Postures can be recorded and measured objectively by the use of visual observation or more or less sophisticated measuring techniques. They can also be recorded by using self-rating schemes. Most methods consider posture as one of the elements in a larger context, for example, as part of the job content—as do the AET and Renault’s Les profils des postes (Landau and Rohmert 1981; RNUR 1976)—or as a starting point for biomechanical calculations that also take into account other components.

In spite of the advancements in measuring technology, visual observation remains, under field conditions, the only practicable means of systematically recording postures. However, the precision of such observations remains low. Even so, postural observations can be a rich source of information on work in general.

The following short list of measuring methods and techniques presents selected examples:

    1. Self-reporting questionnaires and diaries. Self-reporting questionnaires and diaries are an economical means of collecting postural information. Self-reporting is based on the perception of the subject and usually deviates greatly from “objectively” observed postures, but may still convey important information about the tediousness of the work.
    2. Observation of postures. The observation of postures includes the purely visual recording of the postures and their components as well as methods in which an interview completes the information. Computer support is usually available for these methods. Many methods are available for visual observations. The method may simply contain a catalogue of actions, including postures of the trunk and limbs (e.g., Keyserling 1986; Van der Beek, Van Gaalen and Frings-Dresen 1992). The OWAS method proposes a structured scheme for the analysis, rating and evaluation of trunk and limb postures designed for field conditions (Karhu, Kansi and Kuorinka 1977). The recording and analysis method may contain notation schemes, some of them quite detailed (as with the posture targeting method of Corlett and Bishop 1976), and they may provide a notation for the position of many anatomical elements for each element of the task (Drury 1987).
    3. Computer-aided postural analyses. Computers have aided postural analyses in many ways. Portable computers and special programs allow easy recording and fast analysis of postures. Persson and Kilbom (1983) have developed the program VIRA for upper-limb study; Kerguelen (1986) has produced a complete recording and analysis package for work tasks; Kivi and Mattila (1991) have designed a computerized OWAS version for recording and analysis (a simplified sketch of such a recording tally appears below).

Video is usually an integral part of the recording and analysis process. The US National Institute for Occupational Safety and Health (NIOSH) has presented guidelines for using video methods in hazard analysis (NIOSH 1990).

Biomechanical and anthropometrical computer programs offer specialized tools for analysing some postural elements in the work activity and in the laboratory (e.g., Chaffin 1969).
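
Computerized observation systems of this kind are, at bottom, work-sampling tallies: at fixed intervals the observer enters a posture code, and the program reports the proportion of observed time spent in each posture class. The sketch below illustrates the idea with a simplified, invented code of (back, arms, legs) categories; it is not the official OWAS classification or its action-category tables.

```python
# Simplified posture work-sampling tally (invented categories, not the OWAS tables).
from collections import Counter

# One entry per fixed sampling interval: (back, arms, legs) category codes.
observations = [
    ("bent", "below_shoulder", "standing"),
    ("straight", "below_shoulder", "standing"),
    ("bent", "above_shoulder", "kneeling"),
    ("straight", "below_shoulder", "sitting"),
    ("bent", "below_shoulder", "standing"),
]

counts = Counter(observations)
for posture, k in counts.most_common():
    print(f"{posture}: {100 * k / len(observations):.0f}% of observed time")
```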

                Factors Affecting Working Postures

                Working postures serve a goal, a finality outside themselves. That is why they are related to external working conditions. Postural analysis that does not take into account the work environment and the task itself is of limited interest to ergonomists.

                The dimensional characteristics of the workplace largely define the postures (as in the case of a sitting task), even for dynamic tasks (for example, the handling of material in a confined space). The loads to be handled force the body into a certain posture, as does the weight and nature of the working tool. Some tasks require that body weight be used to support a tool or to apply force on the object of the work, as shown, for example in figure 2.

                Figure 2. Ergonomic aspects of standing

                ERG080F4

Individual differences, age and sex influence postures. In fact, it has been found that a “typical” or “best” posture, for example in manual handling, is largely a fiction. For each individual and each working situation, there are a number of alternative “best” postures from the standpoint of different criteria.


                Job Aids and Supports for Working Postures

Belts, lumbar supports and orthotics have been recommended for tasks carrying a risk of low-back pain or upper-limb musculoskeletal injuries. It has been assumed that these devices support muscles, for example, by controlling intra-abdominal pressure or hand movements. They are also expected to limit the range of movement of the elbow, wrist or fingers. There is, however, no evidence that modifying postural elements with these devices helps to prevent musculoskeletal problems.

                Postural supports in the workplace and on machinery, such as handles, supporting pads for kneeling, and seating aids, may be useful in alleviating postural loads and pain.

                Safety and Health Regulations concerning Postural Elements

                Postures or postural elements have not been subject to regulatory activities per se. However, several documents either contain statements which have a bearing on postures or include the issue of postures as an integral element of a regulation. A complete picture of the existing regulatory material is not available. The following references are presented as examples.

                  1. The International Labour Organization published a Recommendation in 1967 on maximum loads to be handled. Although the Recommendation does not regulate postural elements as such, it has a significant bearing on postural strain. The Recommendation is now outdated but has served an important purpose in focusing attention on problems in manual material handling.
2. The NIOSH lifting guidelines (NIOSH 1981) are likewise not regulations as such, but they have attained that status in practice. The guidelines derive weight limits for loads using the location of the load, a postural element, as a basis (a sketch of the later revised lifting equation follows this list).
                  3. In the International Organization for Standardization as well as in the European Community, ergonomics standards and directives exist which contain matter relating to postural elements (CEN 1990 and 1991).
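The 1981 guide has since been superseded by the revised NIOSH lifting equation (Waters et al. 1993), which expresses a recommended weight limit as a load constant scaled down by task multipliers. The Python sketch below implements the metric form of that equation; the frequency and coupling multipliers, which in practice are read from published tables, default here to the most favourable value of 1.0 for illustration.

    def revised_niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
        """Recommended weight limit in kg (revised NIOSH equation, metric).

        h_cm  - horizontal distance of the hands from the ankles (cm)
        v_cm  - vertical height of the hands at the lift origin (cm)
        d_cm  - vertical travel distance of the load (cm)
        a_deg - asymmetry (trunk twist) angle in degrees
        fm/cm - frequency and coupling multipliers from published tables;
                1.0 is the most favourable case, used here for simplicity.
        """
        lc = 23.0                                    # load constant, kg
        hm = min(1.0, 25.0 / max(h_cm, 25.0))        # horizontal multiplier
        vm = 1.0 - 0.003 * abs(v_cm - 75.0)          # vertical multiplier
        dm = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))  # distance multiplier
        am = 1.0 - 0.0032 * a_deg                    # asymmetry multiplier
        return lc * hm * vm * dm * am * fm * cm

    # Example: hands 40 cm in front of the ankles, lift origin at 75 cm,
    # 50 cm of vertical travel, no trunk twist.
    print(f"RWL = {revised_niosh_rwl(40, 75, 50, 0):.1f} kg")  # about 13 kg

Note how the location of the load enters directly: every multiplier except fm and cm is a function of where the load is held relative to the body, which is exactly the postural element referred to above.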

                   


                  Genetic Toxicology

                  Genetic toxicology, by definition, is the study of how chemical or physical agents affect the intricate process of heredity. Genotoxic chemicals are defined as compounds that are capable of modifying the hereditary material of living cells. The probability that a particular chemical will cause genetic damage inevitably depends on several variables, including the organism’s level of exposure to the chemical, the distribution and retention of the chemical once it enters the body, the efficiency of metabolic activation and/or detoxification systems in target tissues, and the reactivity of the chemical or its metabolites with critical macromolecules within cells. The probability that genetic damage will cause disease ultimately depends on the nature of the damage, the cell’s ability to repair or amplify genetic damage, the opportunity for expressing whatever alteration has been induced, and the ability of the body to recognize and suppress the multiplication of aberrant cells.

In higher organisms, hereditary information is organized in chromosomes. Chromosomes consist of tightly condensed strands of protein-associated DNA. Within a single chromosome, each DNA molecule exists as a pair of long, unbranched chains of nucleotide subunits linked together by phosphodiester bonds that join the 5′ carbon of one deoxyribose moiety to the 3′ carbon of the next (figure 1). In addition, one of four different nucleotide bases (adenine, cytosine, guanine or thymine) is attached to each deoxyribose subunit like beads on a string. Three-dimensionally, each pair of DNA strands forms a double helix with all of the bases oriented toward the inside of the spiral. Within the helix, each base is associated with its complementary base on the opposite DNA strand; hydrogen bonding dictates strong, noncovalent pairing of adenine with thymine and guanine with cytosine (figure 1). Since the sequence of nucleotide bases is complementary throughout the entire length of the duplex DNA molecule, both strands carry essentially the same genetic information. In fact, during DNA replication each strand serves as a template for the production of a new partner strand.
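Base-pair complementarity can be demonstrated in a few lines of code. The Python sketch below (an illustration added here, not part of the original article) derives the partner strand of a short sequence the same way the replication machinery does, by pairing adenine with thymine and guanine with cytosine and reading the result in the antiparallel direction.

    # Watson-Crick pairing: adenine-thymine, guanine-cytosine.
    PAIRING = str.maketrans("ACGT", "TGCA")

    def complementary_strand(sequence):
        """Return the antiparallel partner strand (reverse complement)."""
        return sequence.translate(PAIRING)[::-1]

    template = "ATGGCATTC"
    print(complementary_strand(template))  # GAATGCCAT

Because each strand determines the other completely, either strand alone suffices to reconstruct the whole duplex, which is the basis of both replication and the repair process described later in this article.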

                  Figure 1. The (a) primary, (b) secondary and (c) tertiary organization of human hereditary information

TOX090F1

Using RNA and an array of different proteins, the cell ultimately deciphers the information encoded by the linear sequence of bases within specific regions of DNA (genes) and produces proteins that are essential for basic cell survival as well as normal growth and differentiation. In essence, the nucleotides function like a biological alphabet which is used to code for amino acids, the building blocks of proteins.
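The alphabet metaphor can likewise be made concrete with a toy translation routine. In the sketch below, the codon table is deliberately trimmed to the five triplets used in the example; the real genetic code assigns an amino acid (or a stop signal) to all 64 possible triplets.

    # A deliberately trimmed codon table; the full genetic code has 64 entries.
    CODON_TABLE = {
        "ATG": "Met",   # methionine; also the start signal
        "AAA": "Lys",   # lysine
        "GAA": "Glu",   # glutamic acid
        "TGG": "Trp",   # tryptophan
        "TAA": "STOP",  # stop signal
    }

    def translate(coding_strand):
        """Read the coding strand three bases at a time until a stop codon."""
        peptide = []
        for i in range(0, len(coding_strand) - 2, 3):
            amino_acid = CODON_TABLE[coding_strand[i:i + 3]]
            if amino_acid == "STOP":
                break
            peptide.append(amino_acid)
        return peptide

    print(translate("ATGAAAGAATGGTAA"))  # ['Met', 'Lys', 'Glu', 'Trp']

A single inserted or deleted base would shift the three-letter reading frame of everything downstream, which is one reason the insertion and deletion mutations discussed below can be so disruptive.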

When incorrect nucleotides are inserted, nucleotides are lost, or unnecessary nucleotides are added during DNA synthesis, the mistake is called a mutation. It has been estimated that fewer than one mutation occurs for every 10⁹ nucleotides incorporated during the normal replication of cells; at that fidelity, copying the roughly 6 × 10⁹ nucleotides of a human diploid genome would introduce at most a few new mutations per cell division. Although mutations are not necessarily harmful, alterations causing inactivation or overexpression of important genes can result in a variety of disorders, including cancer, hereditary disease, developmental abnormalities, infertility and embryonic or perinatal death. Very rarely, a mutation can lead to enhanced survival; such occurrences are the basis of natural selection.

                  Although some chemicals react directly with DNA, most require metabolic activation. In the latter case, electrophilic intermediates such as epoxides or carbonium ions are ultimately responsible for inducing lesions at a variety of nucleophilic sites within the genetic material (figure 2). In other instances, genotoxicity is mediated by by-products of compound interaction with intracellular lipids, proteins, or oxygen.

                  Figure 2. Bioactivation of: a) benzo(a)pyrene; and b) N-nitrosodimethylamine

                  TOX090F2

                  Because of their relative abundance in cells, proteins are the most frequent target of toxicant interaction. However, modification of DNA is of greater concern due to the central role of this molecule in regulating growth and differentiation through multiple generations of cells.

                  At the molecular level, electrophilic compounds tend to attack oxygen and nitrogen in DNA. The sites that are most prone to modification are illustrated in figure 3. Although oxygens within phosphate groups in the DNA backbone are also targets for chemical modification, damage to bases is thought to be biologically more relevant since these groups are considered to be the primary informational elements in the DNA molecule.

                  Figure 3. Primary sites of chemically-induced DNA damage

                  TOX090F3

                  Compounds that contain one electrophilic moiety typically exert genotoxicity by producing mono-adducts in DNA. Similarly, compounds that contain two or more reactive moieties can react with two different nucleophilic centres and thereby produce intra- or inter-molecular crosslinks in genetic material (figure 4). Interstrand DNA-DNA and DNA-protein crosslinks can be particularly cytotoxic since they can form complete blocks to DNA replication. For obvious reasons, the death of a cell eliminates the possibility that it will be mutated or neoplastically transformed. Genotoxic agents can also act by inducing breaks in the phosphodiester backbone, or between bases and sugars (producing abasic sites) in DNA. Such breaks may be a direct result of chemical reactivity at the damage site, or may occur during the repair of one of the aforementioned types of DNA lesion.

                  Figure 4. Various types of damage to the protein-DNA complex

                  TOX090F4

                  Over the past thirty to forty years, a variety of techniques have been developed to monitor the type of genetic damage induced by various chemicals. Such assays are described in detail elsewhere in this chapter and Encyclopaedia.

Misreplication of “microlesions” such as mono-adducts, abasic sites or single-strand breaks may ultimately result in nucleotide base-pair substitutions, or the insertion or deletion of short polynucleotide fragments in chromosomal DNA. In contrast, “macrolesions”, such as bulky adducts, crosslinks or double-strand breaks, may trigger the gain, loss or rearrangement of relatively large pieces of chromosomes. In any case, the consequences can be devastating to the organism, since any one of these events can lead to cell death, loss of function or malignant transformation of cells. Exactly how DNA damage causes cancer is largely unknown. It is currently believed that the process may involve inappropriate activation of proto-oncogenes such as myc and ras, and/or inactivation of recently identified tumour suppressor genes such as p53. Abnormal expression of either type of gene abrogates normal cellular mechanisms for controlling cell proliferation and/or differentiation.

The preponderance of experimental evidence indicates that the development of cancer following exposure to electrophilic compounds is a relatively rare event. This can be explained, in part, by the cell’s intrinsic ability to recognize and repair damaged DNA, or by the failure of cells with damaged DNA to survive. During repair, the damaged base, nucleotide or short stretch of nucleotides surrounding the damage site is removed and, using the opposite strand as a template, a new piece of DNA is synthesized and spliced into place. To be effective, DNA repair must occur with great accuracy prior to cell division, before the damage can be propagated as a mutation.
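The logic of this repair cycle (excise the damaged stretch, then resynthesize it from the intact partner strand) can be sketched as follows. The lesion marker “X” and the fixed one-base flanks are simplifications chosen for illustration; real excision repair removes patches of varying length.

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def excision_repair(damaged, intact_partner):
        """Excise a damaged base (marked 'X') plus one base on each side,
        then resynthesize the patch from the intact antiparallel strand."""
        site = damaged.find("X")
        if site == -1:
            return damaged  # nothing to repair
        start, end = max(0, site - 1), min(len(damaged), site + 2)
        # The partner strand is antiparallel: reverse it so its indices
        # line up with the damaged strand, then complement the patch.
        aligned_template = intact_partner[::-1]
        patch = aligned_template[start:end].translate(COMPLEMENT)
        return damaged[:start] + patch + damaged[end:]

    damaged = "ATGXCATTC"  # 'X' marks a chemically modified base
    partner = "GAATGCCAT"  # the undamaged complementary strand
    print(excision_repair(damaged, partner))  # ATGGCATTC

The essential point the sketch captures is that accurate repair is possible only while an undamaged template survives; once both strands at a site are altered, or the cell divides first, the original information is lost.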

                  Clinical studies have shown that people with inherited defects in the ability to repair damaged DNA frequently develop cancer and/or developmental abnormalities at an early age (table 1). Such examples provide strong evidence linking accumulation of DNA damage to human disease. Similarly, agents that promote cell proliferation (such as tetradecanoylphorbol acetate) often enhance carcinogenesis. For these compounds, the increased likelihood of neoplastic transformation may be a direct consequence of a decrease in the time available for the cell to carry out adequate DNA repair.

                  Table 1. Hereditary, cancer-prone disorders that appear to involve defects in DNA repair

Syndrome | Symptoms | Cellular phenotype
Ataxia telangiectasia | Neurological deterioration; immunodeficiency; high incidence of lymphoma | Hypersensitivity to ionizing radiation and certain alkylating agents; dysregulated replication of damaged DNA (may indicate shortened time for DNA repair)
Bloom’s syndrome | Developmental abnormalities; lesions on exposed skin; high incidence of tumours of the immune system and gastrointestinal tract | High frequency of chromosomal aberrations; defective ligation of breaks associated with DNA repair
Fanconi’s anaemia | Growth retardation; high incidence of leukaemia | Hypersensitivity to crosslinking agents; high frequency of chromosomal aberrations; defective repair of crosslinks in DNA
Hereditary nonpolyposis colon cancer | High incidence of colon cancer | Defect in DNA mismatch repair (when insertion of wrong nucleotide occurs during replication)
Xeroderma pigmentosum | High incidence of epithelioma on exposed areas of skin; neurological impairment (in many cases) | Hypersensitivity to UV light and many chemical carcinogens; defects in excision repair and/or replication of damaged DNA

                   

                  The earliest theories on how chemicals interact with DNA can be traced back to studies conducted during the development of mustard gas for use in warfare. Further understanding grew out of efforts to identify anticancer agents that would selectively arrest the replication of rapidly dividing tumour cells. Increased public concern over hazards in our environment has prompted additional research into the mechanisms and consequences of chemical interaction with the genetic material. Examples of various types of chemicals which exert genotoxicity are presented in table 2.

                  Table 2. Examples of chemicals that exhibit genotoxicity in human cells

Class of chemical | Example | Source of exposure | Probable genotoxic lesion
Aflatoxins | Aflatoxin B1 | Contaminated food | Bulky DNA adducts
Aromatic amines | 2-Acetylaminofluorene | Environmental | Bulky DNA adducts
Aziridine quinones | Mitomycin C | Cancer chemotherapy | Mono-adducts, interstrand crosslinks and single-strand breaks in DNA
Chlorinated hydrocarbons | Vinyl chloride | Environmental | Mono-adducts in DNA
Metals and metal compounds | Cisplatin | Cancer chemotherapy | Both intra- and interstrand crosslinks in DNA
Metals and metal compounds | Nickel compounds | Environmental | Mono-adducts and single-strand breaks in DNA
Nitrogen mustards | Cyclophosphamide | Cancer chemotherapy | Mono-adducts and interstrand crosslinks in DNA
Nitrosamines | N-Nitrosodimethylamine | Contaminated food | Mono-adducts in DNA
Polycyclic aromatic hydrocarbons | Benzo(a)pyrene | Environmental | Bulky DNA adducts

                   
