April Weber

Saturday, 12 March 2011 17:34

Physical Safety Hazards

Climate, noise and vibration are common physical hazards in forestry work. Exposure to physical hazards varies greatly depending on the type of work and the equipment used. The following discussion concentrates on forest harvesting and considers manual, motor-manual (mostly chain-saw) and mechanized operations.

Manual Forest Work


Working outdoors, subject to climatic conditions, is both positive and negative for the forest worker. Fresh air and nice weather are good, but unfavourable conditions can create problems.

Working in a hot climate puts considerable strain on a forest worker engaged in heavy work. Among other things, the heart rate increases to keep the body temperature down, and sweating means loss of body fluids. During heavy work in high temperatures a worker may need to drink as much as 1 litre of water per hour to maintain the body's fluid balance.

In a cold climate the muscles function poorly. The risk of musculoskeletal injuries (MSI) and accidents increases. In addition, energy expenditure increases substantially, since it takes a lot of energy just to keep warm.

Rainy conditions, especially in combination with cold, mean a higher risk of accidents, since tools become more difficult to grasp. Rain also chills the body further.

Adequate clothing for different climatic conditions is essential to keep the forest worker warm and dry. In hot climates only light clothing is needed, and the difficulty is rather to provide enough protective clothing and footwear against thorns, whipping branches and irritating plants. Lodgings must have sufficient facilities for washing and drying clothes. Improved camp conditions have substantially reduced these problems for workers in many countries.

Setting limits for acceptable working weather based on temperature alone is very difficult. For one thing, the temperature varies considerably between different places in the forest. The effect on the person also depends on many other factors, such as humidity, wind and clothing.

Tool-related hazards

Noise, vibrations, exhaust gases and so on are seldom a problem in manual forest work. Shocks from hitting hard knots during delimbing with an axe or hitting stones when planting might create problems in elbows or hands.

Motor-Manual Forest Work

The motor-manual forest worker is one who works with hand-held machines such as chain-saws or power brush cutters and is exposed to the same climatic conditions as the manual worker. He or she therefore has the same need for adequate clothing and lodging facilities. A specific problem is the use of personal protective equipment in hot climates. But the worker is also subject to other specific hazards due to the machines he or she is working with.

Noise is a problem when working with a chain-saw, brush saw or the like. The noise level of most chain-saws used in regular forest work exceeds 100 dBA. The operator is exposed to this noise level for 2 to 5 hours daily. It is difficult to reduce the noise levels of these machines without making them too heavy and awkward to work with. The use of ear protectors is therefore essential. Still, many chain-saw operators suffer loss of hearing. In Sweden around 30% of chain-saw operators had a serious hearing impairment. Other countries report high but varying figures depending on the definition of hearing loss, the duration of exposure, the use of ear protectors and so on.
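To put these figures in perspective, a daily noise dose can be estimated. The sketch below uses the NIOSH recommended exposure limit (85 dBA over 8 hours with a 3-dB exchange rate) as an assumed benchmark; this criterion is introduced here for illustration and is not a figure from the text.

```python
# Hypothetical daily noise-dose estimate. Uses the NIOSH criterion
# (85 dBA reference level for 8 h, 3-dB exchange rate) as an assumed
# benchmark; these parameters are not taken from the text.

def allowed_hours(level_dba, ref_level=85.0, ref_hours=8.0, exchange=3.0):
    """Permissible daily exposure time (hours) at a given noise level."""
    return ref_hours / 2 ** ((level_dba - ref_level) / exchange)

def daily_dose_pct(level_dba, hours):
    """Noise dose as a percentage of the daily limit (100 = at the limit)."""
    return 100.0 * hours / allowed_hours(level_dba)

# A chain-saw at 100 dBA allows only 15 minutes of unprotected exposure:
print(allowed_hours(100))       # 0.25 (hours)
# 2 to 5 hours at 100 dBA is 8 to 20 times the daily limit:
print(daily_dose_pct(100, 2))   # 800.0
print(daily_dose_pct(100, 5))   # 2000.0
```

Under this criterion, ear protectors attenuating 20 to 25 dB would bring the level at the ear below the 85 dBA reference, which is why their use is described as essential.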

Hand-induced vibration is another problem with chain-saws. “White finger” disease has been a major problem for some forest workers operating chain-saws. The problem has been brought to a minimum with modern chain-saws. The use of efficient anti-vibration dampers (in cold climates combined with heated handles) has meant, for instance, that in Sweden the number of chain-saw operators suffering from white fingers has dropped to 7 or 8%, which corresponds to the overall figure for natural white fingers for all Swedes. Other countries report large numbers of workers with white finger, but these probably do not use modern, vibration-reduced chain-saws.

The problem is similar when using brush saws and pruning saws. These types of machines have not been under close study, since in most cases the time of exposure is short.

Recent research points to a risk of loss of muscle strength due to vibrations, sometimes even without white finger symptoms.

Machine Work

Exposure to unfavourable climatic conditions is easier to solve when machines have cabins. The cabin can be insulated from cold, provided with air-conditioning, dust filters and so on. Such improvements cost money, so in most older machines and in many new ones the operator is still exposed to cold, heat, rain and dust in a more or less open cabin.

Noise problems are solved in a similar manner. Machines used in cold climates such as the Nordic countries need efficient insulation against cold. They also most often have good noise protection, with noise levels down to 70 to 75 dBA. But machines with open cabins most often have very high noise levels (over 100 dBA).
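The gap between 70 to 75 dBA and over 100 dBA is larger than the numbers suggest, because the decibel scale is logarithmic. A small illustrative calculation using the standard decibel definition (the cabin levels are those quoted above):

```python
# Sound intensity scales as 10^(dB/10), so a gap of 25 to 30 dBA is a
# factor of several hundred in sound energy. The formula is the
# standard decibel definition; the levels are those quoted in the text.

def intensity_ratio(db_high, db_low):
    """Ratio of sound intensities for two sound pressure levels in dB."""
    return 10 ** ((db_high - db_low) / 10)

print(intensity_ratio(100, 75))  # roughly 316 times the sound energy
print(intensity_ratio(100, 70))  # 1000.0 times the sound energy
```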

Dust is a problem especially in hot and dry climates. A cabin well insulated against cold, heat or noise also helps keep out the dust. By using a slight overpressure in the cabin, the situation can be improved even more.

Whole-body vibration in forest machines can be induced by the terrain over which the machine travels, the movement of the crane and other moving parts of the machine, and the vibrations from the power transmission. A specific problem is the shock to the operator when the machine comes down from an obstacle such as a rock. Operators of cross-country vehicles, such as skidders and forwarders, often have problems with low-back pain. The vibrations also increase the risk of repetitive strain injuries (RSI) to the neck, shoulder, arm or hand. The vibrations increase strongly with the speed at which the operator drives the machine.

In order to reduce vibrations, machines in the Nordic countries use vibration-damping seats. Shocks from the crane can also be reduced by making its movements technically smoother and by using better working techniques; this also makes the machine and the crane last longer. An interesting new concept is the “Pendo cabin”, which hangs on its “ears” and is connected to the rest of the machine by only a stand. The cabin is sealed off from the noise sources and is easier to isolate from vibrations. The results are good.

Other approaches try to reduce the shocks that arise from driving over the terrain, by using “intelligent” wheels and power transmission. The main aim is to lower the environmental impact, but this also benefits the operator. Less expensive machines usually offer little reduction of noise, dust and vibration, and vibration may also be a problem in handles and controls.

When no engineering approaches to controlling the hazards are used, the only available solution is to reduce the hazards by lowering the time of exposure, for instance, by job rotation.

Ergonomic checklists have been designed and used successfully to evaluate forestry machines, to guide the buyer and to improve machine design (see Apud and Valdés 1995).

Combinations of Manual, Motor-Manual and Machine Work

In many countries, manual workers work together with or close to chain-saw operators or machines. The machine operator sits in a cabin or uses ear protectors and good protective equipment, but in most cases the manual workers are not protected. Safety distances to the machines are often not respected, resulting in a very high risk of accidents and of hearing damage to unprotected workers.

Job Rotation

All the above-described hazards increase with the duration of exposure. To reduce the problems, job rotation is the key, but care has to be taken not to merely change work tasks while in actuality maintaining the same type of hazards.



Saturday, 12 March 2011 17:22

Forest Fire Management and Control

The Relevance of Forest Fires

One important task for forest management is the protection of the forest resource base.

Of the many threats to the forest, fire is often the most dangerous, and it is also a real threat to the people living inside or adjacent to the forest area. Each year thousands of people lose their homes to wildfires, hundreds of people die in these fires, and tens of thousands of domestic animals perish. Fire destroys agricultural crops and leads to soil erosion, which in the long run is even more disastrous than the immediate losses: when heavy rains soak soil left barren by fire, huge mudslides or landslides can occur.

It is estimated that every year:

  • 10 to 15 million hectares of boreal or temperate forest burn.
  • 20 to 40 million hectares of tropical rain forest burn.
  • 500 to 1,000 million hectares of tropical and subtropical savannahs, woodlands and open forests burn.


More than 90% of all this burning is caused by human activity. Therefore, it is quite clear that fire prevention and control should receive top priority among forest management activities.

Risk Factors in Forest Fires

The following factors make fire-control work particularly difficult and dangerous:

  • excessive heat radiated by the fire (fires always occur during hot weather)
  • poor visibility (due to smoke and dust)
  • difficult terrain (fires always follow wind patterns and generally move uphill)
  • difficulty getting supplies to the fire-fighters (food, water, tools, fuel)
  • the frequent need to work at night (the easiest time to “kill” the fire)
  • impossibility of outrunning a fire during strong winds (fires move faster than any person can run)
  • sudden changes in the wind direction, so that no one can exactly predict the spread of the fire
  • stress and fatigue, causing people to make disastrous judgement errors, often with fatal results.


Activities in Forest Fire Management

The activities in forest fire management can be divided into three different categories with different objectives:

  • fire prevention (how to prevent fires from happening)
  • fire detection (how to report the fires as fast as possible)
  • fire suppression (the work to put out the fire, actually fighting the fire).


Occupational dangers

Fire prevention work is generally a very safe activity.

Fire detection safety is mostly a question of safe driving of vehicles, unless aircraft are used. Fixed-wing aircraft are especially vulnerable to the strong updraughts caused by hot air and gases. Each year dozens of air crews are lost to pilot error, especially in mountainous conditions.

Fire suppression, or actual fighting of the fire, is a very specialized operation. It has to be organized like a military operation, because negligence, non-obedience and other human errors may not only endanger the firefighter, but may also cause the death of many others as well as extensive property damage. The whole organization has to be clearly structured with good coordination between forestry staff, emergency services, fire brigades, police and, in large fires, the armed forces. There has to be a single line of command, centrally and onsite.

Fire suppression mostly involves the establishment or maintenance of a network of fire-breaks. These are typically 10- to 20-metre-wide strips cleared of all vegetation and burnable material. Accidents are mostly caused by cutting tools.

Major wildfires are, of course, the most hazardous, but similar problems arise with prescribed burning or “cold fires”, when mild burns are allowed to reduce the amount of inflammable material without damaging the vegetation. The same precautions apply in all cases.

Early intervention

Detecting the fire early, when it is still weak, will make its control easier and safer. Previously, detection was based on observations from the ground. Now, however, infrared and microwave equipment attached to aircraft can detect an early fire. The information is relayed to a computer on the ground, which can process it and give the precise location and temperature of the fire, even when there are clouds. This allows ground crews and/or smoke jumpers to attack the fire before it spreads widely.

Tools and equipment

Many rules apply to the firefighter, who may be a forest worker, a volunteer from the community, a government employee or a member of a military unit ordered to the area. The most important is: never go to fight a fire without your own personal cutting tool. The only way to escape the fire may be to use the tool to remove one of the components of the “fire triangle”, as shown in figure 1. The quality of that tool is critical: if it breaks, the firefighter may lose his or her life.

Figure 1. The fire triangle


Forest firefighter safety equipment is shown in figure 2.

Figure 2. Forest firefighter safety equipment


Terrestrial firefighting

The preparation of fire breaks during an actual fire is especially dangerous because of the urgency of controlling the advance of the fire. The danger may be multiplied by poor visibility and changing wind direction. In fighting fires with heavy smoke (e.g., peat-land fires), lessons learned from such a fire in Finland in 1995 include:

  • Only experienced and physically very fit people should be sent out in heavy smoke conditions.
  • Each person should have a radio to receive directions from a hovering aircraft.
  • Only people with breathing apparatus or gas masks should be included.



When an advancing fire threatens dwellings, the inhabitants may have to be evacuated. This presents an opportunity for thieves and vandals, and calls for diligent policing activities.

The most dangerous work task is setting backfires: hurriedly cutting a path through the trees and underbrush parallel to the advancing line of fire and setting it alight at just the right moment, so that a strong draught of air heading toward the advancing fire pulls the two fires together. The draught arises because the advancing fire draws oxygen in from all sides. If the timing fails, the whole crew will be engulfed in dense smoke and exhausting heat and then starved of oxygen. Only the most experienced people should set backfires, and they should prepare escape routes in advance to either side of the fire. Backfiring should always be practised before the fire season, including the use of equipment such as torches for lighting the backfire; ordinary matches are too slow.

As a last effort at self-preservation, a firefighter can scrape away all burnable material within a circle about 5 m in diameter, dig a pit in the centre, cover himself or herself with soil, and put soaked headgear or a soaked jacket over the head. Oxygen is often available only within 1 to 2 centimetres of ground level.

Water bombing by aircraft

The use of aircraft for fighting fires is not new (the dangers in aviation are described elsewhere in this Encyclopaedia). There are, however, some activities that are very dangerous for the ground crew in a forest fire. The first is related to the official sign language used in aircraft operations—this has to be practised during training.

The second is how to mark all areas where the aircraft will load water for its tanks. To make this operation as safe as possible, these areas should be marked off with floating buoys so that the pilot does not have to rely on guesswork.

The third important matter is to keep constant radio contact between the ground crew and the aircraft as it prepares to release its water. Releases from small heli-buckets of 500 to 800 litres are not particularly dangerous. Large helicopters, however, like the MI-6, carry 2,500 litres, while the C-120 aircraft takes 8,000 litres and the IL-76 can drop 42,000 litres in one sweep. If one of these big loads of water lands on crew members on the ground, the impact could kill them.

Training and organization

One essential requirement in firefighting is to bring together all firefighters, villagers and forest workers for joint firefighting exercises before the beginning of the fire season. This is the best way to ensure successful and safe firefighting. At the same time, all the work functions of the various levels of command should be practised in the field.

The selected fire chief and leaders should be the ones with the best knowledge of local conditions and of government and private organizations. It is obviously dangerous to assign somebody either too high up the hierarchy (no local knowledge) or too low down the hierarchy (often lacking authority).



Saturday, 12 March 2011 17:14

Tree Planting

Tree planting consists of putting seedlings or young trees into the soil. It is mainly done to regrow a forest after harvesting, to establish a woodlot or to change the use of a piece of land (e.g., from pasture to woodlot, or to control erosion on a steep slope). Planting projects can amount to several million plants. Projects may be executed by forest owners, private contractors, pulp and paper companies, government forest services, non-governmental organizations or cooperatives. In some countries, tree planting has become a veritable industry. Excluded here is the planting of large individual trees, which is considered more the domain of landscaping than of forestry.

The workforce includes the actual tree planters as well as tree nursery staff, workers involved in transporting and maintaining the plants, support and logistics (e.g., managing, cooking, driving and maintaining vehicles and so on) and quality control inspectors. Women comprise 10 to 15% of the tree-planter workforce. As an indication of the importance of the industry and the scale of activities in regions where forestry is of economic importance, the provincial government in Quebec, Canada, set an objective of planting 250 million seedlings in 1988.

Planting Stock

Several technologies are available to produce seedlings or small trees, and the ergonomics of tree planting will vary accordingly. Tree planting on flat land can be done by planting machines. The role of the worker is then limited to feeding the machine manually or merely to controlling quality. In most countries and situations, however, site preparation may be mechanized, but actual planting is still done manually.

In most reforestation, following a forest fire or clear cutting, for example, or in afforestation, seedlings varying from 25 to 50 cm in height are used. The seedlings are either bare-rooted or have been grown in containers. The most common containers in tropical countries are 600 to 1,000 cm3. Containers may be arranged in plastic or styrofoam trays which usually hold from 40 to 70 identical units. For some purposes, larger plants, 80 to 200 cm, may be needed. They are usually bare-rooted.

Tree planting is seasonal because it depends on rainy and/or cool weather; the season lasts 30 to 90 days in most regions. Although it may seem a minor seasonal occupation, tree planting must be considered a major long-term strategic activity, both for the environment and for revenue where forestry is an important industry.

Information presented here is based mainly on the Canadian experience, but many of the issues can be extrapolated to other countries with a similar geographical and economic context. Specific practices and health and safety considerations for developing countries are also addressed.

Planting Strategy

Careful evaluation of the site is important for setting adequate planting targets. A superficial evaluation can hide field difficulties that will slow down the planting and overburden the planters. Several strategies exist for planting large areas. One common approach is to have a team of 10 to 15 planters equally spaced in a row, who progress at the same pace; a designated worker then has the task of bringing in enough seedlings for the whole team, usually by means of small off-road vehicles. Another common method is to work with several pairs of planters, each pair being responsible for fetching and carrying its own small stock of plants. Experienced planters know how to space out their stock to avoid losing time carrying plants back and forth. Planting alone is not recommended.

Seedling Transport

Planting relies on the steady supply of seedlings to the planters. They are brought in several thousands at a time from the nurseries, on trucks or pick-ups as far as the road will go. The seedlings must be unloaded rapidly and watered regularly. Modified logging machinery or small off-road vehicles can be used to carry the seedlings from the main depot to the planting sites. Where seedlings have to be carried by workers, such as in many developing countries, the workload is very heavy. Suitable back-packs should be used to reduce fatigue and risk of injuries. Individual planters will carry from four to six trays to their respective lots. Since most planters are paid at a piece rate, it is important for them to minimize unproductive time spent travelling, or fetching or carrying seedlings.

Equipment and Tools

The typical equipment carried by a tree planter includes a planting shovel or a dibble (a slightly conical metal cylinder at the end of a stick, used to make holes closely fitting the dimensions of containerized seedlings), two or three plant container trays carried by a harness, and safety equipment such as toe-capped boots and protective gloves. When planting bare-rooted seedlings, a pail containing enough water to cover the seedling’s roots is used instead of the harness, and is carried by hand. Various types of tree-planting hoes are also widely used for bare-rooted seedlings in Europe and North America. Some planting tools are manufactured by specialized tool companies, but many are made in local shops or are intended for gardening and agriculture, and present some design deficiencies such as excess weight and improper length. The weight typically carried is presented in table 1.

Table 1. Typical load carried while planting.


 Item                                       Weight in kg
 Commercially available harness
 Three 45-seedling container trays, full    up to 12.3 (3.0 to 4.1 each)
 Typical planting tool (dibble)             1.7 to 3.1
 Total carried                              up to 16.8

Planting Cycle

One tree-planting cycle is defined as the series of steps necessary to put one seedling into the ground. Site conditions, such as slope, soil and ground cover, have a strong influence on productivity. In Canada the production of a planter can vary from 600 plants per day for a novice to 3,000 plants per day for an experienced individual. The cycle may be subdivided as follows:

Selection of a micro-site. This step is fundamental for the survival of the young trees and depends on several criteria taken into account by quality control inspectors, including distance from preceding plant and natural offspring, closeness to organic material, absence of surrounding debris and avoidance of dry or flooded spots. All these criteria must be applied by the planter for each and every tree planted, since their non-observance can lead to a financial penalty.

Ground perforation. A hole is made in the ground with the planting tool. Two operating modes are observed, depending on the type of handle and the length of the shaft. One consists of using the mass of the body applied to a step bar located at the lower extremity of the tool to force it into the ground, while the other one involves raising the tool at arm’s length and forcefully plunging it into the ground. To avoid soil particles falling into the hole when the tool is removed, planters have the habit of smoothing its walls either by turning the tool around its long axis with a movement of the hand, or by flaring it with a circular motion of the arm.

Insertion of the plant into the cavity. If the planter is not yet holding a seedling, he or she grabs one from the container, bends down, inserts it into the hole and straightens up. The plant must be straight, firmly inserted into the soil, and the roots must be completely covered. It is interesting to note here that the tool plays an important secondary role by supplying a support for the planter as he or she bends down and straightens up, thus relieving the back muscles. Back movements can be straight or flexed, depending on the length of the shaft and the type of handle.

Soil compaction. Soil is compacted around the newly planted seedling to set it in the hole and to eliminate air that could dry the roots. Even though a trampling action is recommended, a forceful stamping of the feet or heel is more often observed.

Moving to the next micro-site. The planter proceeds to the next micro-site, generally 1.8 m away. This distance is usually evaluated by sight by experienced planters. While proceeding to the site, he or she must identify hazards on the way, plan a path around them, or determine another evasive strategy. In figure 1, the planter in the foreground is about to insert the seedling in the hole. The planter in the background is about to make a hole with a straight-handle planting tool. Both carry the seedlings in containers attached to a harness. Seedlings and equipment can weigh up to 16.8 kg (see table 1). Also note that the planters are fully covered by clothes to protect themselves against insects and the sun.

Figure 1. Tree planters in action in Canada
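The productivity range quoted above (600 to 3,000 plants per day) implies very short planting cycles. A rough illustrative calculation, assuming a hypothetical 8-hour effective planting day (the day length is an assumption, not a figure from the text):

```python
# Implied planting-cycle times for the productivity range quoted in the
# text. The 8-hour effective planting day is an assumption made here
# purely for illustration.

EFFECTIVE_DAY_S = 8 * 3600  # assumed effective planting day, in seconds

def seconds_per_plant(plants_per_day):
    """Average time available for one full planting cycle."""
    return EFFECTIVE_DAY_S / plants_per_day

print(seconds_per_plant(600))    # 48.0 s per cycle for a novice
print(seconds_per_plant(3000))   # 9.6 s per cycle for an experienced planter
```

Under this assumption, an experienced planter completes the entire five-step cycle, including moving between micro-sites, in about ten seconds, which underlines how repetitive the work is.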


Hazards, Outcomes and Preventive Measures

Few studies worldwide have been devoted to the health and safety of tree planters. Although bucolic in appearance, tree planting carried out on an industrial basis can be strenuous and hazardous. In a pioneering study conducted by Smith (1987) in British Columbia, it was found that 90% of the 65 planters interviewed had suffered an illness, injury or accident during life-time tree-planting activities. In a similar study conducted by IRSST, the Quebec Institute of Occupational Health and Safety (Giguère et al. 1991, 1993), 24 out of 48 tree planters reported having suffered from a work-related injury during the course of their planting careers. In Canada, 15 tree planters died between 1987 and 1991 of the following work-related causes: road accidents (7), wild animals (3), lightning (2), lodging incidents (fire, asphyxia—2) and heat stroke (1).

Although scarce and conducted on a small number of workers, the few investigations of physiological indicators of physical strain (heart rate, blood haematology parameters, elevated serum enzymes activity) all concluded that tree planting is a highly strenuous occupation both in terms of cardiovascular and musculoskeletal strain (Trites, Robinson and Banister 1993; Robinson, Trites and Banister 1993; Giguère et al. 1991; Smith 1987). Banister, Robinson and Trites (1990) defined “tree-planter burnout”, a condition originating from haematological deficiency and characterized by the presence of lethargy, weakness and light-headedness similar to the “adrenal exhaustion syndrome” or “sport anaemia” developed by training athletes. (For data on workload in Chile, see Apud and Valdés 1995; for Pakistan, see Saarilahti and Asghar 1994).

Organizational factors. Long workdays, commuting and strict quality control, coupled with the piece-work incentive (which is a widespread practice among tree-planting contractors), may strain the physiological and psychological equilibrium of the worker and lead to chronic fatigue and stress (Trites, Robinson and Banister 1993). A good working technique and regular short pauses improve daily output and help to avoid burnout.

Accidents and injuries. Data presented in table 2 provide an indication of the nature and causes of accidents and injuries as they were reported by the tree-planter population participating in the Quebec study. The relative importance of accidents by body part affected shows that injuries to the lower extremities are more frequently reported than those to the upper extremities, if the percentages for knees, feet, legs and ankles are added together. The environmental setting is favourable to tripping and falling accidents. Injuries associated with forceful movements and lesions caused by tools, cutting scraps or soil debris are also of relevance.

Table 2. Frequency grouping of tree-planting accidents by body parts affected (in percentage of 122 reports by 48 subjects in Quebec).


 Body part    % total    Related causes
                         Falls, contact with tool, soil compaction
                         Equipment contact, biting and stinging insects, sunburn, chapping
                         Insects, insect repellent, twigs
                         Frequent bending, load carrying
                         Soil compaction, blisters
                         Chapping, scratches from contact with soil
                         Falls, contact with tool
                         Hidden rocks
                         Trips and falls, hidden obstacles, contact with tool

Source: Giguère et al. 1991, 1993.

A well-prepared planting site, free of bushes and obstacles, will speed up planting and reduce accidents. Scrap should be disposed of in piles instead of furrows to allow easy circulation of the planters on the site. Tools should have straight handles to avoid injuries and be of a contrasting colour so they are easy to see. Shoes or boots should be sturdy enough to protect the feet during repeated contact with the planting tool and while trampling the soil; sizes should be available for both male and female planters, and the soles should grip well on wet rocks and stumps. Gloves help reduce blistering and the cuts and bruises caused by inserting seedlings into the soil, and also make handling conifer or thorny seedlings more comfortable.

Camp life and outdoor work. In Canada and a number of other countries, planters often have to live in camps. Working in the open requires protection against the sun (sun glasses, hats, sun block) and against biting and stinging insects. Heat stress can also be significant, and prevention calls for the possibility of adjusting the work-rest regimen and the availability of potable liquids to avoid dehydration.

It is important to have first aid equipment and some of the personnel trained as paramedics. Training should include emergency treatment of heat stroke and allergy caused by the venom of wasps or snakes. Planters should be checked for tetanus vaccination and for allergy before being sent to remote sites. Emergency communication systems, evacuation procedures and assembly signal (in case of a forest fire, sudden wind or sudden thunderstorm, or the presence of dangerous wild animals and so on) are essential.

Chemical hazards. The use of pesticides and fungicides to protect the seedlings (during cultivation or storage) is a potential risk when handling freshly sprayed plants (Robinson, Trites and Banister 1993). Eye irritation may occur due to the constant need to apply insect-repelling lotions or sprays.

Musculoskeletal and physiological load. Although there is no specific epidemiological literature linking musculoskeletal problems and tree planting, the forceful movements associated with load carrying, as well as the range of postures and muscular work involved in the planting cycle, undoubtedly constitute risk factors, which are exacerbated by the repetitive nature of the work.

Extreme flexions and extensions of the wrists, in grabbing seedlings in the trays, for example, and shock transmission to the hands and arms occurring when the planting tool hits a hidden rock, are among the possible biomechanical hazards to the upper limbs. The overall weight carried, the frequency of lifting, the repetitive and physical nature of the work, especially the intensive muscular effort required when plunging the dibble into the ground, contribute to the muscular strain exerted on upper limbs.

Low-back problems could be related to the frequency of bending. Handling of seedling trays (3.0 to 4.1 kg each when full) when unloading delivery trucks is also a potential risk. Carrying loads with harnesses, especially if the weight is not properly distributed on the shoulders and around the waist, is also likely to engender back pain.

The muscular load on lower limbs is obviously extensive. Walking several kilometres a day while carrying a load on rough terrain, sometimes going uphill, can rapidly become strenuous. Additionally, the work involves frequent knee flexions, and the feet are used continuously. Most tree planters use their feet to clear local debris with a lateral movement before making a hole. They also use their feet in putting weight on the tool’s footrest to aid penetration into the soil and to compact the soil around the seedling after it has been inserted.

Prevention of musculoskeletal strain relies on the minimization of carried loads, in terms of weight, frequency and distance, in conjunction with the optimization of working postures, which implies proper working tools and practices.

If seedlings must be carried in a pail, for instance, water can be replaced by wet peat moss to reduce carried weight. In Chile, replacing heavy wooden boxes for carrying seedlings by lighter cardboard ones increased output by 50% (Apud and Valdés 1995). Tools also have to be well adapted to the job. Replacing a pickaxe and shovel with a specially designed pick-hoe reduced workload by 50% and improved output by up to 100% in reforestation in Pakistan (Saarilahti and Asghar 1994). The weight of the planting tool is also crucial. For example, in a field survey of planting tools conducted in Quebec, variations ranged from 1.7 to 3.1 kg, meaning that choosing the lightest model may save 1,400 kg of lifted weight daily based on 1,000 lifts per day.
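As a quick check on the Quebec figure, the saving can be computed directly; the helper function below is purely illustrative, with the 1,000 lifts per day taken from the survey cited above.

```python
# Illustrative arithmetic for the Quebec survey figure: choosing the 1.7 kg
# planting tool over the 3.1 kg one, at 1,000 lifts per day, saves about
# 1,400 kg of lifted weight daily. The helper function is hypothetical.

def daily_weight_saved(heavy_kg, light_kg, lifts_per_day):
    """Total weight (kg) no longer lifted per day with the lighter tool."""
    return round((heavy_kg - light_kg) * lifts_per_day, 1)

print(daily_weight_saved(3.1, 1.7, 1000))  # 1400.0
```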

Planting tools with long, straight handles are preferred since if the tool hits a hidden rock, the hand will slip on the handle instead of absorbing the shock. A smooth, tapered handle allows an optimum grip for a greater percentage of the population. The Forest Engineering Research Institute of Canada recommends adjustable tools with shock-absorbing properties, but reports that none were available at the time of their 1988 survey (Stjernberg 1988).

Planters should also be educated about optimal working postures. Using the body weight to insert the dibble instead of using muscular effort, avoiding back twisting or exertion of the arms while they are fully extended, avoiding planting downhill and using the planting tool as a support when bending, for example, can all help minimize musculoskeletal strain. Novice planters should not be paid piece rate until they are fully trained.



Saturday, 12 March 2011 17:08

Harvesting of Non-Wood Forest Products

Operational Environment

There are many hazards associated with the harvest of non-wood forest products because of the wide variety of the products themselves. To define these hazards better, non-wood products may be grouped by category, with a few representative examples; the hazards associated with their harvest can then be more easily identified (see table 1).

Table 1.  Non-wood forest product categories and examples.



Food products: animal products, bamboo shoots, berries, beverages, forage, fruits, herbs, mushrooms, nuts, oils, palm hearts, roots, seeds, starches

Chemical and pharmacological products and derivatives: aromatics, gums and resins, latex and other exudates, medicinal extracts, tans and dyes, toxins

Decorative materials: bark, foliage, flowers, grasses, potpourri

Non-wood fibre for plaiting, structural purposes and padding: bamboo, bark, cork, kapok, palm leaves, rattan, reeds, thatching grasses


Non-wood products are harvested for several reasons (subsistence, commercial or hobby/recreational purposes) and for a range of needs. This in turn affects the relative hazard associated with their collection. For example, the hobbyist mushroom picker is much less likely to remain in the open risking exposure to severe climatic conditions than is the commercial picker, dependent on picking for income and competing for a limited supply of seasonally available mushrooms.

The scale of non-wood harvesting operations is variable, with associated positive and negative effects on potential hazards. By its nature, non-wood harvesting is often a small, subsistence or entrepreneurial effort. The safety of the lone worker in remote locations is more problematic than that of the non-isolated worker, and individual experience affects the degree of risk. In an emergency, direct intervention by outside sources of safety and health advice may be required. Certain specific non-wood products have, however, been significantly commercialized, even lending themselves to plantation cultivation; bamboo, mushrooms, gum naval stores, certain nuts and rubber are just a few examples. Commercialized operations, in theory, are more likely to provide and emphasize systematic health and safety information in the course of work.

Collectively, the listed products, the forest environment in which they exist and the methods required to harvest them can be linked with certain inherent health and safety hazards. These hazards are quite elementary because they derive from very common actions, such as climbing, cutting with hand tools, digging, gathering, picking and manual transport. In addition, harvest of a certain food product might include exposure to biological agents (a poisonous plant surface or poisonous snake), biomechanical hazards (e.g., due to a repetitive movement or carrying a heavy load), climatological conditions, safety hazards from tools and techniques (such as a laceration due to careless cutting technique) and other hazards (perhaps due to difficult terrain, river crossings or working off the ground).

Because non-wood products often do not lend themselves to mechanization, and because mechanization is frequently cost-prohibitive, there is a disproportionate emphasis on manual harvesting, or on draught animals for harvest and transport, compared with other industries.

Hazard Control and Prevention

A special word about cutting operations is warranted, since cutting is arguably the most recognizable and common source of hazard associated with the harvest of non-wood forest products. Potential cutting hazards are linked to appropriate tool selection and tool quality, size/type of the cut required, the force needed to make the cut, positioning of the worker and worker attitude.

In general, cutting hazards can be reduced or mitigated by:

  • direct training for the work tasks: proper tool selection, tool maintenance and sharpening, and training of the worker with respect to proper biomechanical technique
  • training in work organization: job planning, safety/hazard assessment, site preparation and continual worker awareness with respect to work task and surroundings.


The goal of successful training in work technique and philosophy should be: implementation of proper work planning and precautionary measures, hazard recognition, active hazard avoidance and minimization of injury in the event of accident.

Factors Related to Harvesting Hazards

Because non-wood harvesting, by its nature, occurs in the open, subject to changing weather conditions and other natural factors, and because it is predominantly non-mechanized, workers are particularly subject to the environmental effects of geography, topography, climate and season. Combined with considerable physical effort and fatigue, weather conditions can contribute to work-related health problems and accidents (see table 2).

Table 2.  Non-wood harvesting hazards and examples.

Biological agents: bites and stings (external vector, systemic poisons); plant contact (external vector, topical poisons); ingestion (internal vector, systemic poisons)

Biomechanical action: improper technique or repetitive-use injury related to bending, carrying, cutting, lifting, loading

Climatological conditions: excessive heat and cold effects, either externally induced (environment) or due to work effort

Tools and techniques: cuts, mechanical hazards, draught animal handling, small vehicle operation

Other hazards: altercation, animal attack, difficult terrain, fatigue, loss of orientation, working at heights, working in remote locations, working on or crossing waterways


Non-wood harvesting operations tend to be in remote areas. This poses a hazard in itself, because medical care is not close at hand in the event of an accident. This would not be expected to increase accident frequency, but it certainly may increase the potential severity of any injury.



Saturday, 12 March 2011 17:00

Timber Transport

Timber transport provides the link between forest harvesting and the mill. This operation is of great economic importance: in the northern hemisphere it accounts for 40 to 60% of the total wood procurement cost at the mill (excluding stumpage), and in the tropics the proportion is even higher. The basic factors affecting timber transport include: the size of the operation; the geographic locations of the forest and the mill, as well as the distance between them; the assortment of timber for which the mill is designed; and the kinds of transportation that are available and suitable. The main timber assortments are full trees with branches, delimbed tree lengths, long logs (typically 10 to 16 m in length), shortwood (typically 2 to 6 m logs), chips and hog fuel. Many mills can accept varied assortments of timber; some can accept only specific types, for example shortwood by road. Transport can be by road, rail, ship, floating down a waterway or, depending on the geography and the distance, various combinations of these. Road transport by truck, however, has become the primary form of timber transportation.

In many cases timber transport, especially road transport, is an integrated part of the harvesting operation. Thus, any problem in timber transport may stop the entire harvesting operation. The time pressure can lead to a demand for overtime work and a tendency to cut corners that may compromise the workers’ safety.

Both forest harvesting and timber transport are often contracted out. Particularly when there are multiple contractors and subcontractors, there may be a question of who has the responsibility for protecting particular workers’ safety and health.

Timber Handling and Loading

When circumstances permit, timber may be loaded directly onto trucks at the stump, eliminating the need for a separate forest transport phase. When distances are short, forest transport equipment (e.g., an agricultural tractor with a trailer or semi-trailer) may convey the timber directly to the mill. Normally, however, the timber is first taken to the forest roadside landing for long-distance transport.

Manual loading is often practised in developing countries and in poorly capitalized operations. Small logs can be lifted and the large ones rolled with the help of ramps (see figure 1). Simple hand tools like hooks, levers, sappies, pulleys and so on may be used, and draught animals may be involved.

Figure 1.  Manual loading (with and without ramps).


In most instances, however, loading is mechanized, usually with swing-boom, knuckle-boom or front-end loaders. Swing-boom and knuckle-boom loaders may be mounted on wheeled or tracked carriers or on trucks, and are usually equipped with grapples. Front-end loaders usually have forks or grapples and are mounted on crawler tractors or articulated four-wheel-drive tractors. In semi-mechanized loading, logs may be lifted or rolled up the loading skids by cables and different kinds of tractors and winches (see figure 2). Semi-mechanized loading often requires workers to be on the ground attaching and releasing cables, guiding the load and so on, often using hooks, levers and other hand tools. In chipping operations, the chipper usually blows the chips directly into the truck, trailer or semi-trailer.

Figure 2.  Mechanized and semi-mechanized loading.


Landing Operations

Landings are busy, noisy places where many different operations are conducted simultaneously. Depending on the harvesting system, these include loading and unloading, delimbing, debarking, bucking, sorting, storing and chipping. One or more large machines may be moving and operating at the same time while chain saws are in use close by. During and after rain, snow and frost, the logs may be very slippery and the ground may be very muddy and slick. The area may be littered with debris, and in dry weather it may be very dusty. Logs may be stored in unsecured piles several metres high. All this makes the landing one of the most dangerous working areas in the forestry industry.

Road Transport

Road transport of timber is carried out by vehicles whose size depends on the dimensions of the timber, road conditions and traffic regulations, and the availability of capital to purchase or lease the equipment. Two- or three-axle trucks with a carrying capacity of 5 to 6 tonnes are commonly used in tropical countries. In Scandinavia, for example, the typical logging truck is a 4-axle truck with a 3-axle trailer, or vice versa, with a carrying capacity of 20 to 22 tonnes. On private roads in North America, one can encounter rigs with a total weight of 100 to 130 tonnes or more.

Water Transport

The use of waterways for timber transport has been declining as road transport has been increasing, but it still remains important in Canada, the United States, Finland and Russia in the northern hemisphere, in the watersheds of the Amazon, Paraguay and Parana rivers in Latin America, in many rivers and lakes in Western Africa and in most countries in Southeast Asia.

In mangrove and tidal forests, water transport usually starts directly from the stump; otherwise the logs have to be transported to the waterfront, usually by truck. Loose logs or bundles can be drifted downstream in rivers. They can be bound into rafts which can be towed or pushed in rivers, lakes and along coasts, or they may be loaded onto boats and barges of varying size. Ocean-going ships play a large role in the international timber trade.

Rail Transport

In North America and in the tropics, railway transport, like water transport, is giving way to road transport. However, it remains very important in countries like Canada, Finland, Russia and China, where there are good railway networks with suitable intermediate landing areas. In some large-scale operations, temporary narrow-gauge railways may be used. The timber may be carried in standard freight cars, or specially constructed timber-carrying cars may be used. In some terminals, large fixed cranes may be used for loading and unloading, but, as a rule, the loading methods described above are used.


Loading and unloading, which sometimes must be done several times as timber travels from the forest to where it will be used, are often particularly hazardous operations in the timber industry. Even in fully mechanized operations, workers on foot using hand tools may be involved and may be at risk. Some larger operators and contractors recognize this, maintain their equipment properly and provide their workers with personal protective equipment (PPE) such as shoes, gloves, helmets, glasses and noise protectors. Even then, trained and diligent supervisors are required to ensure that safety concerns are not overlooked. Safety often becomes problematic in smaller operations, particularly in developing countries. (For an example, see figure 3, which shows workers without PPE loading logs in Nigeria.)

Figure 3.  Logging operations in Nigeria with unprotected workers.




Saturday, 12 March 2011 16:50

Wood Harvesting

The present article draws heavily on two publications: FAO 1996 and FAO/ILO 1980. This article is an overview; numerous other references are available. For specific guidance on preventive measures, see ILO 1998.

Wood harvesting is the preparation of logs in a forest or tree plantation according to the requirements of a user, and the delivery of logs to a consumer. It includes the cutting of trees, their conversion into logs, extraction and long-distance transport to a consumer or processing plant. The terms forest harvesting, wood harvesting or logging are often used synonymously. Long-distance transport and the harvesting of non-wood forest products are dealt with in separate articles in this chapter.


While many different methods are used for wood harvesting, they all involve a similar sequence of operations:

  • tree felling: severing a tree from the stump and bringing it down
  • topping and debranching (delimbing): cutting off the unusable tree crown and the branches
  • debarking: removing the bark from the stem; this operation is often done at the processing plant rather than in the forest; in fuelwood harvesting it is not done at all
  • extraction: moving the stems or logs from the stump to a place close to a forest road where they can be sorted, piled and often stored temporarily, awaiting long-distance transport
  • log making/cross-cutting (bucking): cutting the stem to the length specified by the intended use of the log
  • scaling: determining the quantity of logs produced, usually by measuring volume (for small dimension timber also by weight; the latter is common for pulpwood; weighing is done at the processing plant in that case)
  • sorting, piling and temporary storage: logs are usually of variable dimensions and quality, and are therefore classified into assortments according to their potential use as pulpwood, sawlogs and so on, and piled until a full load, usually a truckload, has been assembled; the cleared area where these operations, as well as scaling and loading, take place is called a “landing”
  • loading: moving the logs onto the transport medium, typically a truck, and attaching the load.


These operations are not necessarily carried out in the above sequence. Depending on the forest type, the kind of product desired and the technology available, it may be more advantageous to carry out an operation either earlier (i.e., closer to the stump) or later (i.e., at the landing or even at the processing plant). One common classification of harvesting methods is based on distinguishing between:

  • full-tree systems, where trees are extracted to the roadside, the landing or the processing plant with the full crown
  • short-wood systems, where topping, debranching and cross-cutting is done close to the stump (logs are usually not longer than 4 to 6 m)
  • tree-length systems, where tops and branches are removed before extraction.


The most important group of harvesting methods for industrial wood is based on tree length. Short-wood systems are standard in northern Europe and also common for small-dimension timber and fuelwood in many other parts of the world. Their share is likely to increase. Full-tree systems are the least common in industrial wood harvesting, and are used in only a limited number of countries (e.g., Canada, the Russian Federation and the United States). There they account for less than 10% of volume. The importance of this method is diminishing.

For work organization, safety analysis and inspection, it is useful to conceive of three distinct work areas in a wood harvesting operation:

  1. the felling site or stump
  2. the forest terrain between the stump and the forest road
  3. the landing.


It is also worthwhile to examine whether the operations take place largely independently in space and time or whether they are closely related and interdependent. The latter is often the case in harvesting systems where all steps are synchronized. Any disturbance thus disrupts the entire chain, from felling to transport. These so-called hot-logging systems can create extra pressure and strain if not carefully balanced.

The stage in the life cycle of a forest during which wood harvesting takes place, and the harvesting pattern, will affect both the technical process and its associated hazards. Wood harvesting occurs either as thinning or as final cut. Thinning is the removal of some, usually undesirable, trees from a young stand to improve the growth and quality of the remaining trees. It is usually selective (i.e., individual trees are removed without creating major gaps). The spatial pattern generated is similar to that in selective final cutting. In the latter case, however, the trees are mature and often large. Even so, only some of the trees are removed and a significant tree cover remains. In both cases orientation on the worksite is difficult because remaining trees and vegetation block the view. It can be very difficult to bring trees down because their crowns tend to be intercepted by the crowns of remaining trees. There is a high risk of falling debris from the crowns. Both situations are difficult to mechanize. Thinning and selective cutting therefore require more planning and skill to be done safely.

The alternative to selective felling for final harvest is the removal of all trees from a site, called “clear cutting”. Clearcuts can be small, say 1 to 5 hectares, or very large, covering several square kilometres. Large clearcuts are severely criticized on environmental and scenic grounds in many countries. Whatever the pattern of the cut, harvesting old growth and natural forest usually involves greater risk than harvesting younger stands or human-made forests because trees are large and have tremendous inertia when falling. Their branches may be intertwined with the crowns of other trees and climbers, causing them to break off branches of other trees as they fall. Many trees are dead or have internal rot which may not be apparent until late in the felling process. Their behaviour during felling is often unpredictable. Rotten trees may break off and fall in unexpected directions. Unlike green trees, dead and dry trees, called snags in North America, fall quickly.

Technological developments

Technological development in wood harvesting has been very rapid over the second half of the 20th century. Average productivity has been soaring in the process. Today, many different harvesting methods are in use, sometimes side by side in the same country. An overview of systems in use in Germany in the mid-1980s, for example, describes almost 40 different configurations of equipment and methods (Dummel and Branz 1986).

While some harvesting methods are technologically far more complex than others, no single method is inherently superior. The choice will usually depend on the customer specifications for the logs, on forest conditions and terrain, on environmental considerations, and often decisively on cost. Some methods are also technically limited to small and medium-size trees and relatively gentle terrain, with slopes not exceeding 15 to 20°.

Cost and performance of a harvesting system can vary over a wide range, depending on how well the system fits the conditions of the site and, equally important, on the skill of the workers and how well the operation is organized. Hand tools and manual extraction, for example, make perfect economic and social sense in countries with high unemployment, low labour and high capital cost, or in small-scale operations. Fully mechanized methods can achieve very high daily outputs but involve large capital investments. Modern harvesters under favourable conditions can produce upwards of 200 m3 of logs per 8-hour day. A chain-saw operator is unlikely to produce more than 10% of that. A harvester or big cable yarder costs around US$500,000 compared to US$1,000 to US$2,000 for a chain-saw and US$200 for a good quality cross-cut handsaw.
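For illustration, the output and capital figures above can be turned into a rough capital cost per cubic metre. The machine-life and working-day assumptions in the sketch are hypothetical, and labour, fuel, interest and repair costs are ignored.

```python
# Rough comparison of harvester vs chain-saw work, using the figures in the
# text (about 200 m3/day for a modern harvester under favourable conditions;
# a chain-saw operator at roughly 10% of that). Machine life and working days
# per year are hypothetical assumptions; only capital cost is considered.

HARVESTER_OUTPUT_M3 = 200.0
CHAINSAW_OUTPUT_M3 = 0.10 * HARVESTER_OUTPUT_M3  # roughly 20 m3/day

def capital_cost_per_m3(price_usd, life_years, days_per_year, output_m3_per_day):
    """Capital cost per cubic metre, ignoring interest, labour, fuel and repairs."""
    lifetime_output_m3 = life_years * days_per_year * output_m3_per_day
    return price_usd / lifetime_output_m3

# Assumed lives: 5 years for the harvester, 2 years for the chain-saw.
harvester_cost = capital_cost_per_m3(500_000, 5, 200, HARVESTER_OUTPUT_M3)
chainsaw_cost = capital_cost_per_m3(1_500, 2, 200, CHAINSAW_OUTPUT_M3)
```

On these assumptions the chain-saw is far cheaper per cubic metre in purely capital terms, which is why the choice between systems hinges on labour costs, site conditions and work organization rather than on capital cost alone.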

Common Methods, Equipment and Hazards

Felling and preparation for extraction

This stage includes felling and removal of crown and branches; it may include debarking, cross-cutting and scaling. It is one of the most hazardous industrial occupations. Hand tools and chain-saws or machines are used in felling and debranching trees and crosscutting trees into logs. Hand tools include cutting tools such as axes, splitting hammers, bush hooks and bush knives, and hand saws such as cross-cut saws and bow saws. Chain-saws are widely used in most countries. In spite of major efforts and progress by regulators and manufacturers to improve chain-saws, they remain the single most dangerous type of machine in forestry. Most serious accidents and many health problems are associated with their use.

The first activity to be carried out is felling, or severing the tree from the stump as close to the ground as conditions permit. The lower part of the stem is typically the most valuable part, as it combines high volume with knot-free wood of even texture. It should therefore not split, and no fibre should be torn out of the butt. Controlling the direction of the fall is important, not only to protect the tree and those to be left standing, but also to protect the workers and to make extraction easier. In manual felling, this control is achieved by a special sequence and configuration of cuts.

The standard method for chain-saws is depicted in figure 1. After determining the felling direction (1) and clearing the tree’s base and escape routes, sawing starts with the undercut (2), which should penetrate approximately one-fifth to one-quarter of the diameter into the tree. The opening of the undercut should be at an angle of about 45°. The oblique cut (3) is made prior to the horizontal cut (4), which must meet the oblique cut in a straight line facing the felling direction at a 90° angle. If stumps are liable to tear splinters from the tree, as is common with softer woods, the undercut should be terminated with small lateral cuts (5) on both sides of the hinge (6). The back cut (7) must also be horizontal. It should be made 2.5 to 5 cm higher than the base of the undercut. If the tree’s diameter is smaller than the guide bar, the back cut can be made in a single movement (8). Otherwise, the saw must be moved several times (9). The standard method is used for trees with a butt diameter of more than 15 cm. The standard technique is modified if trees have one-sided crowns, are leaning in one direction or have a diameter more than twice the length of the chain-saw blade. Detailed instructions are included in FAO/ILO (1980) and many other training manuals for chain-saw operators.

Figure 1.  Chain-saw felling: Sequence of cuts.
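The rule-of-thumb dimensions described above can be collected into a small illustrative helper. This is an assumption-laden sketch, not a substitute for chain-saw operator training.

```python
# Illustrative helper encoding the rule-of-thumb dimensions from the text:
# undercut depth of one-fifth to one-quarter of the butt diameter, an undercut
# opening of about 45 degrees, and a back cut 2.5 to 5 cm above the undercut
# base. A sketch only; real felling decisions require trained operators.

def felling_cut_guide(butt_diameter_cm):
    """Indicative cut dimensions (cm and degrees) for the standard felling method."""
    return {
        "undercut_depth_min_cm": butt_diameter_cm / 5.0,  # one-fifth of diameter
        "undercut_depth_max_cm": butt_diameter_cm / 4.0,  # one-quarter of diameter
        "undercut_opening_deg": 45.0,                     # opening angle of undercut
        "back_cut_height_min_cm": 2.5,                    # above undercut base
        "back_cut_height_max_cm": 5.0,
    }

guide = felling_cut_guide(40.0)  # e.g., a 40 cm butt diameter
```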


Using standard methods, skilled workers can fell a tree with a high degree of precision. Trees that have symmetrical crowns or those leaning a little in a direction other than the intended direction of fall may not fall at all or may fall at an angle from the intended direction. In these cases, tools such as felling levers for small trees or hammers and wedges for big trees need to be used to shift the tree’s natural centre of gravity in the desired direction.

Except for very small trees, axes are not suitable for felling and cross-cutting. With handsaws the process is relatively slow, and errors can be detected and corrected. With chain-saws cuts are fast, and the noise blocks out the warning signals from the tree, such as the sound of fibre breaking before the tree falls. If the tree starts to fall but is intercepted by other trees, a “hang-up” results; this is extremely dangerous and must be dealt with immediately and professionally. Turning hooks and levers for smaller trees, and manual or tractor-mounted winches for larger trees, are used to bring hung-up trees down effectively and safely.

Hazards involved with felling include falling or rolling trees; falling or snapping branches; cutting tools; and, with chain-saws, noise, vibration and exhaust gases. Windfall is especially hazardous because of wood under tension and partially severed root systems; hung-up trees are a frequent cause of severe and fatal accidents. All workers involved in felling should have received specific training. Tools for felling and for dealing with hung-up trees need to be on site. Hazards associated with cross-cutting include the cutting tools as well as snapping wood and rolling stems or bolts, particularly on slopes.

Once a tree has been brought down, it is usually topped and debranched. In the majority of cases, this is still done with hand tools or chain-saws at the stump. Axes can be very effective for debranching. Where possible, trees are felled across a stem already on the ground. This stem thus serves as a natural workbench, raising the tree to be debranched to a more convenient height and allowing for complete debranching without having to turn the tree. The branches and the crown are cut from the stem and left on the site. The crowns of large, broad-leaved trees may have to be cut into smaller pieces or pulled aside because they would otherwise obstruct extraction to the roadside or landing.

Hazards involved with debranching include cuts with tools or chain-saws; high risk of chain-saw kick-back (see figure 2); snapping branches under tension; rolling logs; trips and falls; awkward work postures; and static work load if poor technique is used.

Figure 2. Chain-saw Kick-back.


In mechanized operations, the directional fall is achieved by holding the tree with a boom mounted on a sufficiently heavy base machine, and cutting the stem with a shear, circular saw or chain-saw integrated into the boom. To do this, the machine has to be driven rather close to the tree to be felled. The tree is then lowered into the desired direction by movements of the boom or of the base of the machine. The most common types of machines are feller-bunchers and harvesters.

Feller-bunchers are mostly mounted on machines with tracks, but they can also be equipped with tyres. The felling boom usually allows them to fell and collect a number of small trees (a bunch), which is then deposited along a skid trail. Some have a clam bunk to collect a load. When feller-bunchers are used, topping and debranching are usually done by machines at the landing.


With good machine design and careful operation, accident risk with feller-bunchers is relatively low, except when chain-saw operators work along with the machine. Health hazards, such as vibration, noise, dust and fumes, are significant, since base machines often are not built for forestry purposes. Feller-bunchers should not be used on excessive slopes, and the boom should not be overloaded, as felling direction becomes uncontrollable.

Harvesters are machines which integrate all felling operations except debarking. They usually have six to eight wheels, hydraulic traction and suspension, and articulated steering. They have booms with a reach of 6 to 10 m when loaded. A distinction is made between one-grip and two-grip harvesters. One-grip harvesters have one boom with a felling head fitted with devices for felling, debranching, topping and cross-cutting. They are used for small trees up to 40 cm butt diameter, mostly in thinnings but increasingly also in final cutting. A two-grip harvester has separate felling and processing heads. The latter is mounted on the base machine rather than on the boom. It can handle trees up to a stump diameter of 60 cm. Modern harvesters have an integrated, computer-assisted measuring device that can be programmed to make decisions about optimum cross-cutting depending on the assortments needed.

Harvesters are the dominant technology in large-scale harvesting in northern Europe, but presently account for a rather small share of harvesting worldwide. Their importance is, however, likely to rise fast as second growth, human-made forests and plantations become more important as sources of raw material.

Accident rates in harvester operation are typically low, though accident risk rises when chain-saw operators work along with harvesters. Maintenance of harvesters is hazardous; repairs are always under high work pressure, increasingly at night; there is high risk of slipping and falling, uncomfortable and awkward working postures, heavy lifting, contact with hydraulic oils and hot oils under pressure. The biggest hazards are static muscle tension and repetitive strain from operating controls and psychological stress.


Extraction

Extraction involves moving the stems or logs from the stump to a landing or roadside where they can be processed or piled into assortments. Extraction can be very heavy and hazardous work. It can also inflict substantial environmental damage on the forest and its regeneration, on soils and on watercourses. The major types of extraction systems commonly recognized are:

  • ground-skidding systems: The stems or logs are dragged on the ground by machines, draught animals or humans.
  • forwarders: The stems or logs are carried on a machine (in the case of fuelwood, also by humans).
  • cable systems: The logs are conveyed from the stump to the landing by one or more suspended cables.
  • aerial systems: Helicopters or balloons are used to airlift the logs.


Ground skidding, by far the most important extraction system both for industrial wood and fuelwood, is usually done with wheeled skidders specially designed for forestry operations. Crawler tractors and, especially, farm tractors can be cost-effective in small private forests or for the extraction of small trees from tree plantations, but adaptations are needed to protect both the operators and the machines. Tractors are less robust, less well balanced and less protected than purpose-built machines. As with all machines used in forestry, hazards include overturning, falling objects, penetrating objects, fire, whole-body vibration and noise. All-wheel drive is preferable, and a minimum of 20% of the machine weight should be maintained as load on the steered axle during operation, which may require attaching additional weight to the front of the machine. The engine and transmission may need extra mechanical protection. Minimum engine power should be 35 kW for small-dimension timber; 50 kW is usually adequate for normal-size logs.
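As a quick illustration of the 20% axle-load rule above (the 6-tonne machine weight is a hypothetical figure, not from the source):

```latex
m_{\text{steered axle}} \;\ge\; 0.20 \times m_{\text{machine}}
\qquad\text{e.g.}\qquad
0.20 \times 6\,000~\text{kg} \;=\; 1\,200~\text{kg}
```

If the tractor carries less than this on the steered axle when pulling a load, front ballast must make up the difference.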

Grapple skidders drive directly to the individual or the pre-bunched stems, lift the front end of the load and drag it to the landing. Skidders with cable winches can operate from skid roads. Their loads are usually assembled through chokers, straps, chains or short cables that are attached to individual logs. A choker setter prepares the logs to be hooked up and, when the skidder returns from the landing, a number of chokers are attached to the main line and winched into the skidder. Most skidders have an arch onto which the front end of the load can be lifted to reduce friction during skidding. When skidders with powered winches are used, good communication between crew members through two-way radios or optical or acoustic signals is essential. Clear signals need to be agreed upon; any signal that is not understood means “Stop!”. Figure 3 shows proposed hand signals for skidders with powered winches.

Figure 3.  International conventions for hand signals to be used for skidders with powered winches.


As a rule of thumb, ground skidding equipment should not be used on slopes of more than 15°. Crawler tractors may be used to extract large trees from relatively steep terrain, but they can cause substantial damage to soils if used carelessly. For environmental and safety reasons, all skidding operations should be suspended during exceptionally wet weather.

Extraction with draught animals is an economically viable option for small logs, particularly in thinning operations. Skidding distances must be short (typically 200 m or less) and slopes gentle. It is important to use appropriate harnesses providing maximum pulling power, and devices like skidding pans, sulkies or sledges that reduce skidding resistance.

Manual skidding is increasingly rare in industrial logging but continues to be practised in subsistence logging, particularly for fuelwood. It is limited to short distances and usually downhill, making use of gravity to move logs. While logs are typically small, this is very heavy work and can be hazardous on steep slopes. Efficiency and safety can be increased by using hooks, levers and other hand tools for lifting and pulling logs. Chutes, traditionally made from timber but also available as polyethylene half-tubes, can be an alternative to manual ground skidding of short logs in steep terrain.

Forwarders are extraction machines that carry a load of logs completely off the ground, either within their own frame or on a trailer. They usually have a mechanical or hydraulic crane for self-loading and unloading of logs. They tend to be used in combination with mechanized felling and processing equipment. The economic extraction distance is 2 to 4 times that of ground-skidders. Forwarders work best when logs are approximately uniform in size.

Accidents involving forwarders are typically similar to those of tractors and other forestry machines: overturning, penetrating and falling objects, electric power lines and maintenance problems. Health hazards include vibration, noise and hydraulic oils.

Carrying loads manually is still practised for short logs like pulpwood or pit props in some industrial harvesting, and is the rule in fuelwood harvesting. Loads often exceed all recommended limits, particularly for women, who are frequently responsible for fuelwood gathering. Training in proper techniques that avoid extreme strain on the spine, and devices like backpacks that distribute the weight better, would ease their burden.

Cable extraction systems are fundamentally different from other extraction systems in that the machine itself does not travel. Logs are conveyed with a carriage moving along suspended cables. The cables are operated by a winching machine, also referred to as a yarder or hauler. The machine is installed either at the landing or at the opposite end of the cableway, often on a ridgetop. The cables are suspended above the ground on one or more “spars”, which may be either trees or steel towers. Many different types of cable systems are in use. Skylines or cable cranes have a carriage that can be moved along the mainline, and the cable can be released to allow lateral pulling of logs to the line, before they are lifted and forwarded to the landing. If the system permits full suspension of the load during hauling, soil disturbance is minimal. Because the machine is fixed, cable systems can be used in steep terrain and on wet soils. Cable systems in general are substantially more expensive than ground skidding and require careful planning and skilled operators.

Hazards occur during installation, operation and dismantling of the cable system, and include mechanical impact by deformation of the cabin or stand; breaking of cables, anchors, spars or supports; inadvertent or uncontrollable movements of cables, carriages, chokers and loads; and squeezes, abrasions and so on from moving parts. Health hazards include noise, vibration and awkward working postures.

Aerial extraction systems are those which fully suspend logs in the air throughout the extraction process. The two types currently in use are balloon systems and helicopters, but only helicopters are widely used. Helicopters with a lifting capacity of about 11 tonnes are commercially available. The loads are suspended under the helicopter on a tether line (also called “tagline”). The tether lines are typically between 30 and 100 m long, depending both upon topography and the height of trees above which the helicopter must hover. The loads are attached with long chokers and are flown to the landing, where the chokers are released by remote control from the aircraft. When large logs are being extracted, an electrically operated grapple system may be used instead of chokers. Round-trip times are typically two to five minutes. Helicopters have a very high direct cost, but can also achieve high production rates and reduce or eliminate the need for expensive road construction. They also cause low environmental impact. In practice their use is limited to high-value timber in otherwise inaccessible regions or other special circumstances.

Because of the high production rates required to make the use of such equipment economical, the number of workers employed on helicopter operations is much larger than for other systems. This is true at landings, but also for workers in cutting operations. Helicopter logging can create major safety problems, including fatalities, if precautions are disregarded and crews are ill prepared.

Log making and loading

Log making, if it takes place at the landing, is mostly done by chain-saw operators. It can also be carried out by a processor (i.e., a machine that delimbs, tops and cuts to length). Scaling is mostly done manually using measuring tape. For sorting and piling, logs are usually handled by machines like skidders, which use their front blade to push and lift logs, or by grapple loaders. Helpers with hand tools like levers often assist the machine operators. In fuelwood harvesting or where small logs are involved, loading onto trucks is usually done manually or by using a small winch. Loading large logs manually is very arduous and dangerous; these are usually handled by grapple or knuckle boom loaders. In some countries the logging trucks are equipped for self-loading. The logs are secured on the truck by lateral supports and cables that can be pulled tight.

In manual loading of timber, physical strain and workloads are extremely high. In both manual and mechanized loading, there is danger of getting hit by moving logs or equipment. Mechanized loading hazards include noise, dust, vibration, high mental workload, repetitive strain, overturning, penetrating or falling objects and hydraulic oils.

Standards and Regulations

At present most international safety standards applicable to forestry machinery are general—for example, roll-over protection. However, work is under way on specialized standards at the International Organization for Standardization (ISO). (See the article “Rules, legislation, regulations and codes of forest practice” in this chapter.)

Chain-saws are one of the few pieces of forestry equipment for which specific international regulations on safety features exist. Various ISO norms are relevant. They were incorporated and supplemented in 1994 in European Norm EN 608, Agricultural and forestry machinery: Portable chain-saws—Safety. This standard contains detailed indications on design features. It also stipulates that manufacturers are required to provide comprehensive instructions and information on all aspects of operator/user maintenance and the safe use of the saw. This is to include safety clothing and personal protective equipment requirements as well as the need for training. All saws sold within the European Union have to be marked “Warning, see instruction handbook”. The standard lists the items to be included in the handbook.

Forestry machines are less well covered by international standards, and there is often no specific national regulation about required safety features. Forestry machines may also have significant ergonomic deficiencies. These play a major role in the development of serious health complaints among operators. In other cases, machines have a good design for a particular worker population, but are less suitable when imported into countries where workers have different body sizes, communication routines and so on. In the worst case machines are stripped of essential safety and health features to reduce prices for exports.

In order to guide testing organizations and those responsible for machine acquisition, specialized ergonomic checklists have been developed in various countries. Checklists usually address the following machine characteristics:

  • access and exit areas like steps, ladders and doors
  • cabin space and position of controls
  • seat, arms, back and footrest of operator’s chair
  • visibility when performing main operations
  • “worker-machine interface”: type and arrangement of indicators and controls of machine functions
  • physical environment, including vibration, noise, gases and climatic factors
  • safety, including roll-over, penetrating objects, fire and so on
  • maintenance.


Specific examples of such checklists can be found in Golsse (1994) and Apud and Valdés (1995). Recommendations for machines and equipment as well as a list of existing ILO standards are included in ILO 1998.



Saturday, 12 March 2011 16:38

General Profile

Forestry—A Definition

For the purposes of the present chapter, forestry is understood to embrace all the fieldwork required to establish, regenerate, manage and protect forests and to harvest their products. The last step in the production chain covered by this chapter is the transport of raw forest products. Further processing, such as into sawnwood, furniture or paper, is dealt with in the Lumber, Woodworking and Pulp and Paper Industries chapters of this Encyclopaedia.

The forests may be natural, human-made or tree plantations. Forest products considered in this chapter are both wood and other products, but emphasis is on the former, because of its relevance for safety and health.

Evolution of the Forest Resource and the Sector

The utilization and management of forests are as old as humanity itself. Initially forests were almost exclusively used for subsistence: food, fuelwood and building materials. Early management consisted mostly of burning and clearing to make room for other land uses—in particular, agriculture, but later also for settlements and infrastructure. The pressure on forests was aggravated by early industrialization. The combined effect of conversion and over-utilization was a sharp reduction in forest area in Europe, the Middle East, India, China and later in parts of North America. Presently, forests cover about one-quarter of the land surface of the earth.

The deforestation process has come to a halt in industrialized countries, and forest areas are actually increasing in these countries, albeit slowly. In most tropical and subtropical countries, however, forests are shrinking at a rate of 15 to 20 million hectares (ha), or 0.8%, per year. In spite of continuing deforestation, developing countries still account for about 60% of the world forest area, as can be seen in table 1. The countries with the largest forest areas by far are the Russian Federation, Brazil, Canada and the United States. Asia has the lowest forest cover in terms of percentage of land area under forest and hectares per capita.

Table 1.  Forest area by region (1990).

[Table values not recoverable from the source. Columns: area (million hectares) and % of total; rows cover North/Central America, South America, the former USSR and other regions, with totals for industrialized and developing countries.]

Source: FAO 1995b.

Forest resources vary significantly in different parts of the world. These differences have a direct impact on the working environment, on the technology used in forestry operations and on the level of risk associated with them. Boreal forests in northern parts of Europe, Russia and Canada are mostly made up of conifers and have a relatively small number of trees per hectare. Most of these forests are natural. Moreover, the individual trees are small in size. Because of the long winters, trees grow slowly and wood increment ranges from less than 0.5 to 3 m3/ha/y.

The temperate forests of southern Canada, the United States, Central Europe, southern Russia, China and Japan are made up of a wide range of coniferous and broad-leaved tree species. Tree densities are high and individual trees can be very large, with diameters of more than 1 m and tree height of more than 50 m. Forests may be natural or human-made (i.e., intensively managed with more uniform tree sizes and fewer tree species). Standing volumes per hectare and increment are high. The latter range typically from 5 to greater than 20 m3/ha/y.

Tropical and subtropical forests are mostly broad-leaved. Tree sizes and standing volumes vary greatly, but tropical timber harvested for industrial purposes is typically in the form of large trees with big crowns. Average dimensions of harvested trees are highest in the tropics, with logs of more than 2 m3 being the rule. Standing trees with crowns routinely weigh more than 20 tonnes before felling and debranching. Dense undergrowth and tree climbers make work even more cumbersome and dangerous.

An increasingly important type of forest in terms of wood production and employment is tree plantations. Tropical plantations are thought to cover about 35 million hectares, with about 2 million hectares added per year (FAO 1995). They usually consist of only one very fast growing species. Increment mostly ranges from 15 to 30 m3/ha/y. Various pines (Pinus spp.) and eucalyptus (Eucalyptus spp.) are the most common species for industrial uses. Plantations are managed intensively and in short rotations (from 6 to 30 years), while most temperate forests take 80, sometimes up to 200 years, to mature. Trees are fairly uniform, and small to medium in size, with approximately 0.05 to 0.5 m3/tree. There is typically little undergrowth.

Prompted by wood scarcity and natural disasters like landslides, floods and avalanches, more and more forests have come under some form of management over the last 500 years. Most industrialized countries apply the “sustained yield principle”, according to which present uses of the forest may not reduce its potential to produce goods and benefits for later generations. Wood utilization levels in most industrialized countries are below the growth rates. This is not true for many tropical countries.

Economic Importance

Globally, wood is by far the most important forest product. World roundwood production is approaching 3.5 billion m3 annually. Wood production grew by 1.6% a year in the 1960s and 1970s and by 1.8% a year in the 1980s, and is projected to increase by 2.1% a year well into the 21st century, with much higher rates in developing countries than in industrialized ones.

Industrialized countries’ share of world roundwood production is 42% (i.e., roughly proportional to the share of forest area). There is, however, a major difference in the nature of the wood products harvested in industrialized and in developing countries. While in the former more than 85% consists of industrial roundwood to be used for sawnwood, panel or pulp, in the latter 80% is used for fuelwood and charcoal. This is why the list of the ten biggest producers of industrial roundwood in figure 1  includes only four developing countries. Non-wood forest products are still very significant for subsistence in many countries. They account for only 1.5% of traded unprocessed forest products, but products like cork, rattan, resins, nuts and gums are major exports in some countries.

Figure 1.  Ten biggest producers of industrial roundwood, 1993 (former USSR 1991).


Worldwide, the value of production in forestry was US$96,000 million in 1991, compared to US$322,000 million in downstream forest-based industries. Forestry alone accounted for 0.4% of world GDP. The share of forestry production in GDP tends to be much higher in developing countries, with an average of 2.2%, than in industrialized ones, where it represents only 0.14% of GDP. In a number of countries forestry is far more important than the averages suggest. In 51 countries the forestry and forest-based industries sector combined generated 5% or more of the respective GDP in 1991.

In several industrialized and developing countries, forest products are a significant export. The total value of forestry exports from developing countries increased from about US$7,000 million in 1982 to over US$19,000 million in 1993 (1996 dollars). Large exporters among industrialized countries include Canada, the United States, Russia, Sweden, Finland and New Zealand. Among tropical countries Indonesia (US$5,000 million), Malaysia (US$4,000 million), Chile and Brazil (about US$2,000 million each) are the most important.

While it cannot readily be expressed in monetary terms, the value of the non-commercial goods and benefits generated by forests may well exceed that of their commercial output. According to estimates, some 140 to 300 million people live in or depend on forests for their livelihood. Forests are also home to three-quarters of all species of living beings. They are a significant sink of carbon dioxide and serve to stabilize climates and water regimes. They reduce erosion, landslides and avalanches, and produce clean drinking water. They are also fundamental for recreation and tourism.


Employment

Figures on wage employment in forestry are difficult to obtain and can be unreliable even for industrialized countries. The reasons are the high share of self-employed workers and farmers, who in many cases are not recorded, and the seasonality of many forestry jobs. Statistics in most developing countries simply absorb forestry into the much larger agricultural sector, with no separate figures available. The biggest problem, however, is that most forestry work is not wage employment but subsistence. The main item here is the production of fuelwood, particularly in developing countries. Bearing these limitations in mind, figure 2 below provides a very conservative estimate of global forestry employment.

Figure 2.  Employment in forestry (full-time equivalents).


World wage employment in forestry is in the order of 2.6 million, of which about 1 million is in industrialized countries. This is a fraction of the downstream employment: wood industries and pulp and paper have at least 12 million employees in the formal sector. The bulk of forestry employment is unpaid subsistence work—some 12.8 million full-time equivalents in developing and some 0.3 million in industrialized countries. Total forestry employment can thus be estimated at some 16 million person years. This is equivalent to about 3% of world agricultural employment and to about 1% of total world employment.
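The components of this estimate add up as stated (all figures in millions of full-time equivalents, as given above):

```latex
\underbrace{2.6}_{\text{wage employment}}
\;+\; \underbrace{12.8}_{\text{subsistence, developing}}
\;+\; \underbrace{0.3}_{\text{subsistence, industrialized}}
\;=\; 15.7 \;\approx\; 16
```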


In most industrialized countries the size of the forestry workforce has been shrinking. This is a result of a shift from seasonal to full-time, professional forest workers, compounded by rapid mechanization, particularly of wood harvesting. Figure 3 illustrates the enormous differences in productivity among major wood-producing countries. These differences are to some extent due to natural conditions, silvicultural systems and statistical error. Even allowing for these, significant gaps persist. The transformation of the workforce is likely to continue: mechanization is spreading to more countries, and new forms of work organization, notably teamwork concepts, are boosting productivity, while harvesting levels remain by and large constant. It should be noted that in many countries seasonal and part-time work in forestry go unrecorded, but remain very common among farmers and small woodland owners. In a number of developing countries the industrial forestry workforce is likely to grow as a result of more intensive forest management and tree plantations. Subsistence employment, on the other hand, is likely to decline gradually, as fuelwood is slowly replaced by other forms of energy.

Figure 3.  Countries with highest wage employment in forestry and industrial roundwood production (late 1980s to early 1990s).


Characteristics of the Workforce

Industrial forestry work has largely remained a male domain. The proportion of women in the formal workforce rarely exceeds 10%. There are, however, jobs that tend to be predominantly carried out by women, such as planting or tending of young stands and raising seedlings in tree nurseries. In subsistence employment women are a majority in many developing countries, because they are usually responsible for fuelwood gathering.

The largest share of all industrial and subsistence forestry work is related to the harvesting of wood products. Even in human-made forests and plantations, where substantial silvicultural work is required, harvesting accounts for more than 50% of the workdays per hectare. In harvesting in developing countries the ratios of supervisor/technician to foremen and to workers are 1 to 3 and 1 to 40, respectively. The ratio is smaller in most industrialized countries.

Broadly, there are two groups of forestry jobs: those related to silviculture and those related to harvesting. Typical occupations in silviculture include tree planting, fertilization, weed and pest control, and pruning. Tree planting is very seasonal, and in some countries involves a separate group of workers exclusively dedicated to this activity. In harvesting, the most common occupations are chain-saw operation, in tropical forests often with an assistant; choker setters who attach cables to tractors or skylines pulling logs to roadside; helpers who measure, move, load or debranch logs; and machine operators for tractors, loaders, cable cranes, harvesters and logging trucks.

There are major differences between segments of the forestry workforce with respect to the form of employment, which have a direct bearing on their exposure to safety and health hazards. The share of forest workers directly employed by the forest owner or industry has been declining even in those countries where it used to be the rule. More and more work is done through contractors (i.e., relatively small, geographically mobile service firms employed for a particular job). The contractors may be owner-operators (i.e., single-person firms or family businesses) or they have a number of employees. Both the contractors and their employees often have very unstable employment. Under pressure to cut costs in a very competitive market, contractors sometimes resort to illegal practices such as moonlighting and hiring undeclared immigrants. While the move to contracting has in many cases helped to cut costs, to advance mechanization and specialization as well as to adjust the workforce to changing demands, some traditional ailments of the profession have been aggravated through the increased reliance on contract labour. These include accident rates and health complaints, both of which tend to be more frequent among contract labour.

Contract labour has also contributed to further increasing the high rate of turnover in the forestry workforce. Some countries report rates of almost 50% per year for those changing employers and more than 10% per year leaving the forestry sector altogether. This aggravates the skill problem already looming large among much of the forestry workforce. Most skill acquisition is still by experience, usually meaning trial and error. Lack of structured training, and short periods of experience due to high turnover or seasonal work, are major contributing factors to the significant safety and health problems facing the forestry sector (see the article “Skills and training” in this chapter).

The dominant wage system in forestry by far continues to be piece-rates (i.e., remuneration solely based on output). Piece-rates tend to lead to a rapid pace of work and are widely believed to increase the number of accidents. There is, however, no scientific evidence to back this contention. One undisputed side effect is that earnings fall once workers have reached a certain age because their physical abilities decline. In countries where mechanization plays a major role, time-based wages have been on the increase, because the work rhythm is largely determined by the machine. Various bonus wage systems are also in use.

Forestry wages are generally well below the industrial average in the same country. Workers, the self-employed and contractors often try to compensate by working 50 or even 60 hours per week. Such situations increase strain on the body and the risk of accidents because of fatigue.

Organized labour and trade unions are rather rare in the forestry sector. The traditional problems of organizing geographically dispersed, mobile, sometimes seasonal workers have been compounded by the fragmentation of the workforce into small contractor firms. At the same time, the number of workers in categories that are typically unionized, such as those directly employed in larger forest enterprises, is falling steadily. Labour inspectorates attempting to cover the forestry sector are faced with problems similar in nature to those of trade union organizers. As a result there is very little inspection in most countries. In the absence of institutions whose mission is to protect worker rights, forest workers often have little knowledge of their rights, including those laid down in existing safety and health regulations, and experience great difficulties in exercising such rights.

Health and Safety Problems

The popular notion in many countries is that forestry work is a 3-D job: dirty, difficult and dangerous. A host of natural, technical and organizational factors contribute to that reputation. Forestry work has to be done outdoors. Workers are thus exposed to the extremes of weather: heat, cold, snow, rain and ultraviolet (UV) radiation. Work often proceeds even in bad weather and, in mechanized operations, increasingly continues at night. Workers are exposed to natural hazards such as broken terrain or mud, dense vegetation and a series of biological agents.

Worksites tend to be remote, with poor communication and difficulties in rescue and evacuation. Life in camps with extended periods of isolation from family and friends is still common in many countries.

The difficulties are compounded by the nature of the work—trees may fall unpredictably, dangerous tools are used and there is often a heavy physical workload. Other factors like work organization, employment patterns and training also play a significant role in increasing or reducing the hazards associated with forestry work. In most countries the net result of the above influences is very high accident risk and serious health problems.

Fatalities in Forest Work

In most countries forest work is one of the most dangerous occupations, with great human and financial losses. In the United States accident insurance costs amount to 40% of payroll.

A cautious interpretation of the available evidence suggests that accident trends are more often upward than downward. Encouragingly, there are countries that have a long-standing record in bringing down accident frequencies (e.g., Sweden and Finland). Switzerland represents the more common situation of increasing, or at best stagnating, accident rates. The scarce data available for developing countries indicate little improvement and usually excessively high accident levels. A study of safety in pulpwood logging in plantation forests in Nigeria, for example, found that on average a worker had 2 accidents per year. Between 1 in 4 and 1 in 10 workers suffered a serious accident in a given year (Udo 1987).

A closer inspection of accidents reveals that harvesting is far more hazardous than other forest operations (ILO 1991). Within forest harvesting, tree felling and cross-cutting are the jobs with the most accidents, particularly serious or fatal ones. In some countries, such as in the Mediterranean area, firefighting can also be a major cause of fatalities, claiming up to 13 lives a year in Spain in some years (Rodero 1987). Road transport can also account for a large share of serious accidents, particularly in tropical countries.

The chain-saw is clearly the single most dangerous tool in forestry, and the chain-saw operator the most exposed worker. The situation depicted in figure 4 for Sarawak, Malaysia, is found with minor variations in most other countries as well. In spite of increasing mechanization, the chain-saw is likely to remain the key problem in industrialized countries. In developing countries, its use can be expected to expand as plantations account for an increasing share of the wood harvest.

Figure 4.  Distribution of logging fatalities among jobs, Malaysia (Sarawak), 1989.


Virtually all parts of the body can be injured in forest work, but there tends to be a concentration of injuries to the legs, feet, back and hands, in roughly that order. Cuts and open wounds are the most common type of injury in chain-saw work while bruises dominate in skidding, but there are also fractures and dislocations.

Two situations under which the already high risk of serious accidents in forest harvesting multiplies severalfold are “hung-up” trees and wind-blown timber. Hung-up trees are those that have been severed from the stump but have not fallen to the ground because the crown became entangled with other trees. They are extremely dangerous, and are referred to as “widow-makers” in some countries because of the high number of fatalities they cause. Windblow tends to produce timber under tension, which requires specially adapted cutting techniques (for guidance see FAO/ECE/ILO 1996a; FAO/ILO 1980; and ILO 1998). Aid tools, such as turning hooks and winches, are required to bring hung-up trees down safely. In no case should it be permitted that other trees be felled onto a hung-up one in the hope of bringing it down. This practice, known as “driving” in some countries, is extremely hazardous.

Accident risks vary not only with technology and exposure due to the job, but with other factors as well. In almost all cases for which data are available, there is a very significant difference between segments of the workforce. Full-time, professional forest workers directly employed by a forest enterprise are far less affected than farmers, the self-employed or contract labour. In Austria, farmers seasonally engaged in logging suffer twice as many accidents per million cubic metres harvested as professional workers (Sozialversicherung der Bauern 1990); in Sweden, the ratio is even four to one. In Switzerland, workers employed in public forests have only half as many accidents as those employed by contractors; among contractors, risks are highest where workers are hired only seasonally and in the case of migrant labour (Wettmann 1992).

The increasing mechanization of tree harvesting has had very positive consequences for work safety. Machine operators are well protected in guarded cabins, and accident risks have dropped very significantly. Machine operators harvesting the same volume of timber experience less than 15% of the accidents of chain-saw operators; in Sweden, they have one-quarter of the accidents of professional chain-saw operators.

Growing Occupational Disease Problems

The reverse side of the mechanization coin is an emerging problem of neck and shoulder strain injuries among machine operators. These can be as incapacitating as serious accidents.

The above problems add to the traditional health complaints of chain-saw operators—namely, back injuries and hearing loss. Back pain due to physically heavy work and unfavourable working postures is very common among chain-saw operators and workers doing manual loading of logs. There is a high incidence of premature loss of working capacity and of early retirement among forest workers as a result. A traditional ailment of chain-saw operators that has largely been overcome in recent years through improved saw design is vibration-induced “white finger” disease.

The physical, chemical and biological hazards causing health problems in forestry are discussed in the following articles of this chapter.

Special Risks for Women

Safety risks are by and large the same for men and women in forestry. Women are often involved in planting and tending work, including the application of pesticides. However, because women on average have smaller body size, lung volume, heart and muscles, their work capacity is on average about one-third lower than that of men. Correspondingly, legislation in many countries limits the weight to be lifted and carried by women to about 20 kg (ILO 1988), although such sex-based differences in exposure limits are illegal in other countries. These limits are often exceeded by women working in forestry. Studies among planting workers in British Columbia, where separate standards do not apply, showed that the full loads of plants carried by men and women averaged 30.5 kg, often in steep terrain with heavy ground cover (Smith 1987).

Excessive loads are also common in many developing countries where women work as fuelwood carriers. A survey in Addis Ababa, Ethiopia, for example, found that an estimated 10,000 women and children eke out a livelihood from hauling fuelwood into town on their backs (see figure 5 ). The average bundle weighs 30 kg and is carried over a distance of 10 km. The work is highly debilitating and results in numerous serious health complaints, including frequent miscarriages (Haile 1991).

Figure 5.  Woman fuelwood carrier, Addis Ababa, Ethiopia.


The relationship between the specific working conditions in forestry, workforce characteristics, form of employment, training and other similar factors and safety and health in the sector has been a recurrent theme of this introductory article. In forestry, even more than in other sectors, safety and health cannot be analysed, let alone promoted, in isolation. This theme will also be the leitmotiv for the remainder of the chapter.




Thursday, 10 March 2011 17:54

Occupational Exposure Limits

The History of Occupational Exposure Limits

Over the past 40 years, many organizations in numerous countries have proposed occupational exposure limits (OELs) for airborne contaminants. The limits or guidelines that have gradually become the most widely accepted both in the United States and in most other countries are those issued annually by the American Conference of Governmental Industrial Hygienists (ACGIH), which are termed threshold limit values (TLVs) (LaNier 1984; Cook 1986; ACGIH 1994).

The usefulness of establishing OELs for potentially harmful agents in the working environment has been demonstrated repeatedly since their inception (Stokinger 1970; Cook 1986; Doull 1994). The contribution of OELs to the prevention or minimization of disease is now widely accepted, but for many years such limits did not exist, and even when they did, they were often not observed (Cook 1945; Smyth 1956; Stokinger 1981; LaNier 1984; Cook 1986).

It was well understood as long ago as the fifteenth century that airborne dusts and chemicals could bring about illness and injury, but the concentrations and lengths of exposure at which this might be expected to occur were unclear (Ramazzini 1700).

As reported by Baetjer (1980), “early in this century when Dr. Alice Hamilton began her distinguished career in occupational disease, no air samples and no standards were available to her, nor indeed were they necessary. Simple observation of the working conditions and the illness and deaths of the workers readily proved that harmful exposures existed. Soon however, the need for determining standards for safe exposure became obvious.”

The earliest efforts to set an OEL were directed to carbon monoxide, the toxic gas to which more persons are occupationally exposed than to any other (for a chronology of the development of OELs, see figure 1). The work of Max Gruber at the Hygienic Institute at Munich was published in 1883. The paper described exposing two hens and twelve rabbits to known concentrations of carbon monoxide for up to 47 hours over three days; he stated that “the boundary of injurious action of carbon monoxide lies at a concentration in all probability of 500 parts per million, but certainly (not less than) 200 parts per million”. In arriving at this conclusion, Gruber had also inhaled carbon monoxide himself. He reported no symptoms or uncomfortable sensations after three hours on each of two consecutive days at concentrations of 210 parts per million and 240 parts per million (Cook 1986).

Figure 1. Chronology of occupational exposure limits (OELs).


The earliest and most extensive series of animal experiments on exposure limits were those conducted by K.B. Lehmann and others under his direction. In a series of publications spanning 50 years they reported on studies on ammonia and hydrogen chloride gas, chlorinated hydrocarbons and a large number of other chemical substances (Lehmann 1886; Lehmann and Schmidt-Kehl 1936).

Kobert (1912) published one of the earlier tables of acute exposure limits. Concentrations for 20 substances were listed under the headings: (1) rapidly fatal to man and animals, (2) dangerous in 0.5 to one hour, (3) 0.5 to one hour without serious disturbances and (4) only minimal symptoms observed. In his paper “Interpretations of permissible limits”, Schrenk (1947) notes that the “values for hydrochloric acid, hydrogen cyanide, ammonia, chlorine and bromine as given under the heading ‘only minimal symptoms after several hours’ in the foregoing Kobert paper agree with values as usually accepted in present-day tables of MACs for reported exposures”. However, values for some of the more toxic organic solvents, such as benzene, carbon tetrachloride and carbon disulphide, far exceeded those currently in use (Cook 1986).

One of the first tables of exposure limits to originate in the United States was that published by the US Bureau of Mines (Fieldner, Katz and Kenney 1921). Although its title does not so indicate, the 33 substances listed are those encountered in workplaces. Cook (1986) also noted that most of the exposure limits through the 1930s, except for dusts, were based on rather short animal experiments. A notable exception was the study of chronic benzene exposure by Leonard Greenburg of the US Public Health Service, conducted under the direction of a committee of the National Safety Council (NSC 1926). An acceptable exposure for human beings based on long-term animal experiments was derived from this work.

According to Cook (1986), for dust exposures, permissible limits established before 1920 were based on exposures of workers in the South African gold mines, where the dust from drilling operations was high in crystalline free silica. In 1916, an exposure limit of 8.5 million particles per cubic foot of air (mppcf) for the dust with an 80 to 90% quartz content was set (Phthisis Prevention Committee 1916). Later, the level was lowered to 5 mppcf. Cook also reported that, in the United States, standards for dust, also based on exposure of workers, were recommended by Higgins and co-workers following a study at the south-western Missouri zinc and lead mines in 1917. The initial level established for high quartz dusts was ten mppcf, appreciably higher than was established by later dust studies conducted by the US Public Health Service. In 1930, the USSR Ministry of Labour issued a decree that included maximum allowable concentrations for 12 industrial toxic substances.

The most comprehensive list of occupational exposure limits up to 1926 was for 27 substances (Sayers 1927). In 1935 Sayers and Dalle Valle published physiological responses to five concentrations of 37 substances, the fifth being the maximum allowable concentration for prolonged exposure. Lehmann and Flury (1938) and Bowditch et al. (1940) published papers that presented tables with a single value for repeated exposures to each substance.

Many of the exposure limits developed by Lehmann were included in a monograph initially published in 1927 by Henderson and Haggard (1943), and a little later in Flury and Zernik’s Schädliche Gase (1931). According to Cook (1986), this book was considered the authoritative reference on effects of injurious gases, vapours and dusts in the workplace until Volume II of Patty’s Industrial Hygiene and Toxicology (1949) was published.

The first lists of standards for chemical exposures in industry, called maximum allowable concentrations (MACs), were prepared in 1939 and 1940 (Baetjer 1980). They represented a consensus of opinion of the American Standard Association and a number of industrial hygienists who had formed the ACGIH in 1938. These “suggested standards” were published in 1943 by James Sterner. A committee of the ACGIH met in early 1940 to begin the task of identifying safe levels of exposure to workplace chemicals, by assembling all the data which would relate the degree of exposure to a toxicant to the likelihood of producing an adverse effect (Stokinger 1981; LaNier 1984). The first set of values was released in 1941 by this committee, which was composed of Warren Cook, Manfred Boditch (reportedly the first hygienist employed by industry in the United States), William Fredrick, Philip Drinker, Lawrence Fairhall and Alan Dooley (Stokinger 1981).

In 1941, a committee (designated as Z-37) of the American Standards Association, which later became the American National Standards Institute, developed its first standard of 100 ppm for carbon monoxide. By 1974 the committee had issued separate bulletins for 33 exposure standards for toxic dusts and gases.

At the annual meeting of the ACGIH in 1942, the newly appointed Subcommittee on Threshold Limits presented in its report a table of 63 toxic substances with the “maximum allowable concentrations of atmospheric contaminants” from lists furnished by the various state industrial hygiene units. The report contains the statement, “The table is not to be construed as recommended safe concentrations. The material is presented without comment” (Cook 1986).

In 1945 a list of 132 industrial atmospheric contaminants with maximum allowable concentrations was published by Cook, including the then current values for six states, as well as values presented as a guide for occupational disease control by federal agencies and maximum allowable concentrations that appeared best supported by the references on original investigations (Cook 1986).

At the 1946 annual meeting of ACGIH, the Subcommittee on Threshold Limits presented their second report with the values of 131 gases, vapours, dusts, fumes and mists, and 13 mineral dusts. The values were compiled from the list reported by the subcommittee in 1942, from the list published by Warren Cook in Industrial Medicine (1945) and from published values of the Z-37 Committee of the American Standards Association. The committee emphasized that the “list of M.A.C. values is presented … with the definite understanding that it be subject to annual revision.”

Intended use of OELs

The ACGIH TLVs and most other OELs used in the United States and some other countries are limits which refer to airborne concentrations of substances and represent conditions under which “it is believed that nearly all workers may be repeatedly exposed day after day without adverse health effects” (ACGIH 1994). (See table 1).  In some countries the OEL is set at a concentration which will protect virtually everyone. It is important to recognize that unlike some exposure limits for ambient air pollutants, contaminated water, or food additives set by other professional groups or regulatory agencies, exposure to the TLV will not necessarily prevent discomfort or injury for everyone who is exposed (Adkins et al. 1990). The ACGIH recognized long ago that because of the wide range in individual susceptibility, a small percentage of workers may experience discomfort from some substances at concentrations at or below the threshold limit and that a smaller percentage may be affected more seriously by aggravation of a pre-existing condition or by development of an occupational illness (Cooper 1973; ACGIH 1994). This is clearly stated in the introduction to the ACGIH’s annual booklet Threshold Limit Values for Chemical Substances and Physical Agents and Biological Exposure Indices (ACGIH 1994).

Table 1. Occupational exposure limits (OELs) in various countries (as of 1986)




Argentina

The OELs are essentially the same as those of the 1978 ACGIH TLVs. The principal difference from the ACGIH list is that, for the 144 substances (of the total of 630) for which no STELs are listed by the ACGIH, the values used for the Argentine TWAs are also entered under this heading.


Australia

The National Health and Medical Research Council (NHMRC) adopted a revised edition of the Occupational Health Guide Threshold Limit Values (1990-91) in 1992. The OELs have no legal status in Australia, except where specifically incorporated into law by reference. The ACGIH TLVs are published in Australia as an appendix to the occupational health guides, revised with the ACGIH revisions in odd-numbered years.


Austria

The values recommended by the Expert Committee of the Worker Protection Commission for Appraisal of MAC (maximal acceptable concentration) Values, in cooperation with the General Accident Prevention Institute of the Chemical Workers Trade Union, are considered obligatory by the Federal Ministry for Social Administration. They are applied by the Labour Inspectorate under the Labour Protection Law.


Belgium

The Administration of Hygiene and Occupational Medicine of the Ministry of Employment and of Labour uses the TLVs of the ACGIH as a guideline.


Brazil

The TLVs of the ACGIH have been used as the basis for the occupational health legislation of Brazil since 1978. As the Brazilian work week is usually 48 hours, the values of the ACGIH were adjusted in conformity with a formula developed for this purpose. The ACGIH list was adopted only for those air contaminants which at the time had nationwide application. The Ministry of Labour has brought the limits up to date with establishment of values for additional contaminants in accordance with recommendations from the Fundacentro Foundation of Occupational Safety and Medicine.

Canada (and Provinces)

Each province has its own regulations:


Alberta

OELs are under the Occupational Health and Safety Act, Chemical Hazard Regulation, which requires the employer to ensure that workers are not exposed above the limits.

British Columbia

The Industrial Health and Safety Regulations set legal requirements for most of British Columbia industry, which refer to the current schedule of TLVs for atmospheric contaminants published by the ACGIH.


Manitoba

The Department of Environment and Workplace Safety and Health is responsible for legislation and its administration concerning the OELs. The guidelines currently used to interpret risk to health are the ACGIH TLVs, with the exception that carcinogens are given a zero exposure level “so far as is reasonably practicable”.

New Brunswick

The applicable standards are those published in the latest ACGIH issue and, in case of an infraction, it is the issue in publication at the time of infraction that dictates compliance.

Northwest Territories

The Northwest Territories Safety Division of the Justice and Service Department regulates workplace safety for non-federal employees under the latest edition of the ACGIH TLVs.

Nova Scotia

The list of OELs is the same as that of the ACGIH as published in 1976 and its subsequent amendments and revisions.


Ontario

Regulations for a number of hazardous substances are enforced under the Occupational Health and Safety Act, each published in a separate booklet that includes the permissible exposure level and codes for respiratory equipment, techniques for measuring airborne concentrations and medical surveillance approaches.


Quebec

Permissible exposure levels are similar to the ACGIH TLVs, and compliance with the permissible exposure levels for workplace air contaminants is required.


Chile

The maximum concentration of eleven substances having the capacity of causing acute, severe or fatal effects cannot be exceeded for even a moment. The values in the Chile standard are those of the ACGIH TLVs to which a factor of 0.8 is applied in view of the 48-hour week.


Denmark

OELs include values for 542 chemical substances and 20 particulates. It is legally required that these not be exceeded as time-weighted averages. Data from the ACGIH are used in the preparation of the Danish standards. About 25 per cent of the values are different from those of the ACGIH, with nearly all of these being somewhat more stringent.


Ecuador

Ecuador does not have a list of permissible exposure levels incorporated in its legislation. The TLVs of the ACGIH are used as a guide for good industrial hygiene practice.


Finland

OELs are defined as concentrations that are deemed to be hazardous to at least some workers on long-term exposure. Whereas the ACGIH has as its philosophy that nearly all workers may be exposed to substances below the TLV without adverse effect, the viewpoint in Finland is that where exposures are above the limiting value, deleterious effects on health may occur.


Germany

The MAC value is “the maximum permissible concentration of a chemical compound present in the air within a working area (as gas, vapour, particulate matter) which, according to current knowledge, generally does not impair the health of the employee nor cause undue annoyance. Under these conditions, exposure can be repeated and of long duration over a daily period of eight hours, constituting an average work week of 40 hours (42 hours per week as averaged over four successive weeks for firms having four work shifts). Scientifically based criteria for health protection, rather than their technical or economical feasibility, are employed.”


The latest TLVs of the ACGIH are normally used. However, the ACGIH list is not incorporated in the national laws or regulations.


MAC values are taken largely from the list of the ACGIH, as well as from the Federal Republic of Germany and NIOSH. The MAC is defined as “that concentration in the workplace air which, according to present knowledge, after repeated long-term exposure even up to a whole working life, in general does not harm the health of workers or their offspring.”


The 1970 TLVs of the ACGIH are used, except 50 ppm for vinyl chloride and 0.15 mg/m3 for lead (inorganic compounds, fume and dust).

Russian Federation

The former USSR established many of its limits with the goal of eliminating any possibility of even reversible effects. Limits based on such subclinical and fully reversible responses to workplace exposures have, thus far, been considered too restrictive to be useful in the United States and in most other countries. In fact, given the economic and engineering difficulties in achieving such low levels of air contaminants in the workplace, there is little indication that these limits have actually been achieved in the countries that adopted them. Instead, the limits appear to serve as idealized goals rather than as limits which manufacturers are legally bound, or morally committed, to achieve.

United States

At least six groups recommend exposure limits for the workplace: the TLVs of the ACGIH, the Recommended Exposure Limits (RELs) suggested by the National Institute for Occupational Safety and Health (NIOSH), the Workplace Environment Exposure Limits (WEEL) developed by the American Industrial Hygiene Association (AIHA), standards for workplace air contaminants suggested by the Z-37 Committee of the American National Standards Institute (EAL), the proposed workplace guides of the American Public Health Association (APHA 1991), and recommendations by local, state or regional governments. In addition, permissible exposure limits (PELs), which are regulations that must be met in the workplace because they are law, have been promulgated by the Department of Labor and are enforced by the Occupational Safety and Health Administration (OSHA).

Source: Cook 1986.
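Several entries in table 1 (e.g., Brazil and Chile) adjust the 40-hour-week ACGIH values for a 48-hour work week. A minimal sketch of the arithmetic involved, assuming simple proportional scaling for the week-length adjustment (the actual national formulas are not given in the table and may differ), together with the standard time-weighted-average calculation that underlies most of these limits:

```python
def twa(samples, period_hours=8.0):
    """Time-weighted average concentration over a work period.

    samples: list of (concentration, duration_hours) pairs covering
    the shift; unsampled time counts as zero exposure.
    """
    return sum(c * t for c, t in samples) / period_hours


def week_adjusted_limit(tlv, hours_per_week=48.0):
    """Proportionally scale a 40-hour-week TLV to a longer work week.

    Straight 40/h scaling gives 40/48 = 0.833 for a 48-hour week,
    close to the factor of 0.8 applied in Chile; this is an
    illustrative assumption, not the formula used by any country.
    """
    return tlv * 40.0 / hours_per_week


# Example: 2 h at 120 ppm, 4 h at 50 ppm, 2 h unexposed
print(twa([(120, 2), (50, 4), (0, 2)]))    # 55.0 ppm
print(week_adjusted_limit(100.0))          # ~83.3 ppm for a 100 ppm TLV
```

The 0.8 factor in the Chilean standard is thus slightly more protective than exact proportional scaling would require.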

This limitation, although perhaps less than ideal, has been considered a practical one since airborne concentrations so low as to protect hypersusceptibles have traditionally been judged infeasible due to either engineering or economic limitations. Until about 1990, this shortcoming in the TLVs was not considered a serious one. In light of the dramatic improvements since the mid-1980s in our analytical capabilities, personal monitoring/sampling devices, biological monitoring techniques and the use of robots as a plausible engineering control, we are now technologically able to consider more stringent occupational exposure limits.

The background information and rationale for each TLV are published periodically in the Documentation of the Threshold Limit Values (ACGIH 1995). Some type of documentation is occasionally available for OELs set in other countries. The rationale or documentation for a particular OEL should always be consulted before interpreting or adjusting an exposure limit, as well as the specific data that were considered in establishing it (ACGIH 1994).

TLVs are based on the best available information from industrial experience and human and animal experimental studies—when possible, from a combination of these sources (Smith and Olishifski 1988; ACGIH 1994). The rationale for choosing limiting values differs from substance to substance. For example, protection against impairment of health may be a guiding factor for some, whereas reasonable freedom from irritation, narcosis, nuisance or other forms of stress may form the basis for others. The age and completeness of the information available for establishing occupational exposure limits also varies from substance to substance; consequently, the precision of each TLV is different. The most recent TLV and its documentation (or its equivalent) should always be consulted in order to evaluate the quality of the data upon which that value was set.

Even though all of the publications which contain OELs emphasize that they were intended for use only in establishing safe levels of exposure for persons in the workplace, they have been used at times in other situations. It is for this reason that all exposure limits should be interpreted and applied only by someone knowledgeable of industrial hygiene and toxicology. The TLV Committee (ACGIH 1994) did not intend that they be used, or modified for use:

  • as a relative index of hazard or toxicity
  • in the evaluation of community air pollution
  • for estimating the hazards of continuous, uninterrupted exposures or other extended work periods
  • as proof or disproof of an existing disease or physical condition
  • for adoption by countries whose working conditions differ from those of the United States.


The TLV Committee and other groups which set OELs warn that these values should not be “directly used” or extrapolated to predict safe levels of exposure for other exposure settings. However, if one understands the scientific rationale for the guideline and the appropriate approaches for extrapolating data, they can be used to predict acceptable levels of exposure for many different kinds of exposure scenarios and work schedules (ACGIH 1994; Hickey and Reist 1979).
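One widely cited approach of this kind, not part of the TLV booklet itself, is the Brief and Scala model, which reduces the 8-hour limit for longer daily or weekly exposures to account for the increased dose and shortened recovery time. A sketch, assuming the commonly published form of the model:

```python
def brief_scala_daily(oel, hours_per_day):
    """Scale an 8-hour OEL for an unusual daily shift length.

    Reduction factor: (8/h) * (24 - h) / 16, where the second term
    reflects the reduced recovery time between shifts.
    """
    rf = (8.0 / hours_per_day) * (24.0 - hours_per_day) / 16.0
    return oel * rf


def brief_scala_weekly(oel, hours_per_week):
    """Scale a 40-hour-week OEL for an unusual weekly schedule.

    Reduction factor: (40/h) * (168 - h) / 128.
    """
    rf = (40.0 / hours_per_week) * (168.0 - hours_per_week) / 128.0
    return oel * rf


# A 12-hour shift: RF = (8/12) * (12/16) = 0.5, halving the limit
print(brief_scala_daily(100.0, 12.0))   # 50.0
```

Note that the model is deliberately conservative: it considers only exposure and recovery time, not the pharmacokinetics of the particular substance, which more elaborate adjustment methods (such as those discussed by Hickey and Reist 1979) attempt to take into account.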

Philosophy and approaches in setting exposure limits

TLVs were originally prepared to serve only for the use of industrial hygienists, who could exercise their own judgement in applying these values. They were not to be used for legal purposes (Baetjer 1980). However, in 1968 the United States Walsh-Healey Public Contract Act incorporated the 1968 TLV list, which covered about 400 chemicals. In the United States, when the Occupational Safety and Health Act (OSHA) was passed it required all standards to be national consensus standards or established federal standards.

Exposure limits for workplace air contaminants are based on the premise that, although all chemical substances are toxic at some concentration when experienced for a period of time, a concentration (e.g., dose) does exist for all substances at which no injurious effect should result no matter how often the exposure is repeated. A similar premise applies to substances whose effects are limited to irritation, narcosis, nuisance or other forms of stress (Stokinger 1981; ACGIH 1994).

This philosophy thus differs from that applied to physical agents such as ionizing radiation, and for some chemical carcinogens, since it is possible that there may be no threshold or no dose at which zero risk would be expected (Stokinger 1981). The issue of threshold effects is controversial, with reputable scientists arguing both for and against threshold theories (Seiler 1977; Watanabe et al. 1980, Stott et al. 1981; Butterworth and Slaga 1987; Bailer et al. 1988; Wilkinson 1988; Bus and Gibson 1994). With this in mind, some occupational exposure limits proposed by regulatory agencies in the early 1980s were set at levels which, although not completely without risk, posed risks that were no greater than classic occupational hazards such as electrocution, falls, and so on. Even in those settings which do not use industrial chemicals, the overall workplace risks of fatal injury are about one in one thousand. This is the rationale that has been used to justify selecting this theoretical cancer risk criterion for setting TLVs for chemical carcinogens (Rodricks, Brett and Wrenn 1987; Travis et al. 1987).

Occupational exposure limits established both in the United States and elsewhere are derived from a wide variety of sources. The 1968 TLVs (those adopted by OSHA in 1970 as federal regulations) were based largely on human experience. This may come as a surprise to many hygienists who have recently entered the profession, since it indicates that, in most cases, the setting of an exposure limit has come after a substance has been found to have toxic, irritational or otherwise undesirable effects on humans. As might be anticipated, many of the more recent exposure limits for systemic toxins, especially those internal limits set by manufacturers, have been based primarily on toxicology tests conducted on animals, in contrast to waiting for observations of adverse effects in exposed workers (Paustenbach and Langner 1986). However, even as far back as 1945, animal tests were acknowledged by the TLV Committee to be very valuable and they do, in fact, constitute the second most common source of information upon which these guidelines have been based (Stokinger 1970).

Several approaches for deriving OELs from animal data have been proposed and put into use over the past 40 years. The approach used by the TLV Committee and others is not markedly different from that which has been used by the US Food and Drug Administration (FDA) in establishing acceptable daily intakes (ADI) for food additives. An understanding of the FDA approach to setting exposure limits for food additives and contaminants can provide good insight to industrial hygienists who are involved in interpreting OELs (Dourson and Stara 1983).

Discussions of methodological approaches which can be used to establish workplace exposure limits based exclusively on animal data have also been presented (Weil 1972; WHO 1977; Zielhuis and van der Kreek 1979a, 1979b; Calabrese 1983; Dourson and Stara 1983; Leung and Paustenbach 1988a; Finley et al. 1992; Paustenbach 1995). Although these approaches have some degree of uncertainty, they seem to be much better than a qualitative extrapolation of animal test results to humans.
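The general shape shared by many of these methods is to start from a no-observed-adverse-effect level (NOAEL) in an animal study, divide by uncertainty factors for interspecies and interindividual variability, and convert the resulting acceptable daily dose into an airborne concentration for a working shift. A simplified sketch; the 70-kg body weight, 10 m3 breathing volume and 100-fold combined uncertainty factor are illustrative assumptions, and the cited methods differ considerably in detail:

```python
def oel_from_noael(noael_mg_per_kg_day, uf_interspecies=10.0,
                   uf_intraspecies=10.0, body_weight_kg=70.0,
                   inhaled_air_m3=10.0, absorption=1.0):
    """Rough OEL (mg/m3) derived from an animal NOAEL (mg/kg/day).

    Divides the NOAEL by uncertainty factors for animal-to-human
    extrapolation and human variability, then converts the acceptable
    daily dose into an airborne concentration, assuming a 70-kg worker
    inhaling 10 m3 of air per shift with complete absorption.
    """
    acceptable_dose = noael_mg_per_kg_day / (uf_interspecies * uf_intraspecies)
    daily_dose_mg = acceptable_dose * body_weight_kg
    return daily_dose_mg / (inhaled_air_m3 * absorption)


# NOAEL of 50 mg/kg/day, 100-fold uncertainty -> 0.5 mg/kg/day
# -> 35 mg/day for a 70-kg worker -> 3.5 mg/m3 over a 10 m3 shift
print(oel_from_noael(50.0))   # 3.5
```

The quantitative uncertainty noted in the text enters through the choice of the uncertainty factors and exposure assumptions, which is why such derived limits are starting points for judgement rather than precise thresholds.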

Approximately 50% of the 1968 TLVs were derived from human data, and approximately 30% were derived from animal data. By 1992, almost 50% were derived primarily from animal data. The criteria used to develop the TLVs may be classified into four groups: morphological, functional, biochemical and miscellaneous (nuisance, cosmetic). Of those TLVs based on human data, most are derived from effects observed in workers who were exposed to the substance for many years. Consequently, most of the existing TLVs have been based on the results of workplace monitoring, coupled with qualitative and quantitative observations of the human response (Stokinger 1970; Park and Snee 1983). In recent times, TLVs for new chemicals have been based primarily on the results of animal studies rather than human experience (Leung and Paustenbach 1988b; Leung et al. 1988).

It is noteworthy that in 1968 only about 50% of the TLVs were intended primarily to prevent systemic toxic effects. Roughly 40% were based on irritation and about two per cent were intended to prevent cancer. By 1993, about 50% were meant to prevent systemic effects, 35% to prevent irritation, and five per cent to prevent cancer. Figure 2 provides a summary of the data often used in developing OELs. 

Figure 2. Data often used in developing an occupational exposure.


Limits for irritants

Prior to 1975, OELs designed to prevent irritation were largely based on human experiments. Since then, several experimental animal models have been developed (Kane and Alarie 1977; Alarie 1981; Abraham et al. 1990; Nielsen 1991). Another model based on chemical properties has been used to set preliminary OELs for organic acids and bases (Leung and Paustenbach 1988).

Limits for carcinogens

In 1972, the ACGIH Committee began to distinguish between human and animal carcinogens in its TLV list. According to Stokinger (1977), one reason for this distinction was to help stakeholders (union representatives, workers and the public) focus discussions on those chemicals with more probable workplace exposures.

Do the TLVs Protect Enough Workers?

Beginning in 1988, concerns were raised by numerous persons regarding the adequacy or health protectiveness of TLVs. The key question raised was, what percentage of the working population is truly protected from adverse health effects when exposed to the TLV?

Castleman and Ziem (1988) and Ziem and Castleman (1989) argued both that the scientific basis of the standards was inadequate and that they were formulated by hygienists with vested interests in the industries being regulated.

These papers engendered an enormous amount of discussion, both supportive of and opposed to the work of the ACGIH (Finklea 1988; Paustenbach 1990a, 1990b, 1990c; Tarlau 1990).

A follow-up study by Roach and Rappaport (1990) attempted to quantify the safety margin and scientific validity of the TLVs. They concluded that there were serious inconsistencies between the scientific data available and the interpretation given in the 1976 Documentation by the TLV Committee. They also noted that the TLVs were probably reflective of what the Committee perceived to be realistic and attainable at the time. The ACGIH has responded to both the Roach and Rappaport and the Castleman and Ziem analyses, insisting that the criticisms are inaccurate.

Although the merit of the Roach and Rappaport analysis, or for that matter, that of Ziem and Castleman, will be debated for a number of years, it is clear that the process by which TLVs and other OELs will be set will probably never be as it was between 1945 and 1990. It is likely that in coming years, the rationale, as well as the degree of risk inherent in a TLV, will be more explicitly described in the documentation for each TLV. Also, it is certain that the definition of “virtually safe” or “insignificant risk” with respect to workplace exposure will change as the values of society change (Paustenbach 1995, 1997).

The degree of reduction in TLVs or other OELs that will undoubtedly occur in the coming years will vary depending on the type of adverse health effect to be prevented (central nervous system depression, acute toxicity, odour, irritation, developmental effects, or others). It is unclear to what degree the TLV committee will rely on various predictive toxicity models, or what risk criteria they will adopt, as we enter the next century.

Standards and Nontraditional Work Schedules

The degree to which shift work affects a worker’s capabilities, longevity, mortality, and overall well-being is still not well understood. So-called nontraditional work shifts and work schedules have been implemented in a number of industries in an attempt to eliminate, or at least reduce, some of the problems caused by normal shift work, which consists of three eight-hour work shifts per day. One kind of work schedule which is classified as nontraditional is the type involving work periods longer than eight hours and varying (compressing) the number of days worked per week (e.g., a 12-hours-per-day, three-day workweek). Another type of nontraditional work schedule involves a series of brief exposures to a chemical or physical agent during a given work schedule (e.g., a schedule where a person is exposed to a chemical for 30 minutes, five times per day with one hour between exposures). The last category of nontraditional schedule is that involving the “critical case” wherein persons are continuously exposed to an air contaminant (e.g., spacecraft, submarine).

Compressed workweeks are a type of nontraditional work schedule that has been used primarily in non-manufacturing settings. It refers to full-time employment (virtually 40 hours per week) which is accomplished in less than five days per week. Many compressed schedules are currently in use, but the most common are: (a) four-day workweeks with ten-hour days; (b) three-day workweeks with 12-hour days; (c) 4-1/2–day workweeks with four nine-hour days and one four-hour day (usually Friday); and (d) the five/four, nine plan of alternating five-day and four-day workweeks of nine-hour days (Nollen and Martin 1978; Nollen 1981).

Workers on nontraditional schedules represent only about 5% of the working population. Of this number, only about 50,000 to 200,000 Americans who work nontraditional schedules are employed in industries where there is routine exposure to significant levels of airborne chemicals. In Canada, the percentage of chemical workers on nontraditional schedules is thought to be greater (Paustenbach 1994).

One Approach to Setting International OELs

As noted by Lundberg (1994), a challenge facing all national committees is to identify a common scientific approach to setting OELs. Joint international ventures are advantageous to the parties involved since writing criteria documents is both time-consuming and costly (Paustenbach 1995).

This was the idea when the Nordic Council of Ministers in 1977 decided to establish the Nordic Expert Group (NEG). The task of the NEG was to develop scientifically based criteria documents to be used as a common scientific basis for OELs by the regulatory authorities in the five Nordic countries (Denmark, Finland, Iceland, Norway and Sweden). The criteria documents from the NEG provide a definition of the critical effect and of dose-response/dose-effect relationships. The critical effect is the adverse effect that occurs at the lowest exposure. There is no discussion of safety factors, and no numerical OEL is proposed. Since 1987, the NEG has published its criteria documents in English on a yearly basis.

Lundberg (1994) has suggested a standardized approach that each country would use. He suggested building a document with the following characteristics:

  • A standardized criteria document should reflect the up-to-date knowledge as presented in the scientific literature.
  • The literature used should preferably consist of peer-reviewed scientific papers but should at least be publicly available. Personal communications should be avoided. Openness toward the general public, particularly workers, reduces the kind of suspicion that has recently been directed toward documentation from the ACGIH.
  • The scientific committee should consist of independent scientists from academia and government. If the committee should include scientific representatives from the labour market, both employers and employees should be represented.
  • All relevant epidemiological and experimental studies should be thoroughly scrutinized by the scientific committee, especially “key studies” that present data on the critical effect. All observed effects should be described.
  • Environmental and biological monitoring possibilities should be pointed out. It is also necessary to thoroughly scrutinize these data, including toxicokinetic data.
  • Where data permit, dose-response and dose-effect relationships should be established and stated. A no observable effect level (NOEL) or lowest observable effect level (LOEL) for each observed effect should be stated in the conclusion. If necessary, reasons should be given as to why a certain effect is the critical one. The toxicological significance of an effect is thereby considered.
  • Specifically, mutagenic, carcinogenic and teratogenic properties should be pointed out as well as allergic and immunological effects.
  • A reference list for all studies described should be given. If it is stated in the document that only relevant studies have been used, there is no need to give a list of references not used or why. On the other hand, it could be of interest to list those databases that have been used in the literature search.


There are in practice only minor differences in the way OELs are set in the various countries that develop them. It should, therefore, be relatively easy to agree upon the format of a standardized criteria document containing the key information. From this point, the decision as to the size of the margin of safety that is incorporated in the limit would then be a matter of national policy.



Workplace exposure assessment is concerned with identifying and evaluating agents with which a worker may come in contact, and exposure indices can be constructed to reflect the amount of an agent present in the general environment or in inhaled air, as well as to reflect the amount of agent that is actually inhaled, swallowed or otherwise absorbed (the intake). Other indices include the amount of agent that is resorbed (the uptake) and the exposure at the target organ. Dose is a pharmacological or toxicological term used to indicate the amount of a substance administered to a subject. Dose rate is the amount administered per unit of time. The dose of a workplace exposure is difficult to determine in practice, since physical and biological processes, such as inhalation and the uptake and distribution of an agent in the human body, cause exposure and dose to have complex, non-linear relationships. The uncertainty about the actual level of exposure to agents also makes it difficult to quantify relationships between exposure and health effects.

For many occupational exposures there exists a time window during which the exposure or dose is most relevant to the development of a particular health-related problem or symptom. Hence, the biologically relevant exposure, or dose, would be that exposure which occurs during the relevant time window. Some exposures to occupational carcinogens are believed to have such a relevant time window of exposure. Cancer is a disease with a long latency period, and hence it could be that the exposure which is related to the ultimate development of the disease took place many years before the cancer actually manifested itself. This phenomenon is counter-intuitive, since one would have expected that cumulative exposure over a working lifetime would have been the relevant parameter. The exposure at the time of manifestation of disease may not be of particular importance.

The pattern of exposure—continuous exposure, intermittent exposure and exposure with or without sharp peaks—may be relevant as well. Taking exposure patterns into account is important for both epidemiological studies and for environmental measurements which may be used to monitor compliance with health standards or for environmental control as part of control and prevention programmes. For example, if a health effect is caused by peak exposures, such peak levels must be monitorable in order to be controlled. Monitoring which provides data only about long-term average exposures is not useful since the peak excursion values may well be masked by averaging, and certainly cannot be controlled as they occur.

The biologically relevant exposure or dose for a certain endpoint is often not known because the patterns of intake, uptake, distribution and elimination, or the mechanisms of biotransformation, are not understood in sufficient detail. Both the rate at which an agent enters and leaves the body (the kinetics) and the biochemical processes for handling the substance (biotransformation) will help determine the relationships between exposure, dose and effect.

Environmental monitoring is the measurement and assessment of agents at the workplace to evaluate ambient exposure and related health risks. Biological monitoring is the measurement and assessment of workplace agents or their metabolites in tissue, secreta or excreta to evaluate exposure and assess health risks. Sometimes biomarkers, such as DNA-adducts, are used as measures of exposure. Biomarkers may also be indicative of the mechanisms of the disease process itself, but this is a complex subject, which is covered more fully in the chapter Biological Monitoring and later in the discussion here.

A simplification of the basic model in exposure-response modelling is as follows:

exposure → uptake → distribution, elimination, transformation → target dose → physiopathology → effect

Depending on the agent, exposure-uptake and exposure-intake relationships can be complex. For many gases, simple approximations can be made, based on the concentration of the agent in the air during the course of a working day and on the amount of air that is inhaled. For dust sampling, deposition patterns are also related to particle size. Size considerations may also lead to a more complex relationship. The chapter Respiratory System provides more detail on the aspect of respiratory toxicity.

Exposure and dose assessment are elements of quantitative risk assessment. Health risk assessment methods often form the basis upon which exposure limits are established for emission levels of toxic agents in the air for environmental as well as for occupational standards. Health risk analysis provides an estimate of the probability (risk) of occurrence of specific health effects or an estimate of the number of cases with these health effects. By means of health risk analysis an acceptable concentration of a toxicant in air, water or food can be provided, given an a priori chosen acceptable magnitude of risk. Quantitative risk analysis has found an application in cancer epidemiology, which explains the strong emphasis on retrospective exposure assessment. But applications of more elaborate exposure assessment strategies can be found in both retrospective as well as prospective exposure assessment, and exposure assessment principles have found applications in studies focused on other endpoints as well, such as benign respiratory disease (Wegman et al. 1992; Post et al. 1994). Two directions in research predominate at this moment. One uses dose estimates obtained from exposure monitoring information, and the other relies on biomarkers as measures of exposure.
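
The arithmetic behind such an "acceptable concentration" can be sketched with a linear no-threshold model, in which excess lifetime risk is taken proportional to concentration through a unit-risk factor. The function name, unit-risk value and target risk below are purely illustrative, not taken from the text.

```python
# Illustrative sketch of a linear no-threshold risk calculation, one common
# way an acceptable concentration is derived from an a priori accepted risk.
# The unit-risk value and target risk are invented for illustration.

def acceptable_concentration(unit_risk_per_unit_conc: float,
                             target_risk: float) -> float:
    """Concentration whose predicted excess lifetime risk equals the target."""
    return target_risk / unit_risk_per_unit_conc

# An agent with a unit risk of 1e-5 per unit concentration and an accepted
# excess risk of 1e-4 yields an acceptable concentration of about 10 units:
print(acceptable_concentration(1e-5, 1e-4))
```

Regulatory practice layers many refinements on top of this (safety factors, route-specific adjustments), but the proportionality is the core of most quantitative derivations.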

Exposure Monitoring and Prediction of Dose

Unfortunately, for many exposures few quantitative data are available for predicting the risk for developing a certain endpoint. As early as 1924, Haber postulated that the severity of the health effect (H) is proportional to the product of exposure concentration (X) and time of exposure (T):

H = X × T

Haber’s law, as it is called, formed the basis for development of the concept that time-weighted average (TWA) exposure measurements—that is, measurements taken and averaged over a certain period of time—would be a useful measure for the exposure. This assumption about the adequacy of the time-weighted average has been questioned for many years. In 1952, Adams and co-workers stated that “there is no scientific basis for the use of the time-weighted average to integrate varying exposures …” (in Atherly 1985). The problem is that many relations are more complex than the relationship that Haber’s law represents. There are many examples of agents where the effect is more strongly determined by concentration than by length of time. For example, interesting evidence from laboratory studies has shown that in rats exposed to carbon tetrachloride, the pattern of exposure (continuous versus intermittent and with or without peaks) as well as the dose can modify the observed risk of the rats developing liver enzyme level changes (Bogers et al. 1987). Another example is bio-aerosols, such as α-amylase enzyme, a dough improver, which can cause allergic diseases in people who work in the bakery industry (Houba et al. 1996). It is unknown whether the risk of developing such a disease is mainly determined by peak exposures, average exposure or cumulative exposure (Wong 1987; Checkoway and Rice 1992). Information on temporal patterns is not available for most agents, especially not for agents that have chronic effects.
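
The masking effect of averaging can be made concrete with a small sketch (the concentrations are invented for illustration): two shift patterns share the same 8-hour TWA, and hence the same Haber product H = X × T over the shift, yet differ sharply in peak structure.

```python
# Two hourly exposure patterns (invented numbers) with identical 8-hour TWAs
# but very different peaks: averaging masks the excursion entirely.

steady = [50] * 8                                # constant 50 units all shift
peaked = [10, 10, 330, 10, 10, 10, 10, 10]       # one sharp one-hour peak

def twa(hourly_concentrations):
    """Time-weighted average over equal-length intervals."""
    return sum(hourly_concentrations) / len(hourly_concentrations)

print(twa(steady), max(steady))   # 50.0 50
print(twa(peaked), max(peaked))   # 50.0 330
```

If the health effect is driven by the 330-unit excursion, a compliance strategy based only on the TWA would judge both patterns identical.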

The first attempts to model exposure patterns and estimate dose were published in the 1960s and 1970s by Roach (1966; 1977). He showed that the concentration of an agent reaches an equilibrium value at the receptor after an exposure of infinite duration because elimination counterbalances the uptake of the agent. In an eight-hour exposure, a value of 90% of this equilibrium level can be reached if the half-life of the agent at the target organ is smaller than approximately two-and-a-half hours. This illustrates that for agents with a short half-life, the dose at the target organ is determined by an exposure shorter than an eight-hour period. Dose at the target organ is a function of the product of exposure time and concentration for agents with a long half-life. A similar but more elaborate approach has been applied by Rappaport (1985). He showed that intra-day variability in exposure has a limited influence when dealing with agents with long half-lives. He introduced the term dampening at the receptor.
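
Roach's argument can be sketched with simple first-order kinetics. Assuming a one-compartment model, the burden at the receptor after exposure time t reaches the fraction 1 - exp(-ln 2 · t / t½) of its equilibrium value; the function name and half-lives below are illustrative.

```python
import math

# Sketch of Roach's first-order (one-compartment) argument: the burden at the
# receptor approaches equilibrium as 1 - exp(-ln2 * t / t_half).

def fraction_of_equilibrium(t_hours: float, half_life_hours: float) -> float:
    return 1.0 - math.exp(-math.log(2.0) * t_hours / half_life_hours)

# An agent with a 2.5-hour half-life reaches ~89% of equilibrium within an
# 8-hour shift, matching the ~90% figure quoted above:
print(round(fraction_of_equilibrium(8.0, 2.5), 2))   # 0.89
# An agent with a 40-hour half-life barely approaches equilibrium in a shift,
# so cumulative exposure, not the within-shift pattern, dominates the dose:
print(round(fraction_of_equilibrium(8.0, 40.0), 2))  # 0.13
```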

The information presented above has mainly been used to draw conclusions on appropriate averaging times for exposure measurements for compliance purposes. Since Roach’s papers it is common knowledge that for irritants, grab samples with short averaging times have to be taken, while for agents with long half-lives, such as asbestos, long-term average of cumulative exposure has to be approximated. One should however realize that the dichotomization into grab sample strategies and eight-hour time average exposure strategies as adopted in many countries for compliance purposes is an extremely crude translation of the biological principles discussed above.

An example of improving an exposure assessment strategy based on pharmacokinetic principles in epidemiology can be found in a paper by Wegman et al. (1992). They applied an interesting exposure assessment strategy by using continuous monitoring devices to measure personal dust exposure peak levels and relating these to acute reversible respiratory symptoms occurring every 15 minutes. A conceptual problem in this kind of study, extensively discussed in their paper, is the definition of a health-relevant peak exposure. The definition of a peak will, again, depend on biological considerations. Rappaport (1991) gives two requirements for peak exposures to be of aetiological relevance in the disease process: (1) the agent is eliminated rapidly from the body and (2) there is a non-linear rate of biological damage during a peak exposure. Non-linear rates of biological damage may be related to changes in uptake, which in turn are related to exposure levels, host susceptibility, synergy with other exposures, involvement of other disease mechanisms at higher exposures or threshold levels for disease processes.

These examples also show that pharmacokinetic approaches can lead elsewhere than to dose estimates. The results of pharmacokinetic modelling can also be used to explore the biological relevance of existing indices of exposure and to design new health-relevant exposure assessment strategies.

Pharmacokinetic modelling of the exposure may also generate estimates of the actual dose at the target organ. For instance in the case of ozone, an acute irritant gas, models have been developed which predict the tissue concentration in the airways as a function of the average ozone concentration in the airspace of the lung at a certain distance from the trachea, the radius of the airways, the average air velocity, the effective dispersion, and the ozone flux from air to lung surface (Menzel 1987; Miller and Overton 1989). Such models can be used to predict ozone dose in a particular region of the airways, dependent on environmental ozone concentrations and breathing patterns.

In most cases estimates of target dose are based on information on the exposure pattern over time, job history and pharmacokinetic information on uptake, distribution, elimination and transformation of the agent. The whole process can be described by a set of equations which can be mathematically solved. Often information on pharmacokinetic parameters is not available for humans, and parameter estimates based on animal experiments have to be used. There are several examples by now of the use of pharmacokinetic modelling of exposure in order to generate dose estimates. The first references in the literature to the modelling of exposure data into dose estimates go back to the paper by Jahr (1974).
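
A minimal sketch of such a set of equations, assuming a one-compartment model with uptake proportional to air concentration and first-order elimination, solved with a simple hourly (Euler) step; all rate constants and concentrations are invented for illustration.

```python
import math

# One-compartment bookkeeping, hour by hour over a shift: uptake proportional
# to air concentration, first-order elimination. Rates are illustrative only.

def body_burden(hourly_conc, uptake_rate=1.0, half_life_hours=4.0):
    """Return the burden at the end of each hour for an exposure time series."""
    k_out = math.log(2.0) / half_life_hours  # first-order elimination constant
    burden = 0.0
    history = []
    for c in hourly_conc:
        burden += uptake_rate * c - k_out * burden  # explicit Euler, dt = 1 h
        history.append(burden)
    return history

shift = [20, 20, 100, 20, 20, 0, 0, 0]   # a peak in the third hour, then rest
for hour, b in enumerate(body_burden(shift), start=1):
    print(hour, round(b, 1))
```

Real physiologically based models use coupled equations for several compartments and validated human or animal parameters; the point here is only that an exposure time series plus kinetic constants yields a dose-over-time estimate.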

Although dose estimates have generally not been validated and have found limited application in epidemiological studies, the new generation of exposure or dose indices is expected to result in optimal exposure-response analyses in epidemiological studies (Smith 1985, 1987). A problem not yet tackled in pharmacokinetic modelling is that large interspecies differences exist in kinetics of toxic agents, and therefore effects of intra-individual variation in pharmacokinetic parameters are of interest (Droz 1992).

Biomonitoring and Biomarkers of Exposure

Biological monitoring offers an estimate of dose and therefore is often considered superior to environmental monitoring. However, the intra-individual variability of biomonitoring indices can be considerable. In order to derive an acceptable estimate of a worker’s dose, repeated measurements have to be taken, and sometimes the measurement effort can become larger than for environmental monitoring.

This is illustrated by an interesting study on workers producing boats made of plastic reinforced with glass fibre (Rappaport et al. 1995). The variability of styrene exposure was assessed by measuring styrene in air repeatedly. Styrene in exhaled air of exposed workers was monitored, as well as sister chromatid exchanges (SCEs). They showed that an epidemiological study using styrene in the air as a measure of exposure would be more efficient, in terms of numbers of measurements required, than a study using the other indices of exposure. For styrene in air three repeats were required to estimate the long-term average exposure with a given precision. For styrene in exhaled air, four repeats per worker were necessary, while for the SCEs 20 repeats were necessary. The explanation for this observation is the signal-to-noise ratio, determined by the day-to-day and between-worker variability in exposure, which was more favourable for styrene in air than for the two biomarkers of exposure. Thus, although the biological relevance of a certain exposure surrogate might be optimal, the performance in an exposure-response analysis can still be poor because of a limited signal-to-noise ratio, leading to misclassification error.
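
The trade-off between signal-to-noise ratio and the number of repeats can be sketched with a standard measurement-error result: with within-worker variance s2_w, between-worker variance s2_b and n repeats per worker, the observed exposure-response slope is attenuated by roughly 1 / (1 + (s2_w / s2_b) / n). The attenuation target and variance ratios below are invented, not the styrene study's actual values.

```python
# Smallest number of repeated measurements per worker keeping slope
# attenuation 1 / (1 + variance_ratio / n) above a chosen threshold.

def repeats_needed(variance_ratio, min_attenuation=0.8):
    """Smallest n with 1 / (1 + variance_ratio / n) >= min_attenuation."""
    n = 1
    while 1.0 / (1.0 + variance_ratio / n) < min_attenuation - 1e-12:
        n += 1
    return n

print(repeats_needed(0.5))   # favourable signal-to-noise ratio -> 2 repeats
print(repeats_needed(5.0))   # unfavourable ratio -> 20 repeats
```

A surrogate with a tenfold worse variance ratio needs roughly tenfold more repeats for the same precision, which is the pattern the styrene comparison illustrates.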

Droz (1991) applied pharmacokinetic modelling to study advantages of exposure assessment strategies based on air sampling compared to biomonitoring strategies dependent on the half-life of the agent. He showed that biological monitoring is also greatly affected by biological variability, which is not related to variability of the toxicological test. He suggested that no statistical advantage exists in using biological indicators when the half-life of the agent considered is smaller than about ten hours.

Although variability in the measured variable might argue for measuring environmental exposure rather than a biological indicator, additional arguments can be found for choosing a biomarker, even when this would lead to a greater measurement effort, such as when considerable dermal exposure is present. For agents like pesticides and some organic solvents, dermal exposure can be of greater relevance than exposure through the air. A biomarker of exposure would include this route of exposure, while measuring dermal exposure is complex and results are not easily interpretable (Boleij et al. 1995). Early studies among agricultural workers using “pads” to assess dermal exposure showed remarkable distributions of pesticides over the body surface, depending on the tasks of the worker. However, because little information is available on skin uptake, exposure profiles cannot yet be used to estimate an internal dose.

Biomarkers can also have considerable advantages in cancer epidemiology. When a biomarker is an early marker of the effect, its use could result in reduction of the follow-up period. Although validation studies are required, biomarkers of exposure or individual susceptibility could result in more powerful epidemiological studies and more precise risk estimates.

Time Window Analysis

Parallel to the development of pharmacokinetic modelling, epidemiologists have explored new approaches in the data analysis phase such as “time frame analysis” to relate relevant exposure periods to endpoints, and to implement effects of temporal patterns in the exposure or peak exposures in occupational cancer epidemiology (Checkoway and Rice 1992). Conceptually this technique is related to pharmacokinetic modelling since the relationship between exposure and outcome is optimized by putting weights on different exposure periods, exposure patterns and exposure levels. In pharmacokinetic modelling these weights are believed to have a physiological meaning and are estimated beforehand. In time frame analysis the weights are estimated from the data on the basis of statistical criteria. Examples of this approach are given by Hodgson and Jones (1990), who analysed the relationship between radon gas exposure and lung cancer in a cohort of UK tin miners, and by Seixas, Robins and Becker (1993), who analysed the relationship between dust exposure and respiratory health in a cohort of US coal miners. A very interesting study underlining the relevance of time window analysis is the one by Peto et al. (1982).

They showed that mesothelioma death rates appeared to be proportional to some function of time since first exposure and cumulative exposure in a cohort of insulation workers. Time since first exposure was of particular relevance because this variable was an approximation of the time required for a fibre to migrate from its place of deposition in the lungs to the pleura. This example shows how kinetics of deposition and migration determine the risk function to a large extent. A potential problem with time frame analysis is that it requires detailed information on exposure periods and exposure levels, which hampers its application in many studies of chronic disease outcomes.

Concluding Remarks

In conclusion, the underlying principles of pharmacokinetic modelling and time frame or time window analysis are widely recognized. Knowledge in this area has mainly been used to develop exposure assessment strategies. More elaborate use of these approaches, however, requires a considerable research effort and remains to be developed. The number of applications is therefore still limited. Relatively simple applications, such as the development of more optimal exposure assessment strategies dependent on the endpoint, have found wider use. An important issue in the development of biomarkers of exposure or effect is the validation of these indices. It is often assumed that a measurable biomarker can predict health risk better than traditional methods. Unfortunately, very few validation studies substantiate this assumption.



After a hazard has been recognized and evaluated, the most appropriate interventions (methods of control) for a particular hazard must be determined. Control methods usually fall into three categories:

  1. engineering controls
  2. administrative controls
  3. personal protective equipment.


As with any change in work processes, training must be provided to ensure the success of the changes.

Engineering controls are changes to the process or equipment that reduce or eliminate exposures to an agent. Substituting a less toxic chemical in a process or installing exhaust ventilation to remove vapours generated during a process step are examples of engineering controls. In the case of noise control, installing sound-absorbing materials, building enclosures and installing mufflers on air exhaust outlets are examples of engineering controls. Another type of engineering control might be changing the process itself. An example of this type of control would be removal of one or more degreasing steps in a process that originally required three degreasing steps. By removing the need for the task that produced the exposure, the overall exposure for the worker has been controlled. The advantage of engineering controls is the relatively small involvement of the worker, who can go about the job in a more controlled environment when, for instance, contaminants are automatically removed from the air. Contrast this to the situation where the selected method of control is a respirator to be worn by the worker while performing the task in an “uncontrolled” workplace. In addition to the employer actively installing engineering controls on existing equipment, new equipment can be purchased that contains the controls or other more effective controls. A combination approach has often been effective (i.e., installing some engineering controls now and requiring personal protective equipment until new equipment arrives with more effective controls that will eliminate the need for personal protective equipment). Some common examples of engineering controls are:

  • ventilation (both general and local exhaust ventilation)
  • isolation (place a barrier between the worker and the agent)
  • substitution (substitute less toxic, less flammable material, etc.)
  • change the process (eliminate hazardous steps).


The occupational hygienist must be sensitive to the worker’s job tasks and must solicit worker participation when designing or selecting engineering controls. Placing barriers in the workplace, for example, could significantly impair a worker’s ability to perform the job and may encourage “work-arounds”. Engineering controls are the most effective methods of reducing exposures. They are also often the most expensive. Since engineering controls are both effective and expensive, it is important to maximize the involvement of the workers in the selection and design of the controls. This should result in a greater likelihood that the controls will reduce exposures.

Administrative controls involve changes in how a worker accomplishes the necessary job tasks—for example, how long they work in an area where exposures occur, or changes in work practices such as improvements in body positioning to reduce exposures. Administrative controls can add to the effectiveness of an intervention but have several drawbacks:

  1. Rotation of workers may reduce overall average exposure for the workday but it provides periods of high short-term exposure for a larger number of workers. As more becomes known about toxicants and their modes of action, short-term peak exposures may represent a greater risk than would be calculated based on their contribution to average exposure.
  2. Changing work practices of workers can present a significant enforcement and monitoring challenge. How work practices are enforced and monitored determines whether or not they will be effective. This constant management attention is a significant cost of administrative controls.


Personal protective equipment consists of devices provided to the worker and required to be worn while performing certain (or all) job tasks. Examples include respirators, chemical goggles, protective gloves and faceshields. Personal protective equipment is commonly used in cases where engineering controls have not been effective in controlling the exposure to acceptable levels or where engineering controls have not been found to be feasible (for cost or operational reasons). Personal protective equipment can provide significant protection to workers if worn and used correctly. In the case of respiratory protection, protection factors (ratio of concentration outside the respirator to that inside) can be 1,000 or more for positive-pressure supplied air respirators or ten for half-face air-purifying respirators. Gloves (if selected appropriately) can protect hands for hours from solvents. Goggles can provide effective protection from chemical splashes.
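
The protection-factor arithmetic quoted above is straightforward: PF is the ratio of the concentration outside the respirator to that inside, so the concentration reaching the wearer is the ambient level divided by the PF. The concentrations below are illustrative.

```python
# PF = concentration outside the respirator / concentration inside, so the
# concentration reaching the wearer is the ambient level divided by the PF.

def inside_concentration(outside_conc: float, protection_factor: float) -> float:
    return outside_conc / protection_factor

# At 500 units outside: a half-face air-purifying respirator (PF ~ 10) versus
# a positive-pressure supplied-air respirator (PF ~ 1,000):
print(inside_concentration(500.0, 10.0))     # 50.0
print(inside_concentration(500.0, 1000.0))   # 0.5
```

These nominal factors assume correct fit and use; a poorly fitted respirator delivers far less protection than its rated PF.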

Intervention: Factors to Consider

Often a combination of controls is used to reduce the exposures to acceptable levels. Whatever methods are selected, the intervention must reduce the exposure and resulting hazard to an acceptable level. There are, however, many other factors that need to be considered when selecting an intervention. For example:

  • effectiveness of the controls
  • ease of use by the employee
  • cost of the controls
  • adequacy of the warning properties of the material
  • acceptable level of exposure
  • frequency of exposure
  • route(s) of exposure
  • regulatory requirements for specific controls.


Effectiveness of controls

Effectiveness of controls is obviously a prime consideration when taking action to reduce exposures. When comparing one type of intervention to another, the level of protection required must be appropriate for the challenge; too much control is a waste of resources. Those resources could be used to reduce other exposures or exposures of other employees. On the other hand, too little control leaves the worker exposed to unhealthy conditions. A useful first step is to rank the interventions according to their effectiveness, then use this ranking to evaluate the significance of the other factors.

Ease of use

For any control to be effective the worker must be able to perform his or her job tasks with the control in place. For example, if the control method selected is substitution, then the worker must know the hazards of the new chemical, be trained in safe handling procedures, understand proper disposal procedures, and so on. If the control is isolation—placing an enclosure around the substance or the worker—the enclosure must allow the worker to do his or her job. If the control measures interfere with the tasks of the job, the worker will be reluctant to use them and may find ways to accomplish the tasks that could result in increased, not decreased, exposures.


Cost of controls

Every organization has limits on resources. The challenge is to maximize the use of those resources. When hazardous exposures are identified and an intervention strategy is being developed, cost must be a factor. The “best buy” will often be neither the lowest- nor the highest-cost solution. Cost becomes a factor only after several viable methods of control have been identified; it can then be used to select the controls that will work best in that particular situation. If cost is the determining factor at the outset, poor or ineffective controls may be selected, or controls that interfere with the process in which the employee is working. It would be unwise to select an inexpensive set of controls that interfere with and slow down a manufacturing process: the process would then have a lower throughput and higher cost, and in a very short time the “real” costs of these “low-cost” controls would become enormous.

Industrial engineers understand the layout and overall process; production engineers understand the manufacturing steps and processes; financial analysts understand the resource allocation problems. Occupational hygienists can provide a unique insight into these discussions because of their understanding of the specific employee’s job tasks, the employee’s interaction with the manufacturing equipment, and how the controls will work in a particular setting. This team approach increases the likelihood of selecting the most appropriate (from a variety of perspectives) controls.

Adequacy of warning properties

When protecting a worker against an occupational health hazard, the warning properties of the material, such as odour or irritation, must be considered. For example, if a semiconductor worker is working in an area where arsine gas is used, the extreme toxicity of the gas poses a significant potential hazard. The situation is compounded by arsine’s very poor warning properties—the workers cannot detect the arsine gas by sight or smell until it is well above acceptable levels. In this case, controls that are marginally effective at keeping exposures below acceptable levels should not be considered because excursions above acceptable levels cannot be detected by the workers. In this case, engineering controls should be installed to isolate the worker from the material. In addition, a continuous arsine gas monitor should be installed to warn workers of the failure of the engineering controls. In situations involving high toxicity and poor warning properties, preventive occupational hygiene is practised. The occupational hygienist must be flexible and thoughtful when approaching an exposure problem.

Acceptable level of exposure

If controls are being considered to protect a worker from a substance such as acetone, where the acceptable level of exposure may be in the range of 800 ppm, controlling to a level of 400 ppm or less may be achieved relatively easily. Contrast the example of acetone control to control of 2-ethoxyethanol, where the acceptable level of exposure may be in the range of 0.5 ppm. To obtain the same per cent reduction (0.5 ppm to 0.25 ppm) would probably require different controls. In fact, at these low levels of exposure, isolation of the material may become the primary means of control. At high levels of exposure, ventilation may provide the necessary reduction. Therefore, the acceptable level determined (by the government, company, etc.) for a substance can limit the selection of controls.
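Why the same percentage reduction is far harder at low acceptable levels can be illustrated with the standard steady-state dilution-ventilation relationship (a textbook model, not taken from this text; the generation rate is hypothetical): the required airflow scales inversely with the target concentration.

```python
# Steady state with perfect mixing: C [ppm] = G [cm^3/min vapour] / Q [m^3/min air],
# since 1 cm^3 of vapour per m^3 of air is 1 ppm by volume. Hence Q = G / C_target.

G = 100.0  # hypothetical vapour generation rate, cm^3/min

def airflow_needed(target_ppm: float) -> float:
    """Dilution airflow (m^3/min) needed to hold the steady-state level at target_ppm."""
    return G / target_ppm

acetone_like = airflow_needed(400.0)        # high target -> 0.25 m^3/min
ethoxyethanol_like = airflow_needed(0.25)   # low target  -> 400 m^3/min
```

Holding the low target requires more than a thousand times the airflow, which is why, at very low acceptable levels, isolation of the material tends to displace ventilation as the primary means of control.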

Frequency of exposure

When assessing toxicity the classic model uses the following relationship:

Dose = Concentration × Time

Dose, in this case, is the amount of material being made available for absorption. The previous discussion focused on minimizing (lowering) the concentration portion of this relationship. One might also reduce the time spent being exposed (the underlying reason for administrative controls), which would similarly reduce the dose. The issue here, however, is not how long an employee spends in a room, but how often an operation (task) is performed. The distinction is important: in the first case, exposure is controlled by removing workers once they have been exposed to a selected amount of toxicant, and the intervention effort is not directed at controlling the amount of toxicant (in many situations there may be a combination approach); in the second case, the frequency of the operation is being used to select the appropriate controls, not to determine a work schedule.

For example, if an operation such as degreasing is performed routinely by an employee, the controls may include ventilation, substitution of a less toxic solvent or even automation of the process. If the operation is performed rarely (e.g., once per quarter), personal protective equipment may be an option (depending on many of the factors described in this section). As these two examples illustrate, the frequency with which an operation is performed can directly affect the selection of controls. Whatever the exposure situation, the frequency with which a worker performs the tasks must be considered and factored into the control selection.
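Under the dose = concentration × time model, task frequency enters the cumulative exposure directly. A small sketch with hypothetical numbers contrasts a routine degreasing task with a rarely performed one:

```python
# Cumulative exposure over a year under the dose = C x t model,
# summed over how often the task is performed. All figures hypothetical.

def yearly_dose(concentration_ppm: float, hours_per_task: float,
                tasks_per_year: int) -> float:
    """Cumulative exposure (ppm-hours per year)."""
    return concentration_ppm * hours_per_task * tasks_per_year

routine = yearly_dose(50.0, 1.0, 250)  # daily 1-h task at 50 ppm -> 12,500 ppm-h/yr
rare = yearly_dose(50.0, 1.0, 4)       # quarterly task           ->    200 ppm-h/yr
```

The sixty-fold difference in cumulative dose is what justifies engineering controls for the routine task while personal protective equipment may suffice for the rare one.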

Route of exposure

Route of exposure obviously is going to affect the method of control. If a respiratory irritant is present, ventilation, respirators and so on would be considered. The challenge for the occupational hygienist is identifying all routes of exposure. For example, glycol ethers are used as a carrier solvent in printing operations. Breathing-zone air concentrations can be measured and controls implemented. Glycol ethers, however, are rapidly absorbed through intact skin. The skin represents a significant route of exposure and must be considered. In fact, if the wrong gloves are chosen, the skin exposure may continue long after the air exposures have decreased (because the employee continues to use gloves that have experienced breakthrough). The hygienist must evaluate the substance—its physical, chemical and toxicological properties, and so on—to determine which routes of exposure are possible and plausible (based on the tasks performed by the employee).

Regulatory requirements

In any discussion of controls, the regulatory requirements for controls must be considered. There may well be codes of practice, regulations and so on that require a specific set of controls. The occupational hygienist has flexibility above and beyond the regulatory requirements, but the minimum mandated controls must be installed. Another aspect of the regulatory requirements is that the mandated controls may not work as well as, or may conflict with, the best judgement of the occupational hygienist. The hygienist must be creative in these situations and find solutions that satisfy the regulatory requirements as well as the best-practice goals of the organization.

Training and Labelling

Regardless of what form of intervention is eventually selected, training and other forms of notification must be provided to ensure that the workers understand the interventions, why they were selected, what reductions in exposure are expected, and the role of the workers in achieving those reductions. Without the participation and understanding of the workforce, the interventions will likely fail or at least operate at reduced efficiency. Training builds hazard awareness in the workforce. This new awareness can be invaluable to the occupational hygienist in identifying and reducing previously unrecognized exposures or new exposures.

Training, labelling and related activities may be part of a regulatory compliance scheme. It would be prudent to check the local regulations to ensure that whatever type of training or labelling is undertaken satisfies the regulatory as well as operational requirements.


In this short discussion on interventions, some general considerations have been presented to stimulate thought. In practice, these rules become very complex and often have significant ramifications for employee and company health. The occupational hygienist’s professional judgement is essential in selecting the best controls. Best is a term with many different meanings. The occupational hygienist must become adept at working in teams and soliciting input from the workers, management and technical staff.




