This chapter provides an overview of major types of mental health disorder that can be associated with work—mood and affective disorders (e.g., dissatisfaction), burnout, post-traumatic stress disorder (PTSD), psychoses, cognitive disorders and substance abuse. The clinical picture, available assessment techniques, aetiological agents and factors, and specific prevention and management measures will be provided. The relationship with work, occupation or branch of industry will be illustrated and discussed where possible.
This introductory article first will provide a general perspective on occupational mental health itself. The concept of mental health will be elaborated upon, and a model will be presented. Next, we will discuss why attention should be paid to mental (ill) health and which occupational groups are at greatest risk. Finally, we will present a general intervention framework for successfully managing work-related mental health problems.
What Is Mental Health: A Conceptual Model
There are many different views about the components and processes of mental health. The concept is heavily value laden, and one definition is unlikely to be agreed upon. Like the strongly associated concept of “stress”, mental health is conceptualized variously as a state, as a process or as an outcome.
Mental health may also be associated with characteristics of the person.
Thus, mental health is conceptualized not only as a process or outcome variable, but also as an independent variable—that is, as a personal characteristic that influences our behaviour.
In figure 1 a mental health model is presented. Mental health is determined by environmental characteristics, both in and outside the work situation, and by characteristics of the individual. Major environmental job characteristics are elaborated upon in the chapter “Psychosocial and organizational factors”, but some points on these environmental precursors of mental (ill) health have to be made here as well.
Figure 1. A model for mental health.
There are many models, most of them stemming from the field of work and organizational psychology, that identify precursors of mental ill health. These precursors are often labelled “stressors”. Those models differ in their scope and, related to this, in the number of stressor dimensions identified. An example of a relatively simple model is that of Karasek (Karasek and Theorell 1990), describing only three dimensions: psychological demands, decision latitude (incorporating skill discretion and decision authority) and social support. A more elaborate model is that of Warr (1994), with nine dimensions: opportunity for control (decision authority), opportunity for skill use (skill discretion), externally generated goals (quantitative and qualitative demands), variety, environmental clarity (information about consequences of behaviour, availability of feedback, information about the future, information about required behaviour), availability of money, physical security (low physical risk, absence of danger), opportunity for interpersonal contact (prerequisite for social support), and valued social position (cultural and company evaluations of status, personal evaluations of significance). From the above it is clear that the precursors of mental (ill) health are generally psychosocial in nature, and are related to work content, as well as working conditions, conditions of employment and (formal and informal) relationships at work.
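To make the use of such a model concrete, the following sketch classifies a job into the four quadrants conventionally derived from Karasek’s demand-control dimensions (high-strain, active, low-strain and passive jobs). The 0 to 100 scoring scale and the cutoff value are illustrative assumptions, not part of the model as published.

```python
def karasek_quadrant(psychological_demands, decision_latitude, cutoff=50):
    """Classify a job on Karasek's demand-control model.

    Scores are assumed to lie on a 0-100 scale; the cutoff of 50 is an
    arbitrary illustration value. Real studies use sample medians or
    validated questionnaire norms instead.
    """
    high_demands = psychological_demands >= cutoff
    high_latitude = decision_latitude >= cutoff
    if high_demands and not high_latitude:
        return "high-strain job (highest risk of mental ill health)"
    if high_demands and high_latitude:
        return "active job"
    if not high_demands and high_latitude:
        return "low-strain job"
    return "passive job"

print(karasek_quadrant(80, 30))  # high demands, low decision latitude
print(karasek_quadrant(75, 80))  # high demands, high decision latitude
```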
Environmental risk factors for mental (ill) health generally result in short-term effects such as changes in mood and affect, like feelings of pleasure, enthusiasm or a depressed mood. These changes are often accompanied by changes in behaviour. We may think of restless behaviour, palliative coping (e.g., drinking) or avoidance, as well as active problem-solving behaviour. These affects and behaviours are generally accompanied by physiological changes as well, indicative of arousal and sometimes also of a disturbed homeostasis. When one or more of these stressors remains active, the short-term, reversible responses may result in more stable, less reversible mental health outcomes like burnout, psychoses or major depressive disorder. Situations that are extremely threatening may even immediately result in chronic mental health disorders (e.g., PTSD) which are difficult to reverse.
Person characteristics may interact with psychosocial risk factors at work and exacerbate or buffer their effects. The (perceived) coping ability may not only moderate or mediate the effects of environmental risk factors, but may also determine the appraisal of the risk factors in the environment. Part of the effect of the environmental risk factors on mental health results from this appraisal process.
Person characteristics (e.g., physical fitness) may not only act as precursors in the development of mental health, but may also change as a result of the effects. Coping ability may, for example, increase as the coping process progresses successfully (“learning”). Long-term mental health problems will, on the other hand, often reduce coping ability and capacity in the long run.
In occupational mental health research, attention has been particularly directed to affective well-being—factors such as job satisfaction, depressive moods and anxiety. The more chronic mental health disorders, resulting from long-term exposure to stressors and to a greater or lesser extent also related to personality disorders, have a much lower prevalence in the working population. These chronic mental health problems have a multitude of causal factors. Occupational stressors will consequently be only partly responsible for the chronic condition. Also, people suffering from these kinds of chronic problem will have great difficulty in maintaining their position at work, and many are on sick leave or have dropped out of work for quite a long period of time (1 year), or even permanently. These chronic problems, therefore, are often studied from a clinical perspective.
Since moods and affects in particular are studied so frequently in the occupational field, we will elaborate on them a little further. Affective well-being has been treated both in a rather undifferentiated way (ranging from feeling good to feeling bad), as well as by considering two dimensions: “pleasure” and “arousal” (figure 2). When variations in arousal are uncorrelated with pleasure, these variations alone are generally not considered to be an indicator of well-being.
Figure 2. Three principal axes for the measurement of affective well-being.
When, however, arousal and pleasure are correlated, four quadrants can be distinguished: (1) high pleasure and high arousal (e.g., enthusiasm); (2) high pleasure and low arousal (comfort); (3) low pleasure and high arousal (e.g., anxiety); and (4) low pleasure and low arousal (e.g., depression).
Well-being can be studied at two levels: a general, context-free level and a context-specific level. The work environment is such a specific context. Data analyses support the general notion that the relation between job characteristics and context-free, non-work mental health is mediated by an effect on work-related mental health. Work-related affective well-being has commonly been studied along the horizontal axis (Figure 2) in terms of job satisfaction. Affects related to comfort in particular have, however, largely been ignored. This is regrettable, since this affect might indicate resigned job satisfaction: people may not complain about their jobs, but may still be apathetic and uninvolved (Warr 1994).
Why Pay Attention to Mental Health Issues?
There are several reasons that illustrate the need for attention to mental health issues. First of all, national statistics of several countries indicate that a lot of people drop out of work because of mental health problems. In the Netherlands, for example, for one-third of those employees who are diagnosed as disabled for work each year, the problem is related to mental health. The majority of this category, 58%, is reported to be work related (Gründemann, Nijboer and Schellart 1991). Together with musculoskeletal problems, mental health problems account for about two-thirds of those who drop out for medical reasons each year.
Mental ill health is an extensive problem in other countries as well. According to the Health and Safety Executive Booklet, it has been estimated that 30 to 40% of all sickness absence from work in the UK is attributable to some form of mental illness (Ross 1989; O’Leary 1993). In the UK, it has been estimated that one in five of the working population suffers each year from some form of mental illness. It is difficult to be precise about the number of working days lost each year because of mental ill health. For the UK, a figure of 90 million certified days—or 30 times that lost as a result of industrial disputes—is widely quoted (O’Leary 1993). This compares with 8 million days lost as a result of alcoholism and drink-related diseases and 35 million days as a result of coronary heart disease and strokes.
Apart from the fact that mental ill health is costly, both in human and financial terms, there is a legal framework provided by the European Union (EU) in its framework directive on health and safety at work (89/391/EEC), enacted in 1993. Although mental health is not as such an element which is central to this directive, a certain amount of attention is given to this aspect of health in Article 6. The framework directive states, among other things, that the employer has:
“a duty to ensure the safety and health of workers in every aspect related to work, following general principles of prevention: avoiding risks, evaluating the risks which cannot be avoided, combating the risks at source, adapting the work to the individual, especially as regards the design of workplaces, the choice of work equipment and the choice of work and production methods, with a view, in particular, to alleviating monotonous work and work at a predetermined work rate and to reduce their effects on health.”
Despite this directive, not all European countries have adopted framework legislation on health and safety. In a study comparing regulations, policies and practices concerning mental health and stress at work in five European countries, those countries with such framework legislation (Sweden, the Netherlands and the UK) recognize mental health issues at work as important health and safety topics, whereas those countries which do not have such a framework (France, Germany) do not recognize mental health issues as important (Kompier et al. 1994).
Last but not least, prevention of mental ill health (at its source) pays. There are strong indications that important benefits result from preventive programmes. For example, of the employers in a national representative sample of companies from three major branches of industry, 69% state that motivation increased; 60%, that absence due to sickness decreased; 49%, that the atmosphere improved; and 40%, that productivity increased as a result of a prevention programme (Houtman et al. 1995).
Occupational Groups at Risk of Mental Ill Health
Are specific groups of the working population at risk of mental health problems? This question cannot be answered in a straightforward manner, since hardly any national or international monitoring systems exist which identify risk factors, mental health consequences or risk groups. Only a scattered picture can be given. In some countries national data exist for the distribution of occupational groups with respect to major risk factors (e.g., for the Netherlands, Houtman and Kompier 1995; for the United States, Karasek and Theorell 1990). The distribution of the occupational groups in the Netherlands on the dimensions of job demands and skill discretion (figure 3) agrees fairly well with the US distribution shown by Karasek and Theorell, for those groups that appear in both samples. In those occupations with a high work pace and/or low skill discretion, the risk of mental health disorders is highest.
Figure 3. Risk for stress and mental ill health for different occupational groups, as determined by the combined effects of work pace and skill discretion.
Also, in some countries there are data on mental health outcomes as related to occupational groups. Occupational groups that are especially prone to drop out for reasons of mental ill health in the Netherlands are those in the service sector, such as health care personnel and teachers, as well as cleaning personnel, housekeepers and occupations in the transport branch (Gründemann, Nijboer and Schellart 1991).
In the United States, occupations which were found to be highly prone to major depressive disorder, as diagnosed with standardized coding systems (i.e., the third edition of the Diagnostic and Statistical Manual of Mental Disorders, DSM-III) (American Psychiatric Association 1980), are juridical employees, secretaries and teachers (Eaton et al. 1990).
Management of Mental Health Problems
The conceptual model (figure 1) suggests at least two targets of intervention in mental health issues: the (work) environment and the characteristics and/or health outcomes of the person.
Primary prevention, the type of prevention that should prevent mental ill health from occurring, should be directed at the precursors by alleviating or managing the risks in the environment and increasing the coping ability and capacity of the individual. Secondary prevention is directed at the maintenance of people at work who already have some form of (mental) health problem. This type of prevention should embrace the primary prevention strategy, accompanied by strategies to make both employees and their supervisors sensitive to signals of early mental ill health in order to reduce the consequences or prevent them from getting worse. Tertiary prevention is directed at the rehabilitation of people who have dropped out of work due to mental health problems. This type of prevention should be directed at adapting the workplace to the possibilities of the individual (which is often found to be quite effective), along with individual counselling and treatment. Table 1 provides a schematic framework for the management of mental health disorders at the workplace. Effective preventive policy plans of organizations should, in principle, take into account all three types of strategy (primary, secondary and tertiary prevention), as well as be directed at risks, consequences and person characteristics.
Table 1. A schematic overview of management strategies on mental health problems, and some examples.
Type of prevention | Work environment | Person characteristics and/or health outcomes
Primary | Redesign of task content; redesign of communication structure | Training groups of employees in signalling and handling specific work-related problems (e.g., how to manage time pressure, robberies, etc.)
Secondary | Introduction of a policy on how to act in case of absenteeism (e.g., training supervisors to discuss absence and return to work with the employees concerned); provision of facilities within the organization, especially for risk groups (e.g., a counsellor for sexual harassment) | Training in relaxation techniques
Tertiary | Adaptation of an individual workplace | Individual counselling; individual treatment or therapy (possibly with medication)
The scheme as presented provides a method for systematic analysis of all possible types of measure. One can discuss whether a certain measure belongs somewhere else in the scheme; such a discussion is, however, not very fruitful, since primary preventive measures often work out positively for secondary prevention as well. The proposed systematic analysis may well result in a large number of potential measures, several of which may be adopted, either as a general aspect of the (health and safety) policy or in a specific case.
In conclusion: Although mental health is not a clearly defined state, process or outcome, it covers a generally agreed upon area of (ill) health. Part of this area can be covered by generally accepted diagnostic criteria (e.g., psychosis, major depressive disorder); the diagnostic nature of other parts is neither as clear nor as generally accepted. Examples of the latter are moods and affects, and also burnout. Despite this, there are many indications that mental (ill) health, including the more vague diagnostic criteria, is a major problem. Its costs are high, both in human and financial terms. In the following articles of this chapter, several mental health disorders—moods and affects (e.g., dissatisfaction), burnout, post-traumatic stress disorder, psychoses, cognitive disorders and substance abuse—will be discussed in much more depth with respect to the clinical picture, available assessment techniques, aetiological agents and factors, and specific prevention and management measures.
A lamp is an energy converter. Although it may carry out secondary functions, its prime purpose is the transformation of electrical energy into visible electromagnetic radiation. There are many ways to create light. The standard method for creating general lighting is the conversion of electrical energy into light.
Types of Light
Incandescence
When solids and liquids are heated, they emit visible radiation at temperatures above 1,000 K; this is known as incandescence.
Such heating is the basis of light generation in filament lamps: an electrical current passes through a thin tungsten wire, whose temperature rises to around 2,500 to 3,200 K, depending upon the type of lamp and its application.
There is a limit to this method, described by Planck’s law for the radiation of a black body, according to which the spectral distribution of the energy radiated changes with temperature. At about 3,600 K and above, there is a marked gain in the emission of visible radiation, and the wavelength of maximum power shifts towards the visible band. This temperature is close to the melting point of tungsten, which is used for the filament, so the practical temperature limit is around 2,700 K, above which filament evaporation becomes excessive. One result of these spectral characteristics is that a large part of the radiation emitted is not given off as light but as heat in the infrared region. Filament lamps can thus be effective heating devices and are used in lamps designed for print drying, food preparation and animal rearing.
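Since the argument above rests on Planck’s law and on the shift of the emission peak with temperature, a short numerical sketch may help. The physical constants are standard; the temperatures and the 550 nm reference wavelength are chosen for illustration only.

```python
import math

# Physical constants (SI units)
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
KB = 1.381e-23     # Boltzmann constant, J/K
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Spectral radiance of a black body (Planck's law)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

def peak_wavelength_nm(temp_k):
    """Wavelength of maximum emission (nm), Wien's displacement law."""
    return WIEN_B / temp_k * 1e9

for temp in (2700, 3200, 3600):
    peak = peak_wavelength_nm(temp)
    # Compare radiance in the middle of the visible band (550 nm) with the peak
    ratio = planck_spectral_radiance(550e-9, temp) / planck_spectral_radiance(peak * 1e-9, temp)
    print(f"{temp} K: peak emission at {peak:.0f} nm (infrared); "
          f"radiance at 550 nm is {ratio:.0%} of the peak value")
```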
Electric discharge
Electrical discharge is a technique used in modern light sources for commerce and industry because of the more efficient production of light. Some lamp types combine the electrical discharge with photoluminescence.
An electric current passed through a gas will excite the atoms and molecules to emit radiation of a spectrum which is characteristic of the elements present. Two metals are commonly used, sodium and mercury, because their characteristics give useful radiations within the visible spectrum. Neither metal emits a continuous spectrum, and discharge lamps have selective spectra. Their colour rendering will never be identical to continuous spectra. Discharge lamps are often classed as high pressure or low pressure, although these terms are only relative, and a high-pressure sodium lamp operates at below one atmosphere.
Types of Luminescence
Photoluminescence occurs when radiation is absorbed by a solid and is then re-emitted at a different wavelength. When the re-emitted radiation is within the visible spectrum the process is called fluorescence or phosphorescence.
Electroluminescence occurs when light is generated by an electric current passed through certain solids, such as phosphor materials. It is used for self-illuminated signs and instrument panels but has not proved to be a practical light source for the lighting of buildings or exteriors.
Evolution of Electric Lamps
Although technological progress has enabled different lamps to be produced, the main factors influencing their development have been external market forces. For example, the production of filament lamps in use at the start of this century was possible only after the availability of good vacuum pumps and the drawing of tungsten wire. However, it was the large-scale generation and distribution of electricity to meet the demand for electric lighting that determined market growth. Electric lighting offered many advantages over gas- or oil-generated light, such as steady light that requires infrequent maintenance as well as the increased safety of having no exposed flame, and no local by-products of combustion.
During the period of recovery after the Second World War, the emphasis was on productivity. The fluorescent tubular lamp became the dominant light source because it made possible the shadow-free and comparatively heat-free lighting of factories and offices, allowing maximum use of the space. The light output and wattage requirements for a typical 1,500 mm fluorescent tubular lamp is given in table 1.
Table 1. Improved light output and wattage requirements of some typical 1,500 mm fluorescent tube lamps
Rating (W) | Diameter (mm) | Gas fill | Light output (lumens)
80 | 38 | argon | 4,800
65 | 38 | argon | 4,900
58 | 25 | krypton | 5,100
50 | 25 | argon | 5,100
During the 1970s oil prices rose and energy costs became a significant part of operating costs. The market demanded fluorescent lamps that would produce the same amount of light with lower electrical consumption, and lamp design was refined in several ways. As the century closes there is a growing awareness of global environmental issues. Better use of declining raw materials, recycling or safe disposal of products and the continuing concern over energy consumption (particularly energy generated from fossil fuels) are influencing current lamp designs.
Performance Criteria
Performance criteria vary by application. In general, there is no particular hierarchy of importance of these criteria.
Light output: The lumen output of a lamp will determine its suitability in relation to the scale of the installation and the quantity of illumination required.
Colour appearance and colour rendering: Separate scales and numerical values apply to colour appearance and colour rendering. It is important to remember that the figures provide guidance only, and some are only approximations. Whenever possible, assessments of suitability should be made with actual lamps and with the colours or materials that apply to the situation.
Lamp life: Most lamps will require replacement several times during the life of the lighting installation, and designers should minimize the inconvenience to the occupants of odd failures and maintenance. Lamps are used in a wide variety of applications. The anticipated average life is often a compromise between cost and performance. For example, the lamp for a slide projector will have a life of a few hundred hours because the maximum light output is important to the quality of the image. By contrast, some roadway lighting lamps may be changed every two years, and this represents some 8,000 burning hours.
Further, lamp life is affected by operating conditions, and thus there is no simple figure that will apply in all conditions. Also, the effective lamp life may be determined by different failure modes. Physical failure such as filament or lamp rupture may be preceded by reduction in light output or changes in colour appearance. Lamp life is affected by external environmental conditions such as temperature, vibration, frequency of starting, supply voltage fluctuations, orientation and so on.
It should be noted that the average life quoted for a lamp type is the time for 50% failures from a batch of test lamps. This definition of life is not likely to be applicable to many commercial or industrial installations; thus practical lamp life is usually less than published values, which should be used for comparison only.
Efficiency: As a general rule the efficiency of a given type of lamp improves as the power rating increases, because most lamps have some fixed loss. However, different types of lamps have marked variation in efficiency. Lamps of the highest efficiency should be used, provided that the criteria of size, colour and lifetime are also met. Energy savings should not be at the expense of the visual comfort or the performance ability of the occupants. Some typical efficacies are given in table 2.
Table 2. Typical lamp efficacies
Lamp type | Efficacy
100 W filament lamp | 14 lumens/watt
58 W fluorescent tube | 89 lumens/watt
400 W high-pressure sodium | 125 lumens/watt
131 W low-pressure sodium | 198 lumens/watt
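The efficacy figures in table 2 translate directly into running costs. The sketch below estimates the electrical power and annual energy needed to deliver a chosen quantity of light with each lamp type; the target lumen output, burning hours and electricity price are assumed example values, and control gear losses are ignored.

```python
# Typical efficacies from table 2 (lumens per watt)
EFFICACY = {
    "100 W filament lamp": 14,
    "58 W fluorescent tube": 89,
    "400 W high-pressure sodium": 125,
    "131 W low-pressure sodium": 198,
}

TARGET_LUMENS = 10_000           # required light output (assumed)
BURNING_HOURS_PER_YEAR = 4_000   # assumed annual operating hours
PRICE_PER_KWH = 0.10             # assumed electricity price per kWh

for lamp, lm_per_w in EFFICACY.items():
    watts = TARGET_LUMENS / lm_per_w                   # lamp power for the target output
    kwh_per_year = watts * BURNING_HOURS_PER_YEAR / 1000
    cost = kwh_per_year * PRICE_PER_KWH
    print(f"{lamp}: {watts:.0f} W, {kwh_per_year:.0f} kWh/year, "
          f"energy cost {cost:.0f} per year")
```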
Main lamp types
Over the years, several nomenclature systems have been developed by national and international standards and registers.
In 1993, the International Electrotechnical Commission (IEC) published a new International Lamp Coding System (ILCOS) intended to replace existing national and regional coding systems. A list of some ILCOS short form codes for various lamps is given in table 3.
Table 3. International Lamp Coding System (ILCOS) short form coding system for some lamp types
Type (code) | Common ratings (watts) | Colour rendering | Colour temperature (K) | Life (hours)
Compact fluorescent lamps (FS) | 5–55 | good | 2,700–5,000 | 5,000–10,000
High-pressure mercury lamps (QE) | 80–750 | fair | 3,300–3,800 | 20,000
High-pressure sodium lamps (S-) | 50–1,000 | poor to good | 2,000–2,500 | 6,000–24,000
Incandescent lamps (I) | 5–500 | good | 2,700 | 1,000–3,000
Induction lamps (XF) | 23–85 | good | 3,000–4,000 | 10,000–60,000
Low-pressure sodium lamps (LS) | 26–180 | monochromatic yellow colour | 1,800 | 16,000
Low-voltage tungsten halogen lamps (HS) | 12–100 | good | 3,000 | 2,000–5,000
Metal halide lamps (M-) | 35–2,000 | good to excellent | 3,000–5,000 | 6,000–20,000
Tubular fluorescent lamps (FD) | 4–100 | fair to good | 2,700–6,500 | 10,000–15,000
Tungsten halogen lamps (HS) | 100–2,000 | good | 3,000 | 2,000–4,000
Incandescent lamps
These lamps use a tungsten filament in an inert gas or vacuum with a glass envelope. The inert gas suppresses tungsten evaporation and lessens the envelope blackening. There is a large variety of lamp shapes, which are largely decorative in appearance. The construction of a typical General Lighting Service (GLS) lamp is given in figure 1.
Figure 1. Construction of a GLS lamp
Incandescent lamps are also available with a wide range of colours and finishes. The ILCOS codes and some typical shapes include those shown in table 4.
Table 4. Common colours and shapes of incandescent lamps, with their ILCOS codes
Colour/Shape | Code
Clear | /C
Frosted | /F
White | /W
Red | /R
Blue | /B
Green | /G
Yellow | /Y
Pear shaped (GLS) | IA
Candle | IB
Conical | IC
Globular | IG
Mushroom | IM
Incandescent lamps are still popular for domestic lighting because of their low cost and compact size. However, for commercial and industrial lighting the low efficacy generates very high operating costs, so discharge lamps are the normal choice. A 100 W lamp has a typical efficacy of 14 lumens/watt compared with 96 lumens/watt for a 36 W fluorescent lamp.
Incandescent lamps are simple to dim by reducing the supply voltage, and are still used where dimming is a desired control feature.
The tungsten filament is a compact light source, easily focused by reflectors or lenses. Incandescent lamps are useful for display lighting where directional control is needed.
Tungsten halogen lamps
These are similar to incandescent lamps and produce light in the same manner from a tungsten filament. However the bulb contains halogen gas (bromine or iodine) which is active in controlling tungsten evaporation. See figure 2.
Figure 2. The halogen cycle
Fundamental to the halogen cycle is a minimum bulb wall temperature of 250 °C to ensure that the tungsten halide remains in a gaseous state and does not condense on the bulb wall. This temperature means that the bulbs must be made from quartz in place of glass. With quartz it is possible to reduce the bulb size.
Most tungsten halogen lamps have an improved life over incandescent equivalents and the filament is at a higher temperature, creating more light and whiter colour.
Tungsten halogen lamps have become popular where small size and high performance are the main requirement. Typical examples are stage lighting, including film and TV, where directional control and dimming are common requirements.
Low-voltage tungsten halogen lamps
These were originally designed for slide and film projectors. At 12 V the filament for the same wattage as 230 V becomes smaller and thicker. This can be more efficiently focused, and the larger filament mass allows a higher operating temperature, increasing light output. The thick filament is more robust. These benefits were realized as being useful for the commercial display market, and even though it is necessary to have a step-down transformer, these lamps now dominate shop-window lighting. See figure 3.
Figure 3. Low-voltage dichroic reflector lamp
Although users of film projectors want as much light as possible, too much heat damages the transparency medium. A special type of reflector has been developed which reflects only the visible radiation, allowing infrared radiation (heat) to pass through the back of the lamp. This feature is now part of many low-voltage reflector lamps for display lighting as well as projector equipment.
Voltage sensitivity: All filament lamps are sensitive to voltage variation, and light output and life are affected. The move to “harmonize” the supply voltage throughout Europe at 230 V is being achieved by widening the tolerances to which the generating authorities can operate. The move is towards ±10%, which is a voltage range of 207 to 253 V. Incandescent and tungsten halogen lamps cannot be operated sensibly over this range, so it will be necessary to match actual supply voltage to lamp ratings. See figure 4.
Figure 4. GLS filament lamps and supply voltage
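The sensitivity described above can be illustrated numerically. The exponents in the sketch below are rule-of-thumb approximations commonly quoted for GLS filament lamps (light output varying roughly as the 3.4th power of voltage, power as the 1.6th power and life inversely as about the 13th power); they are assumptions for illustration, not figures taken from this article.

```python
# Rule-of-thumb exponents often quoted for GLS filament lamps
# (assumed approximations, not figures from this article):
#   light output ~ (V/Vr)^3.4,  power ~ (V/Vr)^1.6,  life ~ (V/Vr)^-13
RATED_VOLTAGE = 230.0

def filament_lamp_at(voltage, rated=RATED_VOLTAGE):
    ratio = voltage / rated
    return {
        "light output (% of rated)": 100 * ratio ** 3.4,
        "power (% of rated)": 100 * ratio ** 1.6,
        "life (% of rated)": 100 * ratio ** -13,
    }

for v in (207, 230, 253):   # 230 V +/- 10%, the harmonized tolerance band
    figures = {k: round(x) for k, x in filament_lamp_at(v).items()}
    print(f"{v} V: {figures}")
```

The sketch makes the practical point of the text: a lamp run 10% above its rated voltage gives more light but a small fraction of its rated life, while one run 10% below lasts far longer but is noticeably dimmer.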
Discharge lamps will also be affected by this wide voltage variation, so the correct specification of control gear becomes important.
Tubular fluorescent lamps
These are low pressure mercury lamps and are available as “hot cathode” and “cold cathode” versions. The former is the conventional fluorescent tube for offices and factories; “hot cathode” relates to the starting of the lamp by pre-heating the electrodes to create sufficient ionization of the gas and mercury vapour to establish the discharge.
Cold cathode lamps are mainly used for signage and advertising. See figure 5.
Figure 5. Principle of fluorescent lamp
Fluorescent lamps require external control gear for starting and to control the lamp current. In addition to the small amount of mercury vapour, there is a starting gas (argon or krypton).
The low pressure of mercury generates a discharge of pale blue light. The major part of the radiation is in the UV region at 254 nm, a characteristic emission wavelength for mercury. The inside of the tube wall carries a thin phosphor coating, which absorbs the UV and re-radiates the energy as visible light. The colour quality of the light is determined by the phosphor coating. A range of phosphors is available, with varying colour appearance and colour rendering.
During the 1950s phosphors available offered a choice of reasonable efficacy (60 lumens/watt) with light deficient in reds and blues, or improved colour rendering from “deluxe” phosphors of lower efficiency (40 lumens/watt).
By the 1970s new, narrow-band phosphors had been developed. These separately radiated red, blue and green light but, combined, produced white light. Adjusting the proportions gave a range of different colour appearances, all with similar excellent colour rendering. These tri-phosphors are more efficient than the earlier types and represent the best economic lighting solution, even though the lamps are more expensive. Improved efficacy reduces operating and installation costs.
The tri-phosphor principle has been extended by multi-phosphor lamps where critical colour rendering is necessary, such as for art galleries and industrial colour matching.
The modern narrow-band phosphors are more durable, have better lumen maintenance, and increase lamp life.
Compact fluorescent lamps
The fluorescent tube is not a practical replacement for the incandescent lamp because of its linear shape. Small, narrow-bore tubes can be configured to approximately the same size as the incandescent lamp, but this imposes a much higher electrical loading on the phosphor material. The use of tri-phosphors is essential to achieve acceptable lamp life. See figure 6.
Figure 6. Four-leg compact fluorescent
All compact fluorescent lamps use tri-phosphors, so, when they are used together with linear fluorescent lamps, the latter should also be tri-phosphor to ensure colour consistency.
Some compact lamps include the operating control gear to form retro-fit devices for incandescent lamps. The range is increasing and enables easy upgrading of existing installations to more energy-efficient lighting. These integral units are not suitable for dimming where that was part of the original controls.
High-frequency electronic control gear: If the normal supply frequency of 50 or 60 Hz is increased to 30 kHz, there is a 10% gain in efficacy of fluorescent tubes. Electronic circuits can operate individual lamps at such frequencies. The electronic circuit is designed to provide the same light output as wire-wound control gear, from reduced lamp power. This offers compatibility of lumen package with the advantage that reduced lamp loading will increase lamp life significantly. Electronic control gear is capable of operating over a range of supply voltages.
There is no common standard for electronic control gear, and lamp performance may differ from the published information issued by the lamp makers.
The use of high-frequency electronic gear removes the normal problem of flicker, to which some occupants may be sensitive.
Induction lamps
Lamps using the principle of induction have recently appeared on the market. They are low-pressure mercury lamps with tri-phosphor coating and as light producers are similar to fluorescent lamps. The energy is transferred to the lamp by high-frequency radiation, at approximately 2.5 MHz from an antenna positioned centrally within the lamp. There is no physical connection between the lamp bulb and the coil. Without electrodes or other wire connections the construction of the discharge vessel is simpler and more durable. Lamp life is mainly determined by the reliability of the electronic components and the lumen maintenance of the phosphor coating.
High-pressure mercury lamps
High-pressure discharges are more compact and have higher electrical loads; therefore, they require quartz arc tubes to withstand the pressure and temperature. The arc tube is contained in an outer glass envelope with a nitrogen or argon-nitrogen atmosphere to reduce oxidation and arcing. The bulb effectively filters the UV radiation from the arc tube. See figure 7.
Figure 7. Mercury lamp construction
At high pressure, the mercury discharge is mainly blue and green radiation. To improve the colour, a phosphor coating on the outer bulb adds red light. There are deluxe versions with an increased red content, which give higher light output and improved colour rendering.
All high-pressure discharge lamps take time to reach full output. The initial discharge is via the conducting gas fill, and the metal evaporates as the lamp temperature increases.
At the stable pressure the lamp will not immediately restart without special control gear. There is a delay while the lamp cools sufficiently and the pressure reduces, so that the normal supply voltage or ignitor circuit is adequate to re-establish the arc.
Discharge lamps have a negative resistance characteristic, so external control gear is necessary to control the current. There are losses in these control gear components, so the user should consider the total circuit watts when assessing operating costs and the electrical installation. There is an exception for high-pressure mercury lamps: one type contains a tungsten filament which both acts as the current-limiting device and adds warm colours to the blue/green discharge. This enables the direct replacement of incandescent lamps.
Although mercury lamps have a long life of about 20,000 hours, the light output will fall to about 55% of the initial output at the end of this period, and therefore the economic life can be shorter.
Metal halide lamps
The colour and light output of mercury discharge lamps can be improved by adding different metals to the mercury arc. For each lamp the dose is small, and for accurate application it is more convenient to handle the metals in powder form as halides. The halide breaks down as the lamp warms up and releases the metal.
A metal halide lamp can use a number of different metals, each of which gives off a specific characteristic colour.
There is no standard mixture of metals, so metal halide lamps from different manufacturers may not be compatible in appearance or operating performance. For lamps with the lower wattage ratings, 35 to 150 W, there is closer physical and electrical compatibility with a common standard.
Metal halide lamps require control gear, but the lack of compatibility means that it is necessary to match each combination of lamp and gear to ensure correct starting and running conditions.
Low-pressure sodium lamps
The arc tube is similar in size to the fluorescent tube but is made of special two-ply glass with an inner sodium-resistant layer. The arc tube is formed in a narrow “U” shape and is contained in an outer vacuum jacket to ensure thermal stability. During starting, the lamps have a strong red glow from the neon gas fill.
The characteristic radiation from low-pressure sodium vapour is a monochromatic yellow. This is close to the peak sensitivity of the human eye, and low-pressure sodium lamps are the most efficient lamps available at nearly 200 lumens/watt. However the applications are limited to where colour discrimination is of no visual importance, such as trunk roads and underpasses, and residential streets.
In many situations these lamps are being replaced by high-pressure sodium lamps. Their smaller size offers better optical control, particularly for roadway lighting where there is growing concern over excessive sky glow.
High-pressure sodium lamps
These lamps are similar to high-pressure mercury lamps but offer better efficacy (over 100 lumens/watt) and excellent lumen maintenance. The reactive nature of sodium requires the arc tube to be manufactured from translucent polycrystalline alumina, as glass or quartz are unsuitable. The outer glass bulb contains a vacuum to prevent arcing and oxidation. There is no UV radiation from the sodium discharge so phosphor coatings are of no value. Some bulbs are frosted or coated to diffuse the light source. See figure 8.
Figure 8. High-pressure sodium lamp construction
As the sodium pressure is increased, the radiation becomes a broad band around the yellow peak, and the appearance is golden white. However, as the pressure increases, the efficacy decreases. There are currently three separate types of high-pressure sodium lamps available, as shown in table 5.
Table 5. Types of high-pressure sodium lamp
Lamp type (code) | Colour (K) | Efficacy (lumens/watt) | Life (hours)
Standard | 2,000 | 110 | 24,000
Deluxe | 2,200 | 80 | 14,000
White (SON) | 2,500 | 50 |
Generally the standard lamps are used for exterior lighting, deluxe lamps for industrial interiors, and White SON for commercial/display applications.
Dimming of Discharge Lamps
The high-pressure lamps cannot be satisfactorily dimmed, as changing the lamp power changes the pressure and thus the fundamental characteristics of the lamp.
Fluorescent lamps can be dimmed using high-frequency supplies generated typically within the electronic control gear. The colour appearance remains very constant. In addition, the light output is approximately proportional to the lamp power, with consequent saving in electrical power when the light output is reduced. By integrating the light output from the lamp with the prevailing level of natural daylight, a near constant level of illuminance can be provided in an interior.
Ionization is one of the techniques used to eliminate particulate matter from air. Ions act as condensation nuclei for small particles which, as they stick together, grow and precipitate.
The concentration of ions in closed indoor spaces is, as a general rule and if there are no additional sources of ions, lower than that of open spaces. Hence the belief that increasing the concentration of negative ions in indoor air improves air quality.
Some studies based on epidemiological data and on planned experimental research assert that increasing the concentration of negative ions in work environments leads to improved worker efficiency and enhances the mood of employees, while positive ions have an adverse effect. However, parallel studies show that the existing data on the effects of negative ionization on workers’ productivity are inconsistent and contradictory. Therefore, it seems that it is still not possible to assert unequivocally that the generation of negative ions is really beneficial.
Natural Ionization
Individual gas molecules in the atmosphere can ionize negatively by gaining, or positively by losing, an electron. For this to occur, a given molecule must first gain enough energy—usually called the ionization energy of that particular molecule. Many sources of energy of both cosmic and terrestrial origin occur in nature that are capable of producing this phenomenon: background radiation in the atmosphere; electromagnetic solar radiation (especially ultraviolet); cosmic rays; the atomization of liquids, such as the spray caused by waterfalls; the movement of great masses of air over the earth’s surface; electrical phenomena such as lightning and storms; the process of combustion; and radioactive substances.
The electrical configurations of the ions that are formed this way, while not yet completely known, seem to include carbonate ions and H+, H3O+, O+, N+, OH–, H2O– and O2–. These ionized molecules can aggregate through adsorption on suspended particles (fog, silica and other contaminants). Ions are classified according to their size and their mobility. The latter is defined as a velocity in an electrical field, expressed in units such as centimetres per second per volt per centimetre (cm/s per V/cm) or, more compactly, cm2/V·s.
Atmospheric ions tend to disappear by recombination. Their half-life depends on their size and is inversely proportional to their mobility. Negative ions are statistically smaller, and their half-life is several minutes, while positive ions are larger, with a half-life of about half an hour. The spatial charge is the quotient of the concentration of positive ions and the concentration of negative ions. The value of this ratio is generally greater than one and depends on factors such as climate, location and season of the year. In living spaces this coefficient can have values lower than one. Characteristics are given in table 1.
Table 1. Characteristics of ions of given mobilities and diameter
Mobility (cm2/V·s) | Diameter (μm) | Characteristics
3.0–0.1 | 0.001–0.003 | Small, high mobility, short life
0.1–0.005 | 0.003–0.03 | Intermediate, slower than small ions
0.005–0.002 | >0.03 | Slow ions, aggregates on particulate matter
Artificial Ionization
Human activity modifies the natural ionization of air. Artificial ionization can be caused by industrial and nuclear processes and fires. Particulate matter suspended in air favours the formation of Langevin ions (ions aggregated on particulate matter). Electrical radiators increase the concentration of positive ions considerably. Air-conditioners also increase the spatial charge of indoor air.
Workplaces have machinery that produces positive and negative ions simultaneously, as in the case of machines that are important local sources of mechanical energy (presses, spinning and weaving machines), electrical energy (motors, electronic printers, copiers, high-voltage lines and installations), electromagnetic energy (cathode-ray screens, televisions, computer monitors) or radioactive energy (cobalt-60 therapy). These kinds of equipment create environments with higher concentrations of positive ions, because positive ions have a longer half-life than negative ions.
Environmental Concentrations of Ions
Concentrations of ions vary with environmental and meteorological conditions. In areas with little pollution, such as in forests and mountains, or at great altitudes, the concentration of small ions grows; in areas close to radioactive sources, waterfalls, or river rapids the concentrations can reach thousands of small ions per cubic centimetre. In the proximity of the sea and when the levels of humidity are high, on the other hand, there is an excess of large ions. In general, the average concentration of negative and positive ions in clean air is 500 and 600 ions per cubic centimetre respectively.
Some winds can carry great concentrations of positive ions—the Föhn in Switzerland, the Santa Ana in the United States, the Sirocco in North Africa, the Chinook in the Rocky Mountains and the Sharav in the Middle East.
In workplaces where there are no significant ionizing factors there is often an accumulation of large ions. This is especially true, for example, in places that are hermetically sealed and in mines. The concentration of negative ions decreases significantly in indoor spaces and in contaminated areas or areas that are dusty. There are many reasons why the concentration of negative ions also decreases in indoor spaces that have air-conditioning systems. One reason is that negative ions remain trapped in air ducts and air filters or are attracted to surfaces that are positively charged. Cathode-ray screens and computer monitors, for example, are positively charged, creating in their immediate vicinity a microclimate deficient in negative ions. Air filtration systems designed for “clean rooms” that require that levels of contamination with particulate matter be kept at a very low minimum seem also to eliminate negative ions.
On the other hand, an excess of humidity condenses ions, while a lack of it creates dry environments with large amounts of electrostatic charges. These electrostatic charges accumulate in plastic and synthetic fibres, both in the room and on people.
Ion Generators
Generators ionize air by delivering a large amount of energy. This energy may come from a radioactive source (such as tritium) or from a source of electricity by the application of a high voltage to a sharply pointed electrode. Radioactive sources are forbidden in most countries because of the secondary problems of radioactivity.
Electric generators consist of a pointed electrode surrounded by a ring; the electrode is supplied with a negative voltage of thousands of volts, and the ring is grounded. Negative ions are expelled while positive ions are attracted to the generator. The amount of negative ions generated increases in proportion to the voltage applied and to the number of electrodes in the generator. Generators that have a greater number of electrodes and use a lower voltage are safer, because when the voltage exceeds 8,000 to 10,000 volts the generator will produce not only ions, but also ozone and some nitrogen oxides. The dissemination of ions is achieved by electrostatic repulsion.
The migration of ions will depend on the alignment of the electric field generated between the emission point and the objects that surround it. The concentration of ions surrounding the generators is not homogeneous and diminishes significantly as the distance from them increases. Fans installed in this equipment will increase the ionic dispersion zone. It is important to remember that the active elements of the generators need to be cleaned periodically to ensure proper functioning.
The generators may also be based on atomizing water, on thermoelectric effects or on ultraviolet rays. There are many different types and sizes of generators. They may be installed on ceilings and walls or may be placed anywhere if they are the small, portable type.
Measuring Ions
Ion measuring devices are made by placing two conductive plates 0.75 cm apart and applying a variable voltage. The collected ions produce a current that is measured with a picoammeter, and the intensity of the current is registered. Variable voltages permit the measurement of concentrations of ions with different mobilities. The concentration of ions (N) is calculated from the intensity of the electrical current generated, using the formula N = I / (q × V × A),
where I is the current in amperes, V is the speed of the air flow, q is the charge of a univalent ion (1.6 × 10–19 coulombs) and A is the effective area of the collector plates. It is assumed that all ions carry a single charge and that they are all retained in the collector. It should be kept in mind that this method has its limitations due to background current and the influence of other factors such as humidity and fields of static electricity.
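A minimal sketch of this calculation is given below; the instrument current, air speed and plate area are made-up example values, not figures from the text.

```python
ELEMENTARY_CHARGE = 1.6e-19  # coulombs, charge of a univalent ion

def ion_concentration(current_a, air_speed_m_s, plate_area_m2):
    """Ion concentration (ions per cm^3) from N = I / (q * V * A).

    Assumes every ion carries a single elementary charge and that all
    ions are retained by the collector plates, as described in the text.
    """
    n_per_m3 = current_a / (ELEMENTARY_CHARGE * air_speed_m_s * plate_area_m2)
    return n_per_m3 / 1e6   # convert from ions/m^3 to ions/cm^3

# Illustrative (made-up) instrument values:
current = 2e-12     # 2 pA read on the picoammeter
air_speed = 2.0     # m/s of air drawn through the collector
area = 0.004        # m^2 effective plate area

print(f"{ion_concentration(current, air_speed, area):.0f} ions/cm^3")
```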
The Effects of Ions on the Body
Small negative ions are the ones which are supposed to have the greatest biological effect because of their greater mobility. High concentrations of negative ions can kill or block the growth of microscopic pathogens, but no adverse effects on humans have been described.
Some studies suggest that exposure to high concentrations of negative ions produces biochemical and physiological changes in some people that have a relaxing effect, reduce tension and headaches, improve alertness and cut reaction time. These effects could be due to the suppression of the neural hormone serotonin (5-HT) and of histamine in environments loaded with negative ions; these factors could affect a hypersensitive segment of the population. However, other studies reach different conclusions on the effects of negative ions on the body. Therefore, the benefits of negative ionization are still open to debate and further study is needed before the matter is decided.
With regard to heating, a given person’s needs will depend on many factors. They can be classified into two main groups, those related to the surroundings and those related to human factors. Among those related to the surroundings one might count geography (latitude and altitude), climate, the type of exposure of the space the person is in, or the barriers that protect the space against the external environment, etc. Among the human factors are the worker’s energy consumption, the pace of work or the amount of exertion needed for the job, the clothing or garments used against the cold and personal preferences or tastes.
The need for heating is seasonal in many regions, but this does not mean that heating is dispensable during the cold season. Cold environmental conditions affect health, mental and physical efficiency, precision and occasionally may increase the risk of accidents. The goal of a heating system is to maintain pleasant thermal conditions that will prevent or minimize adverse health effects.
The physiological characteristics of the human body allow it to withstand great variations in thermal conditions. Human beings maintain their thermal balance through the hypothalamus, by means of thermal receptors in the skin; body temperature is kept between 36 and 38°C as shown in figure 1.
Figure 1. Thermoregulatory mechanisms in human beings
Heating systems need to have very precise control mechanisms, especially in cases where workers carry out their tasks in a sitting or a fixed position that does not stimulate blood circulation to their extremities. Where the work performed allows a certain mobility, the control of the system may be somewhat less precise. Finally, where the work performed takes place in abnormally adverse conditions, as in refrigerated chambers or in very cold climatic conditions, support measures may be undertaken to protect special tissues, to regulate the time spent under those conditions or to supply heat by electrical systems incorporated into the worker’s garments.
Definition and Description of the Thermal Environment
A requirement that can be demanded of any properly functioning heating or air conditioning system is that it should allow for control of the variables that define the thermal environment, within specified limits, for each season of the year. These variables are the air temperature, the average radiant temperature of the surrounding surfaces, the humidity of the air and the speed of the air flow.
It has been shown that there is a very simple relation between the temperature of the air and of the wall surfaces of a given space, and the temperatures that provide the same perceived thermal sensation in a different room. This relation can be expressed as

Teat = (Tdbt + Tast) / 2
where
Teat = equivalent air temperature for a given thermal sensation
Tdbt = air temperature measured with a dry bulb thermometer
Tast = measured average surface temperature of the walls.
For example, if in a given space the air and the walls are at 20°C, the equivalent temperature will be 20°C, and the perceived sensation of heat will be the same as in a room where the average temperature of the walls is 15°C and the air temperature is 25°C, because that room would have the same equivalent temperature. From the standpoint of temperature, the perceived sensation of thermal comfort would be the same.
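The relation and the worked example above can be written directly in code; this is a trivial sketch of the averaging formula given earlier.

```python
def equivalent_temperature(air_temp_c, mean_wall_temp_c):
    """Equivalent air temperature: Teat = (Tdbt + Tast) / 2."""
    return (air_temp_c + mean_wall_temp_c) / 2

# The two rooms from the example give the same equivalent temperature:
print(equivalent_temperature(20, 20))  # 20.0
print(equivalent_temperature(25, 15))  # 20.0
```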
Properties of humid air
In implementing an air-conditioning plan, three things that must be taken into consideration are the thermodynamic state of the air in the given space, of the air outside, and of the air that will be supplied to the room. The selection of a system capable of transforming the thermodynamic properties of the air supplied to the room will then be based on the existing thermal loads of each component. We therefore need to know the thermodynamic properties of humid air. They are as follows:
Tdbt = the dry bulb temperature reading, measured with a thermometer insulated from radiated heat
Tdpt = the dew point temperature reading. This is the temperature at which nonsaturated moist air reaches the saturation point
W = the humidity ratio, which ranges from zero for dry air to Ws for saturated air. It is expressed as kg of water vapour per kg of dry air
RH = relative humidity
t* = the thermodynamic wet bulb temperature
v = specific volume of air and water vapour (expressed in units of m3/kg). It is the inverse of density
H = enthalpy, kcal/kg of dry air and associated water vapour.
Of the above variables, only three are directly measurable: the dry bulb temperature reading, the dew point temperature reading and the relative humidity. There is a fourth variable that is experimentally measurable, defined as the wet bulb temperature. The wet bulb temperature is measured with a thermometer whose bulb has been moistened and which is moved, typically with the aid of a sling, through nonsaturated moist air at a moderate speed. This variable differs by an insignificant amount (about 3 per cent) from the thermodynamic wet bulb temperature, so the two can be used interchangeably in calculations without introducing much error.
Psychrometric diagram
The properties defined in the previous section are functionally related and can be portrayed in graphic form. This graphic representation is called a psychrometric diagram. It is a simplified graph derived from tables of the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE). Enthalpy and the degree of humidity are shown on the coordinates of the diagram; the lines drawn show dry and humid temperatures, relative humidity and specific volume. With the psychrometric diagram, knowing any two of the aforementioned variables enables you to derive all the properties of humid air.
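As an illustration of how these properties are linked, the sketch below derives the humidity ratio, dew point and enthalpy of moist air from a dry bulb temperature and relative humidity. The Magnus-type saturation pressure approximation and the assumed standard atmospheric pressure are conventional engineering formulas used here for illustration, not values taken from the ASHRAE tables mentioned above.

```python
import math

P_ATM = 101_325.0  # standard atmospheric pressure, Pa (assumed)

def saturation_pressure_pa(t_c):
    """Saturation vapour pressure over water (Pa), Magnus-type approximation."""
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_ratio(t_dbt_c, rh_percent, p_atm=P_ATM):
    """W, kg of water vapour per kg of dry air."""
    p_w = rh_percent / 100.0 * saturation_pressure_pa(t_dbt_c)
    return 0.622 * p_w / (p_atm - p_w)

def dew_point_c(t_dbt_c, rh_percent):
    """Tdpt: the temperature at which the actual vapour pressure would saturate."""
    p_w = rh_percent / 100.0 * saturation_pressure_pa(t_dbt_c)
    x = math.log(p_w / 611.2)
    return 243.12 * x / (17.62 - x)   # inverse of the Magnus formula

def enthalpy_kj_per_kg(t_dbt_c, w):
    """H of moist air per kg of dry air (kJ/kg); divide by 4.186 for kcal/kg."""
    return 1.006 * t_dbt_c + w * (2501.0 + 1.86 * t_dbt_c)

t, rh = 22.0, 50.0   # example state: 22 degC dry bulb, 50% relative humidity
w = humidity_ratio(t, rh)
print(f"W = {w:.4f} kg/kg, Tdpt = {dew_point_c(t, rh):.1f} degC, "
      f"H = {enthalpy_kj_per_kg(t, w):.1f} kJ/kg")
```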
Conditions for thermal comfort
Thermal comfort is defined as a state of mind that expresses satisfaction with the thermal environment. It is influenced by physical and physiological factors.
It is difficult to prescribe general conditions that should be met for thermal comfort because conditions differ in various work situations; different conditions could even be required for the same work post when it is occupied by different people. A technical norm for thermal conditions required for comfort cannot be applied to all countries because of the different climatic conditions and their different customs governing dress.
Studies have been carried out with workers who do light manual labour, establishing a series of criteria for temperature, speed and humidity that are shown in table 1 (Bedford and Chrenko 1974).
Table 1. Proposed norms for environmental factors
Environmental factor | Proposed norm
Air temperature | 21 °C
Average radiant temperature | ≥ 21 °C
Relative humidity | 30–70%
Speed of air flow | 0.05–0.1 metre/second
Temperature gradient (from head to foot) | ≤ 2.5 °C
The above factors are interrelated. Generally, the air temperature should be increased where the speed of the air flow is greater or the average radiant temperature is lower, and it should be decreased where there is high thermal radiation.
For a good sensation of thermal comfort the most desirable situation is one where the mean radiant temperature is slightly higher than the air temperature, and where the flow of radiant thermal energy is the same in all directions and is not excessive overhead. The vertical temperature gradient should be kept small, so that the feet are warm without creating too much of a thermal load at head level. The speed of the air flow also has an important bearing on the sensation of thermal comfort, and diagrams are available that give recommended air speeds as a function of the activity being carried out and the kind of clothing worn (figure 2).
Figure 2. Comfort zones based on readings of overall temperatures and speed of air currents
In some countries there are norms for minimal environmental temperatures, but optimal values have not yet been established; typically, the maximum value for air temperature is given as 20 °C. With recent technical improvements, the complexity of measuring thermal comfort has increased. Many indexes have appeared, including the effective temperature index (ET) and the corrected effective temperature index (CET); the index of caloric overload; the heat stress index (HSI); the wet bulb globe temperature (WBGT); and the Fanger predicted mean vote index (PMV), among others. The WBGT index allows the determination of the rest intervals required as a function of the intensity of the work performed so as to preclude thermal stress under working conditions. This is discussed more fully in the chapter "Heat and Cold".
Thermal comfort zone in a psychrometric diagram
The range on the psychrometric diagram corresponding to conditions under which an adult perceives thermal comfort has been carefully studied and is defined in the ASHRAE norm based on the effective temperature. The effective temperature is defined as the temperature, measured with a dry bulb thermometer, of a uniform room at 50 per cent relative humidity in which people would exchange the same amount of heat by radiation, convection and evaporation as they do at the humidity of the actual environment. The ASHRAE scale of effective temperature is defined for a clothing level of 0.6 clo (the clo is a unit of clothing insulation; 1 clo corresponds to the insulation provided by a normal set of clothes and represents a thermal resistance of 0.155 m2 K/W), for an air movement of 0.2 m/s (at rest), and for an exposure of one hour at a sedentary activity of 1 met (the unit of metabolic rate, about 50 kcal/m2h or 58 W/m2). This comfort zone is shown in figure 2 and can be used for thermal environments where the radiant temperature is approximately the same as the dry bulb temperature and where the speed of air flow is below 0.2 m/s for people dressed in light clothing and carrying out sedentary activities.
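The two units mentioned above convert directly into SI quantities; the short sketch below performs the conversion. The constants (1 clo = 0.155 m2 K/W; 1 met roughly 58 W/m2, or about 50 kcal/m2h) are those quoted in this article and in the footnote to table 2; the function names are illustrative.

```python
CLO_TO_SI = 0.155   # 1 clo = 0.155 m2.K/W of clothing insulation
MET_TO_SI = 58.0    # 1 met is roughly 58 W/m2 of body surface (about 50 kcal/m2h)

def clothing_resistance(clo):
    """Thermal resistance of a clothing ensemble (m2.K/W) from a value in clo."""
    return clo * CLO_TO_SI

def metabolic_rate(met):
    """Metabolic rate per unit of body surface (W/m2) from a value in met."""
    return met * MET_TO_SI

print(round(clothing_resistance(0.6), 3))   # 0.093 m2.K/W for the 0.6 clo reference ensemble
print(metabolic_rate(1.0))                  # 58 W/m2 for sedentary activity
```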
Comfort formula: The Fanger method
The method developed by PO Fanger is based on a formula that relates variables of ambient temperature, average radiant temperature, relative speed of air flow, pressure of water vapour in ambient air, level of activity and thermal resistance of the clothing worn. An example derived from the comfort formula is shown in table 2, which can be used in practical applications for obtaining a comfortable temperature as a function of the clothing worn, the metabolic rate of the activity carried out and the speed of the air flow.
Table 2. Temperatures of thermal comfort (°C), at 50% relative humidity (based on the formula by PO Fanger)

Metabolism (Watts): 105
Clothing (clo) | Radiating temperature 20 °C | 25 °C | 30 °C
0.5 | 30.5 | 29.0 | 27.0
1.5 | 30.6 | 29.5 | 28.3
0.5 | 26.7 | 24.3 | 22.7
1.5 | 27.0 | 25.7 | 24.5

Metabolism (Watts): 157
Clothing (clo) | Radiating temperature 20 °C | 25 °C | 30 °C
0.5 | 23.0 | 20.7 | 18.3
1.5 | 23.5 | 23.3 | 22.0
0.5 | 16.0 | 14.0 | 11.5
1.5 | 18.3 | 17.0 | 15.7

Metabolism (Watts): 210
Clothing (clo) | Radiating temperature 20 °C | 25 °C | 30 °C
0.5 | 15.0 | 13.0 | 7.4
1.5 | 18.3 | 17.0 | 16.0
0.5 | –1.5 | –3.0 | /
1.5 | –5.0 | 2.0 | 1.0
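For readers who wish to reproduce values of this kind, the sketch below implements Fanger's comfort equations in the iterative form in which they are commonly published (for example, in ISO 7730), returning the predicted mean vote (PMV) and the predicted percentage of dissatisfied (PPD). It is a sketch only: the coefficients should be checked against an authoritative text before any practical use, and the example input values are illustrative.

```python
import math

def fanger_pmv_ppd(ta, tr, vel, rh, met=1.2, clo=0.5, wme=0.0):
    """Predicted mean vote (PMV) and predicted percentage of dissatisfied (PPD),
    computed iteratively from Fanger's comfort equation.
    ta = air temperature (deg C), tr = mean radiant temperature (deg C),
    vel = relative air speed (m/s), rh = relative humidity (%),
    met = metabolic rate (met), clo = clothing insulation (clo),
    wme = external work (met), normally zero."""
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo              # clothing resistance, m2.K/W
    m = met * 58.15                # metabolic rate, W/m2
    w = wme * 58.15
    mw = m - w
    fcl = 1.05 + 0.645 * icl if icl > 0.078 else 1.0 + 1.29 * icl
    hcf = 12.1 * math.sqrt(vel)    # forced-convection coefficient
    taa, tra = ta + 273.0, tr + 273.0

    # Iterative solution for the clothing surface temperature
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn = tcla / 100.0
    xf = tcla / 50.0
    hc = hcf
    for _ in range(150):
        if abs(xn - xf) <= 0.00015:
            break
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25   # natural-convection coefficient
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
    tcl = 100.0 * xn - 273.0

    # Heat losses from the body (W/m2)
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)            # skin diffusion
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0     # sweating
    hl3 = 1.7e-5 * m * (5867.0 - pa)                     # latent respiration
    hl4 = 0.0014 * m * (34.0 - ta)                       # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)    # radiation
    hl6 = fcl * hc * (tcl - ta)                          # convection

    ts = 0.303 * math.exp(-0.036 * m) + 0.028
    pmv = ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
    ppd = 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)
    return pmv, ppd

# Example: 23 deg C air and radiant temperature, 0.1 m/s, 50% RH,
# sedentary activity (1.2 met) in light clothing (0.5 clo)
pmv, ppd = fanger_pmv_ppd(23.0, 23.0, 0.1, 50.0)
print(round(pmv, 2), round(ppd, 1))   # a mildly negative PMV: slightly cool conditions
```

A PMV of zero corresponds to thermal neutrality; the comfort temperatures in table 2 are those at which the vote predicted by the formula vanishes for the stated metabolism and clothing.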
Heating Systems
The design of any heating system should be directly related to the work to be performed and to the characteristics of the building where it will be installed. In industrial buildings it is hard to find projects in which the heating needs of the workers have been considered, often because the processes and workstations have yet to be defined when the building is designed. Systems are normally designed with very broad criteria, considering only the thermal loads that will exist in the building and the amount of heat that must be supplied to maintain a given temperature, without regard to heat distribution, the location of workstations and other more specific factors. This leads to deficiencies in the design of certain buildings that translate into shortcomings such as cold spots, draughts, an insufficient number of heating elements and other problems.
To end up with a good heating system in planning a building, the following are some of the considerations that should be addressed:
When heating is provided by burners without exhaust chimneys, special consideration should be given to the inhalation of the products of combustion. When the fuel is heating oil, gas or coke, these normally include sulphur dioxide, nitrogen oxides, carbon monoxide and other combustion products. Human exposure limits exist for these compounds, and their concentrations should be controlled, especially in closed spaces, where the concentration of these gases can increase rapidly and the efficiency of the combustion reaction can decrease.
Planning a heating system always entails balancing various considerations, such as a low initial cost, flexibility of the service, energy efficiency and applicability. Therefore, the use of electricity during off-peak hours when it might be cheaper, for example, could make electric heaters cost-effective. The use of chemical systems for heat storage that can then be put to use during peak demand (using sodium sulphide, for example) is another option. It is also possible to study the placement of several different systems together, making them work in such a way that costs can be optimized.
The installation of heaters that can burn either gas or heating oil is a particularly attractive option. The direct use of electricity means consuming first-class energy that may turn out to be costly in many cases, but that may afford the needed flexibility under certain circumstances. Heat pumps and other cogeneration systems that take advantage of residual heat can provide solutions that are very advantageous from a financial point of view, although their drawback is a high initial cost.
Today heating and air-conditioning systems tend to aim for optimal performance and energy savings. New systems therefore include sensors and controls distributed throughout the spaces to be heated, so that heat is supplied only during the times necessary to obtain thermal comfort. Such systems can save up to 30% of the energy costs of heating. Figure 3 shows some of the heating systems available, indicating their positive characteristics and their drawbacks.
Figure 3. Characteristics of the most common heating systems employed in worksites
Air-conditioning systems
Experience shows that industrial environments kept close to the comfort zone during the summer months show increased productivity, tend to register fewer accidents, have lower absenteeism and, in general, enjoy improved human relations. In retail establishments, hospitals and buildings with large floor areas, air conditioning is usually needed to provide thermal comfort when outside conditions require it.
In certain industrial environments where external conditions are very severe, heating is geared more to providing enough warmth to prevent adverse health effects than to providing a fully comfortable thermal environment. The maintenance and proper use of air-conditioning equipment should be carefully monitored, especially when the equipment includes humidifiers, because humidifiers can become sources of microbial contamination, with the risks that such contaminants pose to human health.
Today ventilation and climate-control systems tend to cover, jointly and often using the same installation, the needs for heating, refrigerating and conditioning the air of a building. Multiple classifications may be used for refrigerating systems.
Depending on the configuration of the system they may be classified in the following way:
Depending on the coverage they provide, they can be classified in the following way:
The problems that most frequently plague these types of systems are excess heating or cooling if the system is not adjusted to respond to variations in thermal loads, or a lack of ventilation if the system does not introduce a minimal amount of outside air to renew the circulating indoor air. This creates stale indoor environments in which the quality of air deteriorates.
The basic elements of all air-conditioning systems are (see also figure 4):
Figure 4. Simplified schematic of air-conditioning system
One of the chief functions of a building in which nonindustrial activities are carried out (offices, schools, dwellings, etc.) is to provide the occupants with a healthy and comfortable environment in which to work. The quality of this environment depends, to a large degree, on whether the ventilation and climatization systems of the building are adequately designed and maintained and function properly.
These systems must therefore provide acceptable thermal conditions (temperature and humidity) and an acceptable quality of indoor air. In other words, they should aim for a suitable mix of outside air with indoor air and should employ filtration and cleaning systems capable of eliminating pollutants found in the indoor environment.
The idea that clean outdoor air is necessary for well-being in indoor spaces has been expressed since the eighteenth century. Benjamin Franklin recognized that air in a room is healthier if it is provided with natural ventilation by opening the windows. The idea that providing great quantities of outside air could help reduce the risk of contagion for illnesses like tuberculosis gained currency in the nineteenth century.
Studies carried out during the 1930s showed that, in order to dilute human biological effluvia to concentrations that would not cause discomfort due to odours, the volume of new outside air required for a room is between 17 and 30 cubic metres per hour per occupant.
In standard No. 62 set in 1973, the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) recommends a minimum flow of 34 cubic metres of outside air per hour per occupant to control odours. An absolute minimum of 8.5 m3/hr/occupant is recommended to prevent carbon dioxide from surpassing 2,500 ppm, which is half of the exposure limit set for industrial settings.
This same organization, in standard No. 90, set in 1975—in the middle of an energy crisis—adopted the aforementioned absolute minimum leaving aside, temporarily, the need for greater ventilation flows to dilute pollutants such as tobacco smoke, biological effluvia and so forth.
In its standard No. 62 (1981) ASHRAE rectified this omission and established its recommendation as 34 m3/hr/occupant for areas where smoking is permitted and 8.5 m3/hr/occupant in areas where smoking is forbidden.
The last standard published by ASHRAE, also No. 62 (1989), established a minimum of 25.5 m3/hr/occupant for occupied indoor spaces independently of whether smoking is permitted or not. It also recommends increasing this value when the air brought into the building is not mixed adequately in the breathing zone or if there are unusual sources of pollution present in the building.
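Expressed per occupant, these recommendations translate directly into a total outside-air flow for a given space; the following minimal sketch applies the 1989 value of 25.5 m3/hr per occupant to a hypothetical occupancy.

```python
def outside_air_flow_m3_per_h(occupants, per_occupant_rate=25.5):
    """Total outside-air supply (m3/h) from a per-occupant minimum;
    25.5 m3/h per occupant is the ASHRAE 62 (1989) value cited above."""
    return occupants * per_occupant_rate

print(outside_air_flow_m3_per_h(40))   # 1020 m3/h for a hypothetical 40-person office
```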
In 1992, the Commission of European Communities published its Guidelines for Ventilation Requirements in Buildings. In contrast with existing recommendations for ventilation standards, this guide does not specify volumes of ventilation flow that should be provided for a given space; instead, it provides recommendations that are calculated as a function of the desired quality of indoor air.
Existing ventilation standards prescribe set volumes of ventilation flow that should be supplied per occupant. The tendencies evidenced in the new guidelines show that volume calculations alone do not guarantee a good quality of indoor air for every setting. This is the case for three fundamental reasons.
First, they assume that the occupants are the only sources of contamination. Recent studies show that other sources, in addition to the occupants and their activities, should also be taken into consideration; examples include furniture, upholstery and the ventilation system itself. The second reason is that these standards recommend the same amount of outside air regardless of the quality of the air being conveyed into the building. And the third reason is that they do not clearly define the quality of indoor air required for the given space. Therefore, it is proposed that future ventilation standards be based on the following three premises: a defined category of air quality selected for the space to be ventilated, the total load of pollutants in the occupied space and the quality of the outside air available.
The Perceived Quality of Air
The quality of indoor air can be defined as the degree to which the demands and requirements of the human being are met. Basically, the occupants of a space demand two things of the air they breathe: to perceive the air they breathe as fresh and not foul, stale or irritating; and to know that the adverse health effects that may result from breathing that air are negligible.
It is common to think that the quality of the air in a space depends more on the composition of that air than on its impact on the occupants. It may thus seem easy to evaluate air quality, on the assumption that knowing its composition is enough to ascertain its quality. This approach works well in industrial settings, where the chemical compounds implicated in or derived from the production process are known and where measuring devices and reference criteria for assessing their concentrations exist. It does not, however, work in nonindustrial settings. These are places where thousands of chemical substances may be present, but at very low concentrations, sometimes a thousand times lower than the recommended exposure limits; evaluating these substances one by one would give a false impression of the quality of the air, which would probably be judged to be high. What such an evaluation misses is the combined effect of those thousands of substances on the occupants, which may be the reason the air is nevertheless perceived as foul, stale or irritating.
The conclusion that has been reached is that traditional methods used for industrial hygiene are not well-adapted to define the degree of quality that will be perceived by the human beings that breathe the air being evaluated. The alternative to chemical analysis is to use people as measuring devices to quantify air pollution, employing panels of judges to make the evaluations.
Human beings perceive the quality of air by two senses: the olfactory sense, situated in the nasal cavity and sensitive to hundreds of thousands of odorous substances, and the chemical sense, situated in the mucous membranes of the nose and eyes, and sensitive to a similar number of irritating substances present in air. It is the combined response of these two senses that determines how air is perceived and that allows the subject to judge whether its quality is acceptable.
The olf unit
One olf (from the Latin olfactus) is the emission rate of air pollutants (bioeffluents) from one standard person. A standard person is an average adult working in an office or similar nonindustrial workplace, sedentary and in thermal comfort, with a hygienic standard equivalent to 0.7 baths per day. Pollution from a human being was chosen to define the olf for two reasons: first, the biological effluvia emitted by a person are well known, and second, abundant data existed on the dissatisfaction caused by such effluvia.
Any other source of contamination can be expressed as the number of standard persons (olfs) needed to cause the same amount of dissatisfaction as the source of contamination that is being evaluated.
Figure 1 depicts the curve that defines the olf. It shows how the contamination produced by a standard person (1 olf) is perceived at different rates of ventilation, and it allows calculation of the percentage of dissatisfied individuals, that is, those who will perceive the quality of the air as unacceptable just after entering the room. The curve is based on different European studies in which 168 people judged the quality of air polluted by more than one thousand people, both men and women, considered to be standard. Similar studies conducted in North America and Japan show a high degree of correlation with the European data.
Figure 1. Olf definition curve
The decipol unit
The concentration of pollution in air depends on the source of contamination and its dilution as a result of ventilation. Perceived air pollution is defined as the concentration of human biological effluvia that would cause the same discomfort or dissatisfaction as the concentration of polluted air that is being evaluated. One decipol (from the Latin pollutio) is the contamination caused by a standard person (1 olf) when the rate of ventilation is 10 litres per second of noncontaminated air, so that we may write
1 decipol = 0.1 olf/(litre/second)
Figure 2, derived from the same data as the previous figure, shows the relation between the perceived quality of air, expressed as a percentage of dissatisfied individuals and in decipols.
Figure 2. Relation between the perceived quality of air expressed as a percentage of dissatisfied individuals and in decipols
To determine the rate of ventilation required from the point of view of comfort, selecting the degree of air quality desired in the given space is essential. Three categories or levels of quality are proposed in Table 1, and they are derived from Figures 1 and 2. Each level corresponds to a certain percentage of dissatisfied people. The selection of one or another level will depend, most of all, on what the space will be used for and on economic considerations.
Table 1. Levels of quality of indoor air

Category | Percentage of dissatisfied | Perceived air quality (decipols) | Rate of ventilation required1 (litres/second per olf)
A | 10 | 0.6 | 16
B | 20 | 1.4 | 7
C | 30 | 2.5 | 4

1 Assuming that outside air is clean and the efficiency of the ventilation system is equal to one.

Source: CEC 1992.
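Under the assumptions of the footnote (clean outside air and a ventilation efficiency of one), the required rates in table 1 follow directly from the definition of the decipol given above; the short check below makes the arithmetic explicit. The function name is illustrative.

```python
def ventilation_per_olf(target_decipol):
    """Ventilation rate (litres/second per olf of sensory load) needed to
    reach a target perceived air quality, assuming clean outside air and a
    ventilation efficiency of one.  From the definition
    1 decipol = 0.1 olf/(litre/second), i.e. c = 10 * G / Q."""
    return 10.0 / target_decipol

for category, decipol in (("A", 0.6), ("B", 1.4), ("C", 2.5)):
    print(category, round(ventilation_per_olf(decipol), 1), "l/s per olf")
# A 16.7, B 7.1, C 4.0 -- matching the rounded values in table 1
```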
As noted above, the data are the result of experiments carried out with panels of judges, but it is important to keep in mind that some of the substances found in air that can be dangerous (carcinogenic compounds, micro-organisms and radioactive substances, for example) are not recognized by the senses, and that the sensory effects of other contaminants bear no quantitative relationship to their toxicity.
Sources of Contamination
As was indicated earlier, one of the shortcomings of today’s ventilation standards is that they take into account only the occupants as the sources of contamination, whereas it is recognized that future standards should take all the possible sources of pollution into account. Aside from the occupants and their activities, including the possibility that they might smoke, there are other sources of pollution that contribute significantly to air pollution. Examples include furniture, upholstery and carpeting, construction materials, products used for decoration, cleaning products and the ventilation system itself.
What determines the load of pollution of air in a given space is the combination of all these sources of contamination. This load can be expressed as chemical contamination or as sensory contamination expressed in olfs. The latter integrates the effect of several chemical substances as they are perceived by human beings.
The chemical load
Contamination that emanates from a given material can be expressed as the rate of emission of each chemical substance. The total load of chemical pollution is calculated by adding all the sources, and is expressed in micrograms per second (μg/s).
In reality, it may be difficult to calculate the load of pollution because often little data are available on the rates of emission for many commonly used materials.
Sensory load
The load of pollution perceived by the senses is caused by those sources of contamination that have an impact on the perceived quality of air. The given value of this sensory load can be calculated by adding all the olfs of different sources of contamination that exist in a given space. As in the previous case, there is still not much information available on the olfs per square metre (olfs/m2) of many materials. For that reason it turns out to be more practical to estimate the sensory load of the entire building, including the occupants, the furnishings and the ventilation system.
Table 2 shows the pollution load, in olfs, generated by the occupants of a building as they carry out different activities, as a function of the proportion of smokers, together with the production of other compounds such as carbon dioxide (CO2), carbon monoxide (CO) and water vapour. Table 3 gives some examples of typical occupancy rates in different kinds of spaces, and table 4 gives the sensory loads, measured in olfs per square metre, found in different buildings.
Table 2. Contamination due to the occupants of a building

Activity | Sensory load (olf/occupant) | CO2 (l/h) | CO (l/h)3 | Water vapour (g/h)4

Sedentary, 1–1.2 met1
0% smokers | 2 | 19 | – | 50
20% smokers2 | 2 | 19 | 11x10-3 | 50
40% smokers2 | 3 | 19 | 21x10-3 | 50
100% smokers2 | 6 | 19 | 53x10-3 | 50

Physical exertion
Low, 3 met | 4 | 50 | – | 200
Medium, 6 met | 10 | 100 | – | 430
High (athletic) | 20 | 170 | – | 750

Children
Child care centre | 1.2 | 18 | – | 90
School | 1.3 | 19 | – | 50

1 1 met is the metabolic rate of a sedentary person at rest (1 met = 58 W/m2 of skin surface).
2 Average consumption of 1.2 cigarettes/hour per smoker. Average rate of emission, 44 ml of CO per cigarette.
3 From tobacco smoke.
4 Applicable to people close to thermal neutrality.

Source: CEC 1992.
Table 3. Examples of the degree of occupancy of different buildings

Building | Occupants/m2
Offices | 0.07
Conference rooms | 0.5
Theatres, other large gathering places | 1.5
Schools (classrooms) | 0.5
Child-care centres | 0.5
Dwellings | 0.05

Source: CEC 1992.
Table 4. Contamination due to the building

Building | Average sensory load (olf/m2) | Range (olf/m2)
Offices1 | 0.3 | 0.02–0.95
Schools (classrooms)2 | 0.3 | 0.12–0.54
Child-care facilities3 | 0.4 | 0.20–0.74
Theatres4 | 0.5 | 0.13–1.32
Low-pollution buildings5 | – | 0.05–0.1

1 Data obtained in 24 mechanically ventilated offices.
2 Data obtained in 6 mechanically ventilated schools.
3 Data obtained in 9 mechanically ventilated child-care centres.
4 Data obtained in 5 mechanically ventilated theatres.
5 Goal that should be reached by new buildings.

Source: CEC 1992.
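Tables 2, 3 and 4 can be combined to estimate the total sensory load of a space, which is the quantity needed later for the comfort-based ventilation calculation. The sketch below does this for a hypothetical office; the function name and the example dimensions are illustrative, and the load factors are taken from the tables above.

```python
def total_sensory_load_olf(area_m2, occupants_per_m2, olf_per_occupant, building_olf_per_m2):
    """Total sensory load (olf): occupant bioeffluents plus the load from the
    building itself (materials, furnishings, ventilation system)."""
    occupants = area_m2 * occupants_per_m2
    return occupants * olf_per_occupant + area_m2 * building_olf_per_m2

# Hypothetical example: a 100 m2 office, 0.07 occupants/m2 (table 3),
# 20% smokers (2 olf/occupant, table 2), average office building load 0.3 olf/m2 (table 4)
print(round(total_sensory_load_olf(100.0, 0.07, 2.0, 0.3), 1))   # 44.0 olf
```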
Quality of Outside Air
Another premise, one that rounds out the inputs needed for creation of ventilation standards for the future, is the quality of available outside air. Recommended exposure values for certain substances, both from inside and outside spaces, appear in the publication Air Quality Guidelines for Europe by the WHO (1987).
Table 5 shows the levels of perceived outside air quality, as well as the concentrations of several typical chemical pollutants found out of doors.
Table 5. Quality levels of outside air

 | Perceived air quality1 | Environmental pollutants2
 | Decipol | CO2 (mg/m3) | CO (mg/m3) | NO2 (μg/m3) | SO2 (μg/m3)
By the sea, in the mountains | 0 | 680 | 0–0.2 | 2 | 1
City, high quality | 0.1 | 700 | 1–2 | 5–20 | 5–20
City, low quality | >0.5 | 700–800 | 4–6 | 50–80 | 50–100

1 The values of perceived air quality are daily average values.
2 The values of the pollutants correspond to average yearly concentrations.

Source: CEC 1992.
It should be kept in mind that in many cases the quality of outside air can be worse than the levels indicated in the table or in the guidelines of the WHO. In such cases air needs to be cleaned before it is conveyed into occupied spaces.
Efficiency of Ventilation Systems
Another important factor that will affect the calculation of the ventilation requirements for a given space is the efficiency of ventilation (Ev), which is defined as the relation between the concentration of pollutants in extracted air (Ce) and the concentration in the breathing zone (Cb).
Ev = Ce/Cb
The efficiency of ventilation depends on the distribution of air and the location of the sources of pollution in the given space. If air and the contaminants are mixed completely, the efficiency of ventilation is equal to one; if the quality of air in the breathing zone is better than that of extracted air, then the efficiency is greater than one and the desired quality of air can be attained with lower rates of ventilation. On the other hand, greater rates of ventilation will be needed if the efficiency of ventilation is less than one, or to put it differently, if the quality of air in the breathing zone is inferior to the quality of extracted air.
In calculating the efficiency of ventilation it is useful to divide spaces into two zones, one into which the air is delivered, the other comprising the rest of the room. For ventilation systems that work by the mixing principle, the zone where air is delivered is generally found above the breathing zone, and the best conditions are reached when mixing is so thorough that both zones become one. For ventilation systems that work by the displacement principle, air is supplied in the zone occupied by people and the extraction zone is usually found overhead; here the best conditions are reached when mixing between both zones is minimal.
The efficiency of ventilation, therefore, is a function of the location and characteristics of the elements that supply and extract air and the location and characteristics of the sources of contamination. In addition, it is also a function of the temperature and of the volumes of air supplied. It is possible to calculate the efficiency of a ventilation system by numerical simulation or by taking measurements. When data are not available the values in figure 3 can be used for different ventilation systems. These reference values take into consideration the impact of air distribution but not the location of sources of pollution, assuming instead that they are uniformly distributed throughout the ventilated space.
Figure 3. Effectiveness of ventilation in breathing zone according to different ventilation principles
Calculating Ventilation Requirements
Figure 4 shows the equations used to calculate ventilation requirements from the point of view of comfort as well as that of protecting health.
Figure 4. Equations for calculating ventilation requirements
Ventilation requirements for comfort
The first step in the calculation of comfort requirements is to decide the level of indoor air quality that one wishes to obtain for the ventilated space (see Table 1) and to estimate the quality of the outside air available (see Table 5).
The next step consists in estimating the sensory load, using Tables 2, 3 and 4 to select the loads according to the occupants and their activities, the type of building and the level of occupancy per square metre of floor area. The total value is obtained by adding all these data.
Depending on the operating principle of the ventilation system, Figure 3 can be used to estimate the efficiency of ventilation. Applying equation (1) in Figure 4 will then yield the required rate of ventilation.
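Figure 4 is not reproduced here, but from the olf and decipol definitions given earlier, equation (1) is normally written as Q = 10 · G / (Ci – Co) · (1/Ev), with Q in litres per second, G the sensory load in olfs, Ci the desired indoor air quality and Co the outside air quality in decipols, and Ev the efficiency of ventilation. The sketch below assumes that form; it should be verified against the CEC guideline itself, and the example values are illustrative.

```python
def comfort_ventilation_rate_ls(sensory_load_olf, indoor_decipol, outdoor_decipol,
                                ventilation_efficiency=1.0):
    """Ventilation rate (litres/second) required for comfort, assuming the
    form Q = 10 * G / (Ci - Co) * (1/Ev) for equation (1) of figure 4."""
    return 10.0 * sensory_load_olf / ((indoor_decipol - outdoor_decipol)
                                      * ventilation_efficiency)

# Hypothetical example: 44 olf of sensory load, category B indoor air (1.4 decipol),
# high-quality city air outside (0.1 decipol), ventilation efficiency of one
print(round(comfort_ventilation_rate_ls(44.0, 1.4, 0.1), 1), "l/s")   # about 338 l/s
```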
Ventilation requirements for health protection
A procedure similar to the one described above, but using equation (2) in Figure 4, will provide a value for the ventilation flow required to prevent health problems. To calculate this value it is necessary to identify a critical chemical substance, or group of substances, that one proposes to control and to estimate their concentrations in air; it is also necessary to allow for different evaluation criteria, taking into account the effects of the contaminant and the sensitivity of the occupants that one wishes to protect (children or the elderly, for example).
Unfortunately, it is still difficult to estimate the ventilation requirements for health protection owing to the lack of information on some of the variables that enter into the calculations, such as the rates of emission of the contaminants (G), the evaluation criteria for indoor spaces (Cv) and others.
Field studies show that in spaces where ventilation is determined by comfort requirements, the concentrations of chemical substances are generally low. Nevertheless, such spaces may contain sources of pollution that are dangerous. The best policy in these cases is to eliminate, substitute or control the sources of pollution rather than dilute the contaminants by general ventilation.
When pollutants generated at a worksite are to be controlled by ventilating the entire locale we speak of general ventilation. The use of general ventilation implies accepting the fact that the pollutant will be distributed to some degree through the entire space of the worksite, and could therefore affect workers who are far from the source of contamination. General ventilation is, therefore, a strategy that is the opposite of localized extraction. Localized extraction seeks to eliminate the pollutant by intercepting it as closely as possible to the source (see “Indoor air: methods for control and cleaning”, elsewhere in this chapter).
One of the basic objectives of any general ventilation system is the control of body odours. This can be achieved by supplying no less than 0.45 cubic metres per minute, m3/min, of new air per occupant. When smoking is frequent or the work is physically strenuous, the rate of ventilation required is greater, and may surpass 0.9 m3/min per person.
If the only environmental problems that the ventilation system must overcome are the ones just described, it is a good idea to keep in mind that every space has a certain level of “natural” air renewal by means of so-called “infiltration,” which occurs through doors and windows, even when they are closed, and through other sites of wall penetration. Air-conditioning manuals usually provide ample information in this regard, but it can be said that as a minimum the level of ventilation due to infiltration falls between 0.25 and 0.5 renewals per hour. An industrial site will commonly experience between 0.5 and 3 renewals of air per hour.
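The renewal rates quoted above are easily translated into flows and compared with the per-occupant figure for odour control; the minimal sketch below does so for a hypothetical room.

```python
def infiltration_flow_m3_per_min(room_volume_m3, renewals_per_hour):
    """Air flow (m3/min) corresponding to a given number of renewals per hour."""
    return room_volume_m3 * renewals_per_hour / 60.0

# Hypothetical example: a 250 m3 office with 0.5 renewals/hour of infiltration
flow = infiltration_flow_m3_per_min(250.0, 0.5)
print(round(flow, 2), "m3/min, enough for about",
      int(flow / 0.45), "occupants at 0.45 m3/min each")   # 2.08 m3/min, about 4 occupants
```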
When used to control chemical pollutants, general ventilation must be limited to situations where the amounts of pollutants generated are not very high, where their toxicity is relatively moderate and where workers do not carry out their tasks in the immediate vicinity of the source of contamination. If these conditions are not met, adequate control of the work environment will be difficult to achieve, because the renewal rates required would be so high that the resulting air speeds would likely create discomfort, and because high renewal rates are expensive to maintain. It is therefore unusual to recommend general ventilation for the control of chemical substances except in the case of solvents with admissible concentrations above 100 parts per million.
When, on the other hand, the goal of general ventilation is to maintain the thermal characteristics of the work environment with a view to legally acceptable limits or technical recommendations such as the International Organization for Standardization (ISO) guidelines, this method has fewer limitations. General ventilation is therefore used more often to control the thermal environment than to limit chemical contamination, but its usefulness as a complement of localized extraction techniques should be clearly recognized.
While for many years the phrases general ventilation and ventilation by dilution were considered synonymous, today that is no longer the case because of a new general ventilation strategy: ventilation by displacement. Even though ventilation by dilution and ventilation by displacement fit within the definition of general ventilation we have outlined above, they both differ widely in the strategy they employ to control contamination.
Ventilation by dilution has the goal of mixing the air that is introduced mechanically as completely as possible with all the air that is already within the space, so that the concentration of a given pollutant will be as uniform as possible throughout (or so that the temperature will be as uniform as possible, if thermal control is the goal desired). To achieve this uniform mixture air is injected from the ceiling as streams at a relatively high speed, and these streams generate a strong circulation of air. The result is a high degree of mixing of the new air with the air already present inside the space.
Ventilation by displacement, in its ideal conceptualization, consists of injecting air into a space in such a way that new air displaces the air previously there without mixing with it. Ventilation by displacement is achieved by injecting new air into a space at a low speed and close to the floor, and extracting air near the ceiling. Using ventilation by displacement to control the thermal environment has the advantage that it profits from the natural movement of air generated by density variations that are themselves due to temperature differences. Even though ventilation by displacement is already widely used in industrial situations, the scientific literature on the subject is still quite limited, and the evaluation of its effectiveness is therefore still difficult.
Ventilation by Dilution
The design of a system of ventilation by dilution is based on the hypothesis that the concentration of the pollutant is the same throughout the space in question. This is the model that chemical engineers often refer to as a stirred tank.
If you assume that the air that is injected into the space is free of the pollutant and that at the initial time the concentration within the space is zero, you will need to know two facts in order to calculate the required rate of ventilation: the amount of the pollutant that is generated in the space and the level of environmental concentration that is sought (which hypothetically would be the same throughout).
Under these conditions, the corresponding calculations yield the following equation:

c(t) = (a/Q) · (1 – exp(–Q·t/V))
where
c(t) = the concentration of the contaminant in the space at time t
a = the amount of the pollutant generated (mass per unit of time)
Q = the rate at which new air is supplied (volume per unit of time)
V = the volume of the space in question.
The above equation shows that the concentration will tend to a steady-state value of a/Q, and that it will do so faster the greater the value of Q/V, frequently referred to as the "number of renewals per unit of time". Although this number is occasionally taken as an index of the quality of ventilation, the equation clearly shows that its influence is limited to the speed with which the environmental conditions stabilize, not the level of concentration at which the steady state will occur. That level depends only on the amount of the pollutant generated (a) and on the rate of ventilation (Q).
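The behaviour described by this equation is easy to explore numerically. The following sketch evaluates the concentration over time for a hypothetical, perfectly mixed space; the generation rate, air flow and volume used in the example are illustrative only.

```python
import math

def dilution_concentration(t, a, q, v):
    """Concentration at time t in a perfectly mixed space with constant
    generation rate a (mass/time), clean supply air at flow q (volume/time)
    and room volume v, starting from zero concentration:
        c(t) = (a/q) * (1 - exp(-q*t/v))"""
    return (a / q) * (1.0 - math.exp(-q * t / v))

# Hypothetical example: a = 50 mg/min of solvent vapour, q = 20 m3/min, v = 400 m3
for minutes in (10, 30, 60, 120):
    print(minutes, round(dilution_concentration(minutes, 50.0, 20.0, 400.0), 2), "mg/m3")
# the values approach the steady state a/q = 2.5 mg/m3
```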
When the air of a given space is contaminated but no new amounts of the pollutant are generated, the speed at which the concentration decreases over a period of time is given by the following expression:

c2 = c1 · exp(–Q·(t2 – t1)/V)
where Q and V have the meaning described above, t1 and t2 are, respectively, the initial and the final times and c1 and c2 are the initial and final concentrations.
Expressions can be found for calculations in instances where the initial concentration is not zero (Constance 1983; ACGIH 1992), where the air injected into the space is not totally devoid of the pollutant (because to reduce heating costs in the winter part of the air is recycled, for example), or where the amounts of the pollutant generated vary as a function of time.
If we disregard the transition stage and assume that the steady state has been reached, the equation indicates that the rate of ventilation is equivalent to a/clim, where clim is the concentration value that must be maintained in the given space. This value will be established by regulations or, as an ancillary norm, by technical recommendations such as the threshold limit values (TLVs) of the American Conference of Governmental Industrial Hygienists (ACGIH), which recommends that the rate of ventilation be calculated by the formula

Q = K · a / clim
where a and clim have the meaning already described and K is a safety factor. A value of K between 1 and 10 must be selected as a function of the efficacy of the air mixture in the given space, of the toxicity of the solvent (the smaller clim is, the greater the value of K will be), and of any other circumstance deemed relevant by the industrial hygienist. The ACGIH, among others, cites the duration of the process, the cycle of operations and the usual location of the workers with respect to the sources of emission of the pollutant, the number of these sources and their location in the given space, the seasonal changes in the amount of natural ventilation and the anticipated reduction in the functional efficacy of the ventilation equipment as other determining criteria.
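As a worked illustration of this formula, the sketch below computes the required general ventilation flow for a hypothetical solvent release; the generation rate, limit concentration and safety factor are illustrative values chosen for the example, not recommendations.

```python
def dilution_ventilation_rate(generation_rate, c_lim, k):
    """Required ventilation rate Q = K * a / c_lim.  Units must be consistent:
    for example, a in mg/min and c_lim in mg/m3 give Q in m3/min."""
    return k * generation_rate / c_lim

# Hypothetical example: 60 g/h of a solvent, c_lim = 200 mg/m3, K = 5
a_mg_per_min = 60_000.0 / 60.0                 # 1000 mg/min
print(dilution_ventilation_rate(a_mg_per_min, 200.0, 5.0), "m3/min")   # 25.0 m3/min
```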
In any case, the use of the above formula requires a reasonably exact knowledge of the values of a and K that should be used, and we therefore provide some suggestions in this regard.
The amount of pollutant generated may quite frequently be estimated by the amount of certain materials consumed in the process that generates the pollutant. So, in the case of a solvent, the amount used will be a good indication of the maximum amount that can be found in the environment.
As indicated above, the value of K should be determined as a function of the effectiveness of air mixing in the given space. The better the mixing, and hence the more reasonable the assumption that the concentration of the pollutant is the same at every point in the space, the smaller the value of K can be. This, in turn, depends on how air is distributed within the space being ventilated.
According to these criteria, minimum values of K should be used when air is injected into the space in a well distributed fashion (by using a plenum, for example) and when injection and extraction are at opposite ends of the given space. Higher values of K should be used when air is supplied at only a few points and is extracted at points close to the fresh-air supply (figure 1).
Figure 1. Schematic of air circulation in room with two supply openings
It should be noted that when air is injected into a given space—especially if it is done at a high speed—the stream of air created will exert a considerable pull on the air surrounding it. This air then mixes with the stream and slows it down, creating measurable turbulence as well. As a consequence, this process results in intense mixing of the air already in the space and the new air that is injected, generating internal air currents. Predicting these currents, even generally, requires a large dose of experience (figure 2).
Figure 2. Suggested K factors for inlet and exhaust locations
In order to avoid problems that result from workers’ being subjected to streams of air at relatively high speeds, air is commonly injected by way of diffusing grates designed in such a way that they facilitate the rapid mixing of new air with the air already present in the space. In this way, the areas where air moves at high speeds are kept as small as possible.
The stream effect just described is not produced near points where air escapes or is extracted through doors, windows, extraction vents or other openings. Air reaches extraction grates from all directions, so even at a relatively short distance from them, air movement is not easily perceived as an air current.
In any case, in dealing with air distribution, it is important to keep in mind the convenience of placing workstations, to the extent possible, in such a way that new air reaches the workers before it reaches the sources of contamination.
When in the given space there are important sources of heat, the movement of air will largely be conditioned by the convection currents that are due to density differences between denser, cold air and lighter, warm air. In spaces of this kind, the designer of air distribution must not fail to keep in mind the existence of these heat sources, or the movement of air may turn out to be very different from the one predicted.
The presence of chemical contamination, on the other hand, does not alter in a measurable way the density of air. While in a pure state the pollutants may have a density that is very different from that of air (usually much greater), given the real, existing concentrations in the workplace, the mix of air and pollutant does not have a density significantly different than the density of pure air.
Furthermore, it should be pointed out that one of the most common mistakes made in applying this type of ventilation is supplying the space only with air extractors, without any forethought given to adequate intakes of air. In these cases, the effectiveness of the extraction ventilators is diminished and, therefore, the actual rates of air extraction are much less than planned. The result is greater ambient concentrations of the pollutant in the given space than those initially calculated.
To avoid this problem, some thought should be given to how air will be introduced into the space. The recommended course of action is to use supply fans as well as extraction fans. Normally, the rate of extraction should be somewhat greater than the rate of supply in order to allow for infiltration through windows and other openings; it is also advisable to keep the space under slightly negative pressure to prevent the contamination generated from drifting into areas that are not contaminated.
Ventilation by Displacement
As mentioned above, with ventilation by displacement one seeks to minimize the mixing of new air and the air previously found in the given space, and tries to adjust the system to the model known as plug flow. This is usually accomplished by introducing air at slow speeds and at low elevations in the given space and extracting it near the ceiling; this has two advantages over ventilation by dilution.
In the first place, it makes lower rates of air renewal possible, because the pollution concentrates near the ceiling of the space, where there are no workers to breathe it. The average concentration in the space will then be higher than the clim value referred to before, but that does not imply a higher risk for the workers, because in the occupied zone of the space the concentration of the pollutant will be the same as or lower than clim.
In addition, when the goal of ventilation is the control of the thermal environment, ventilation by displacement makes it possible to introduce warmer air into the given space than would be required by a system of ventilation by dilution. This is because the warm air that is extracted is at a temperature several degrees higher than the temperature in the occupied zone of the space.
The fundamental principles of ventilation by displacement were developed by Sandberg, who in the early 1980s formulated a general theory for the analysis of situations with nonuniform concentrations of pollutants in enclosed spaces. This theory overcame the theoretical limitations of ventilation by dilution (which presupposes a uniform concentration throughout the given space) and opened the way for practical applications (Sandberg 1981).
Even though ventilation by displacement is widely used in some countries, particularly in Scandinavia, very few studies have been published in which the efficacy of the different methods is compared in actual installations. This is no doubt due to the practical difficulties of installing two different ventilation systems in a real factory, and to the fact that the experimental analysis of these systems requires the use of tracers. Tracing is done by adding a tracer gas to the supplied air and then measuring its concentration at different points within the space and in the extracted air. This makes it possible to infer how air is distributed within the space and to compare the efficacy of different ventilation systems.
The few studies available that have been carried out in actual existing installations are not conclusive, except as regards the fact that systems that employ ventilation by displacement provide better air renewal. In these studies, however, reservations are often expressed about the results in so far as they have not been confirmed by measurements of the ambient level of contamination at the worksites.
The quality of air inside a building depends on a series of factors that include the quality of outside air, the design of the ventilation and air-conditioning system, the way the system works and is maintained, and the sources of indoor pollution. In general terms, the concentration of any contaminant in an indoor space will be determined by the balance between the generation of the pollutant and the rate of its elimination.
As for the generation of contaminants, the sources of pollution may also be external or internal. The external sources include atmospheric pollution due to industrial combustion processes, vehicular traffic, power plants and so on; pollution emitted near the intake shafts where air is drawn into the building, such as that from refrigeration towers or the exhaust vents of other buildings; and emanations from contaminated soil such as radon gas, leaks from gasoline tanks or pesticides.
Among the sources of internal pollution, it is worth mentioning those associated with the ventilation and air-conditioning systems themselves (chiefly microbiological contamination of any segment of such systems), the materials used to build and decorate the building, and the occupants of the building. Specific sources of indoor pollution are tobacco smoke, laboratories, photocopiers, photographic laboratories and printing presses, gyms, beauty parlours, kitchens and cafeterias, bathrooms, parking garages and boiler rooms. All these areas should have dedicated general ventilation, and the air extracted from them should not be recirculated through the building. When the situation warrants it, these areas should also have a localized extraction system.
Evaluating the quality of indoor air comprises, among other tasks, the measurement and evaluation of contaminants that may be present in the building. Several indicators are used to ascertain the quality of air inside a building. They include the concentrations of carbon monoxide and carbon dioxide, total volatile organic compounds (TVOC), total suspended particles (TSP) and the rate of ventilation. Various criteria or recommended target values exist for the evaluation of some of the substances found in interior spaces. These are listed in different standards or guidelines, such as the guidelines for the quality of interior air promulgated by the World Health Organization (WHO), or the standards of the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE).
For many of these substances, however, there are no defined standards. For now the recommended course of action is to apply the values and standards for industrial environments provided by the American Conference of Governmental Industrial Hygienists (ACGIH 1992). Safety or correction factors are then applied on the order of one-half, one-tenth or one-hundredth of the values specified.
The methods of control of indoor air can be divided in two main groups: control of the source of pollution, or control of the environment with ventilation and air cleaning strategies.
Control of the Source of Pollution
The source of pollution can be controlled by various means, including the following:
Control of the Environment
The indoor environments of nonindustrial buildings usually have many sources of pollution and, in addition, they tend to be scattered. The system most commonly employed to correct or prevent pollution problems indoors, therefore, is ventilation, either general or by dilution. This method consists of moving and directing the flow of air to capture, contain and transport pollutants from their source to the ventilation system. In addition, general ventilation also permits the control of the thermal characteristics of the indoor environment by air conditioning and recirculating air (see “Aims and principles of general and dilution ventilation”, elsewhere in this chapter).
In order to dilute internal pollution, increasing the volume of outside air is advisable only when the system is of the proper size and does not cause a lack of ventilation in other parts of the system or when the added volume does not prevent proper air-conditioning. For a ventilation system to be as effective as possible, localized extractors should be installed at the sources of pollution; air mixed with pollution should not be recycled; occupants should be placed near air diffusion vents and sources of pollution near extraction vents; pollutants should be expelled by the shortest possible route; and spaces that have localized sources of pollution should be kept at negative pressure relative to outside atmospheric pressure.
Most ventilation deficiencies seem to be linked to an inadequate amount of outside air. An improper distribution of ventilated air, however, can also result in poor air quality problems. In rooms with very high ceilings, for instance, where warm (less dense) air is supplied from above, air temperature may become stratified and ventilation will then fail to dilute the pollution present in the room. The placement and location of air diffusion vents and air return vents relative to the occupants and the sources of contamination is a consideration that requires special attention when the ventilation system is being designed.
Air Cleaning Techniques
Air cleaning methods should be precisely designed and selected for specific, very concrete types of pollutants. Once installed, regular maintenance will prevent the system from becoming a new source of contamination. The following are descriptions of six methods used to eliminate pollutants from air.
Filtration of particles
Filtration is a useful method to eliminate liquids or solids in suspension, but it should be borne in mind that it does not eliminate gases or vapours. Filters may capture particles by obstruction, impact, interception, diffusion and electrostatic attraction. Filtration of an indoor air conditioning system is necessary for many reasons. One is to prevent the accumulation of dirt that may cause a diminution of its heating or cooling efficiency. The system may also be corroded by certain particles (sulphuric acid and chlorides). Filtration is also necessary to prevent a loss of equilibrium in the ventilation system due to deposits on the fan blades and false information being fed to the controls because of clogged sensors.
Indoor air filtration systems benefit from placing at least two filters in series. The first, a pre-filter or primary filter, retains only the larger particles. This filter should be changed often and will lengthen the life of the next filter. The secondary filter is more efficient than the first, and can filter out fungal spores, synthetic fibres and in general finer dust than that collected by the primary filter. These filters should be fine enough to eliminate irritants and toxic particles.
A filter is selected based on its effectiveness, its dust-holding capacity, its pressure drop and the required level of air purity. A filter's effectiveness is measured according to the ASHRAE 52-76 and Eurovent 4/5 standards (ASHRAE 1992; CEN 1979). Retention capacity relates the mass of dust retained to the volume of air filtered, and is used to characterize filters that retain only large particles (low- and medium-efficiency filters). To measure retention capacity, a synthetic aerosol dust of known concentration and granulometry is forced through the filter, and the portion retained in the filter is determined gravimetrically.
The efficiency of a filter relates the number of particles retained to the volume of air filtered, and is the value used to characterize filters that also retain finer particles. To measure efficiency, a current of atmospheric air containing an aerosol of particles with diameters between 0.5 and 1 μm is forced through the filter. The amount of particles captured is determined with an opacimeter, which measures the opacity caused by the sediment.
The DOP test is used to characterize high-efficiency particulate air (HEPA) filters. The DOP of a filter is determined with an aerosol made by vaporizing and condensing dioctylphthalate, which produces particles 0.3 μm in diameter. The method is based on the light-scattering property of drops of dioctylphthalate: the intensity of scattered light is proportional to the surface concentration of this material, and the penetration of the filter can be measured by comparing the intensity of scattered light upstream and downstream of the filter. For a filter to earn the HEPA designation it must be more than 99.97 per cent efficient on the basis of this test.
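Whatever the test aerosol, the figure of merit is the fraction of the challenge that the filter retains. The minimal sketch below computes it from upstream and downstream readings (particle counts or scattered-light intensities) and applies the 99.97 per cent HEPA criterion; the numbers are illustrative.

```python
def filter_efficiency(upstream, downstream):
    """Fractional collection efficiency from upstream and downstream readings
    of the challenge aerosol (penetration = downstream / upstream)."""
    return 1.0 - downstream / upstream

# Illustrative readings: 2 units penetrate out of a 10,000-unit challenge
eff = filter_efficiency(10_000.0, 2.0)
print(f"{eff:.4%}", "-> HEPA" if eff >= 0.9997 else "-> not HEPA")   # 99.9800% -> HEPA
```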
Although there is a direct relationship between them, the results of the three methods are not directly comparable. The efficiency of all filters diminishes as they clog up, and they can then become a source of odours and contamination. The useful life of a high-efficiency filter can be greatly extended by placing one or several filters of a lower rating upstream of it. Table 1 shows the initial, final and mean efficiencies of different filters according to the criteria established by ASHRAE 52-76 for particles 0.3 μm in diameter.
Table 1. The effectiveness of filters (according to ASHRAE standard 52-76) for particles of 0.3 μm diameter

Filter description | ASHRAE 52-76 dust spot (%) | ASHRAE 52-76 arrestance (%) | Initial efficiency (%) | Final efficiency (%) | Median efficiency (%)
Medium | 25–30 | 92 | 1 | 25 | 15
Medium | 40–45 | 96 | 5 | 55 | 34
High | 60–65 | 97 | 19 | 70 | 50
High | 80–85 | 98 | 50 | 86 | 68
High | 90–95 | 99 | 75 | 99 | 87
95% HEPA | — | — | 95 | 99.5 | 99.1
99.97% HEPA | — | — | 99.97 | 99.7 | 99.97
Electrostatic precipitation
This method proves useful for controlling particulate matter. Equipment of this sort works by ionizing particles and then eliminating them from the air current as they are attracted to and captured by a collecting electrode. Ionization occurs when the contaminated effluent passes through the electrical field generated by a strong voltage applied between the collecting and the discharge electrodes. The voltage is obtained by a direct current generator. The collecting electrode has a large surface and is usually positively charged, while the discharge electrode consists of a negatively charged cable.
The most important factors that affect the ionization of particles are the condition of the effluent, its discharge and the characteristics of the particles (size, concentration, resistance, etc.). The effectiveness of capture increases with humidity, and the size and density of the particles, and decreases with the increased viscosity of the effluent.
The main advantage of these devices is that they are highly effective at collecting solids and liquids, even when the particle size is very fine. In addition, these systems can handle large gas volumes and high temperatures, and the pressure drop is minimal. Their drawbacks are high initial cost, large space requirements and the safety risks posed by the very high voltages involved, especially in industrial applications.
Electrostatic precipitators are used across a full range of applications, from industrial settings, to reduce the emission of particles, to domestic settings, to improve the quality of indoor air. The latter are smaller devices that operate at voltages in the range of 10,000 to 15,000 volts. They ordinarily have automatic voltage regulators which ensure that enough voltage is always applied to produce ionization without causing an arc between the two electrodes.
Generation of negative ions
This method is used to eliminate particles suspended in air and, in the opinion of some authors, to create healthier environments. The efficacy of this method as a way to reduce discomfort or illness is still being studied.
Gas adsorption
This method is used to eliminate polluting gases and vapours such as formaldehyde, sulphur dioxide, ozone, nitrogen oxides and organic vapours. Adsorption is a physical phenomenon by which gas molecules are trapped by an adsorbent solid, which consists of a porous material with a very large surface area. To clean this kind of pollutant from the air, the air is made to flow through a cartridge full of the adsorbent. Activated carbon is the most widely used adsorbent; it traps a wide range of inorganic gases and organic compounds. Aliphatic, chlorinated and aromatic hydrocarbons, ketones, alcohols and esters are some examples.
Silica gel, an inorganic adsorbent, is used to trap more polar compounds such as amines and water. There are also organic adsorbents made up of porous polymers. It is important to keep in mind that every adsorbent solid traps only a certain amount of pollutant and then, once saturated, needs to be regenerated or replaced. Another method of capture with adsorbent solids is to use a mixture of activated alumina and carbon impregnated with specific reactants; some metallic oxides, for instance, capture mercury vapour, hydrogen sulphide and ethylene. It should be borne in mind that carbon dioxide is not retained by adsorption.
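Because every adsorbent bed saturates, its service life can be anticipated with a rough mass balance: the mass of pollutant the bed can hold divided by the mass arriving per hour. The sketch below is a simplified illustration assuming complete capture up to saturation; the bed mass, working capacity, airflow and concentration are invented example figures, and real service life also depends on humidity, temperature and breakthrough behaviour.

```python
def carbon_bed_service_hours(bed_mass_kg, working_capacity, airflow_m3_h, inlet_conc_mg_m3):
    """Rough estimate of how long an activated-carbon cartridge lasts
    before saturation.

    working_capacity: kg of pollutant retained per kg of carbon
    """
    pollutant_capacity_mg = bed_mass_kg * working_capacity * 1e6   # kg -> mg
    pollutant_load_mg_h = airflow_m3_h * inlet_conc_mg_m3          # mg arriving per hour
    return pollutant_capacity_mg / pollutant_load_mg_h

# Example: 5 kg of carbon, 10% working capacity, 500 m3/h at 2 mg/m3 of organic vapour
print(f"{carbon_bed_service_hours(5.0, 0.10, 500.0, 2.0):.0f} h")  # 500 h
```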
Gas absorption
Eliminating gases and fumes by absorption involves a system that fixes molecules by passing them through an absorbent solution with which they react chemically. This is a very selective method and it uses reagents specific to the pollutant that needs to be captured.
The reagent is generally dissolved in water and must be replaced or regenerated before it is used up. Because this system is based on transferring the pollutant from the gaseous phase to the liquid phase, the physical and chemical properties of the reagent are very important: its solubility and reactivity above all, although pH, temperature and the area of contact between gas and liquid also play an important part in the transfer. Where the pollutant is highly soluble, it is sufficient to bubble the air through the solution to fix the pollutant to the reagent; where it is less soluble, the system employed must provide a greater area of contact between gas and liquid. Some examples of absorbents and the contaminants for which they are especially suited are given in table 2, and the role of solubility is illustrated in the sketch that follows the table.
Table 2. Reagents used as absorbents for various contaminants
| Absorbent | Contaminant |
|---|---|
| Diethylhydroxamine | Hydrogen sulphide |
| Potassium permanganate | Odiferous gases |
| Hydrochloric and sulphuric acids | Amines |
| Sodium sulphide | Aldehydes |
| Sodium hydroxide | Formaldehyde |
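The importance of solubility noted above can be made concrete with Henry’s law, which relates the equilibrium concentration of a gas in the absorbing liquid to its partial pressure in the air. The sketch below is a simplified illustration; the Henry’s-law constant used (about 1.2 mol per litre per atmosphere, roughly that of sulphur dioxide in water at room temperature) and the gas concentration are illustrative assumptions.

```python
def henry_dissolved_conc(partial_pressure_atm, k_h_mol_per_l_atm):
    """Equilibrium concentration of a gas in the absorbing liquid (Henry's law).

    The more soluble the gas (larger k_H), the more of it the liquid takes up
    per unit of gas-liquid contact, which is why highly soluble pollutants can
    be fixed simply by bubbling the air through the reagent solution.
    """
    return partial_pressure_atm * k_h_mol_per_l_atm

# Illustrative example: 10 ppm of a soluble gas (1e-5 atm), k_H ~ 1.2 mol/(L*atm)
print(f"{henry_dissolved_conc(1e-5, 1.2):.1e} mol/L")   # 1.2e-05 mol/L
```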
Ozonization
This method of improving the quality of indoor air is based on the use of ozone gas. Ozone is generated from oxygen gas by ultraviolet radiation or electric discharge, and employed to eliminate contaminants dispersed in air. The great oxidizing power of this gas makes it suitable for use as an antimicrobial agent, a deodorant and a disinfectant and it can help to eliminate noxious gases and fumes. It is also employed to purify spaces with high concentrations of carbon monoxide. In industrial settings it is used to treat the air in kitchens, cafeterias, food and fish processing plants, chemical plants, residual sewage treatment plants, rubber plants, refrigeration plants and so on. In office spaces it is used with air conditioning installations to improve the quality of indoor air.
Ozone is a bluish gas with a characteristic penetrating smell. At high concentrations it is toxic and even fatal. The intentional, accidental and natural production of ozone should be differentiated. Ozone is an extremely toxic and irritating gas on both short-term and long-term exposure, and because of the way it reacts in the body, no exposure level is known to be free of biological effects. These data are discussed more fully in the chemicals section of this Encyclopaedia.
Processes that employ ozone should be carried out in enclosed spaces or have a localized extraction system to capture any release of the gas at its source. Ozone cylinders should be stored in refrigerated areas, away from any reducing agents, flammable materials or products that may catalyze its breakdown. If ozonizers operate at negative pressure and have automatic shut-off devices in case of failure, the possibility of leaks is minimized.
Electrical equipment for processes that employ ozone should be perfectly insulated and should be maintained by experienced personnel. Ozonizers, their conduits and accessory equipment should have devices that shut the ozonizer down immediately when a leak is detected; when the ventilation, dehumidifying or refrigeration functions lose efficiency; when excess pressure or a vacuum occurs (depending on the system); or when the output of the system is either excessive or insufficient.
When ozonizers are installed, they should be provided with ozone-specific detectors. The sense of smell cannot be trusted because it becomes saturated. Ozone leaks can be detected with reactive strips of potassium iodide that turn blue, but this is not a specific method because the test is positive for most oxidants. It is better to monitor for leaks continuously using electrochemical cells, ultraviolet photometry or chemiluminescence, with the chosen detection device connected directly to an alarm system that acts when certain concentrations are reached.
People in urban settings spend between 80 and 90% of their time in indoor spaces while carrying out sedentary activities, both during work and during leisure time. (See figure 1).
Figure 1. Urban dwellers spend 80 to 90% of their time indoors
This fact led to the creation within these indoor spaces of environments that were more comfortable and homogeneous than those found outdoors with their changing climatic conditions. To make this possible, the air within these spaces had to be conditioned, being warmed during the cold season and cooled during the hot season.
For air conditioning to be efficient and cost-effective it was necessary to control the air coming into the buildings from the outside, which could not be expected to have the desired thermal characteristics. The result was increasingly airtight buildings and more stringent control of the amount of ambient air that was used to renew stagnant indoor air.
The energy crisis at the beginning of the 1970s, and the resulting need to save energy, often led to drastic reductions in the volume of outside air used for renewal and ventilation. The common practice then was to recycle the air inside a building many times over, with the aim of reducing the cost of air conditioning. But something else began to happen: the number of complaints, and of discomfort and health problems among the occupants of these buildings, increased considerably. This, in turn, increased the social and financial costs of absenteeism and led specialists to study the origin of complaints that, until then, had been thought to be independent of pollution.
It is not a complicated matter to explain what led to the appearance of complaints: buildings are built more and more hermetically, the volume of air supplied for ventilation is reduced, more materials and products are used to insulate buildings thermally, the number of chemical products and synthetic materials used multiplies and diversifies and individual control of the environment is gradually lost. The result is an indoor environment that is increasingly contaminated.
The occupants of buildings with degraded environments react, for the most part, by complaining about aspects of their environment and by presenting clinical symptoms. The symptoms most commonly reported include irritation of the mucous membranes (eyes, nose and throat), headaches, shortness of breath, a higher incidence of colds, allergies and so on.
When the time comes to define the possible causes that trigger these complaints, the apparent simplicity of the task gives way in fact to a very complex situation as one attempts to establish the relation of cause and effect. In this case one must look at all the factors (whether environmental or of other origins) that may be implicated in the complaints or the health problems that have appeared.
The conclusion—after many years of studying this problem—is that these problems have multiple origins. The exceptions are those cases where the relationship of cause and effect has been clearly established, as in the case of the outbreak of Legionnaires’ disease, for example, or the problems of irritation or of increased sensitivity due to exposure to formaldehyde.
This phenomenon has been given the name sick building syndrome, defined as the set of symptoms affecting the occupants of a building in which complaints of malaise are more frequent than might reasonably be expected.
Table 1 shows some examples of pollutants and the most common sources of emissions that can be associated with a drop in the quality of indoor air.
In addition to indoor air quality, which is affected by chemical and biological pollutants, sick building syndrome is attributed to many other factors. Some are physical, such as heat, noise and illumination; some are psychosocial, chief among them the way work is organized, labour relations, the pace of work and the workload.
Table 1. The most common indoor pollutants and their sources
| Site | Sources of emission | Pollutants |
|---|---|---|
| Outdoors | Fixed sources (industrial sites, energy production) | Sulphur dioxide, nitrogen oxides, ozone, particulate matter, carbon monoxide, organic compounds |
| Outdoors | Motor vehicles | Carbon monoxide, lead, nitrogen oxides |
| Outdoors | Soil | Radon, microorganisms |
| Indoors | Construction materials: stone, concrete | Radon |
| Indoors | Construction materials: wood composites, veneer | Formaldehyde, organic compounds |
| Indoors | Construction materials: insulation | Formaldehyde, fiberglass |
| Indoors | Construction materials: fire retardants | Asbestos |
| Indoors | Construction materials: paint | Organic compounds, lead |
| Indoors | Equipment and installations: heating systems, kitchens | Carbon monoxide and dioxide, nitrogen oxides, organic compounds, particulate matter |
| Indoors | Equipment and installations: photocopiers | Ozone |
| Indoors | Equipment and installations: ventilation systems | Fibres, microorganisms |
| Indoors | Occupants: metabolic activity | Carbon dioxide, water vapour, odours |
| Indoors | Occupants: biological activity | Microorganisms |
| Indoors | Human activity: smoking | Carbon monoxide, other compounds, particulate matter |
| Indoors | Human activity: air fresheners | Fluorocarbons, odours |
| Indoors | Human activity: cleaning | Organic compounds, odours |
| Indoors | Human activity: leisure, artistic activities | Organic compounds, odours |
Indoor air plays a very important role in sick building syndrome, and controlling its quality can therefore help, in most cases, to rectify or help improve conditions that lead to the appearance of the syndrome. It should be remembered, however, that air quality is not the only factor that should be considered in evaluating indoor environments.
Measures for the Control of Indoor Environments
Experience shows that most of the problems that occur in indoor environments are the result of decisions made during the design and construction of a building. Although these problems can be solved later by taking corrective measures, it should be pointed out that preventing and correcting deficiencies during the design of the building is more effective and cost-efficient.
The great variety of possible sources of pollution determines the multiplicity of corrective actions that can be taken to bring them under control. The design of a building may involve professionals from various fields, such as architects, engineers, interior designers and others. It is therefore important at this stage to keep in mind the different factors that can help eliminate or minimize future problems arising from poor air quality. The factors that should be considered are discussed below.
Selecting a building site
Air pollution may originate at sources that are close to or far from the chosen site. This type of pollution includes, for the most part, organic and inorganic gases that result from combustion—whether from motor vehicles, industrial plants, or electrical plants near the site—and airborne particulate matter of various origins.
Pollution found in the soil includes gaseous compounds from buried organic matter and radon. These contaminants can penetrate into the building through cracks in the building materials that are in contact with the soil or by migration through semi-permeable materials.
When the construction of a building is in the planning stages, the different possible sites should be evaluated. The best site should be chosen, taking these facts and information into consideration:
Local sources of pollution, on the other hand, must be controlled using specific techniques such as draining or cleaning the soil, depressurizing the soil, or erecting architectural or landscaping barriers.
Architectural design
The integrity of a building has been, for centuries, a fundamental requirement when planning and designing a new building. To this end consideration has been given, today as in the past, to the capacity of materials to withstand degradation by humidity, temperature changes, air movement, radiation, attack by chemical and biological agents, and natural disasters.
That the above-mentioned factors should be considered in any architectural project is not at issue here; in addition, however, the project must embody the right decisions with regard to the health and well-being of the occupants. During this phase of the project, decisions must be made about such concerns as the design of interior spaces, the selection of materials, the location of activities that could be potential sources of pollution, the openings of the building to the outside, the windows and the ventilation system.
Building openings
Effective measures of control during the design of the building consist of planning the location and orientation of these openings with an eye to minimizing the amount of contamination that can enter the building from previously detected sources of pollution. The following considerations should be kept in mind:
Figure 2. Penetration of pollution from the outside
Windows
During recent years the trend of the 1970s and 1980s has been reversed, and there is now a tendency to include operable windows in new architectural projects. This confers several advantages. One is the ability to provide supplementary ventilation in those areas (few in number, it is hoped) that need it, provided that the ventilation system has sensors in those areas to prevent imbalances. It should be kept in mind that the ability to open a window does not always guarantee that fresh air will enter a building; if the ventilation system is pressurized, opening a window will not provide extra ventilation. Other advantages are of a decidedly psychosocial character, allowing occupants a certain degree of individual control over their surroundings and direct and visual access to the outdoors.
Protection against humidity
The principal means of control consist of reducing humidity in the foundations of the building, where micro-organisms, especially fungi, can frequently spread and develop.
Dehumidifying the area and pressurizing the soil can prevent the appearance of biological agents and can also prevent the penetration of chemical pollutants that may be present in the soil.
Sealing and controlling the enclosed areas of the building most susceptible to humidity in the air is another measure that should be considered, since humidity can damage the materials used to clad the building, with the result that these materials may then become a source of microbiological contamination.
Planning of indoor spaces
It is important to know during the planning stages the use to which the building will be put or the activities that will be carried out within it. It is important above all to know which activities may be a source of contamination; this knowledge can then be used to limit and control these potential sources of pollution. Some examples of activities that may be sources of contamination within a building are the preparation of food, printing and graphic arts, smoking and the use of photocopying machines.
These activities should be located in specific, separate locales, isolated from other activities, so that the occupants of the building are affected as little as possible.
It is advisable that these processes be provided with a localized extraction system and/or general ventilation systems with special characteristics. The first of these measures is intended to control contaminants at the source of emission. The second, applicable when there are numerous sources, when they are dispersed within a given space, or when the pollutant is extremely dangerous, should comply with the following requirements: it should be capable of providing volumes of new air which are adequate given the established standards for the activity in question, it should not reuse any of the air by mixing it with the general flow of ventilation in the building and it should include supplementary forced-air extraction where needed. In such cases the flow of air in these locales should be carefully planned, to avoid transferring pollutants between contiguous spaces—by creating, for example, negative pressure in a given space.
Sometimes control is achieved by eliminating or reducing the presence of pollutants in the air by filtration or by cleaning the air chemically. In using these control techniques, the physical and chemical characteristics of the pollutants should be kept in mind. Filtration systems, for instance, are adequate for the removal of particulate matter from the air—so long as the efficiency of the filter is matched to the size of the particles that are being filtered—but allow gases and vapours to pass through.
The elimination of the source of pollution is the most effective way to control pollution in indoor spaces. A good example is the restriction or prohibition of smoking in the workplace. Where smoking is permitted, it is generally restricted to special areas equipped with dedicated ventilation systems.
Selection of materials
In trying to prevent possible pollution problems within a building, attention should be given to the characteristics of the materials used for construction and decoration, to the furnishings, the normal work activities that will be performed, the way the building will be cleaned and disinfected and the way insects and other pests will be controlled. It is also possible to reduce the levels of volatile organic compounds (VOCs), for example, by considering only materials and furniture that have known rates of emission for these compounds and selecting those with the lowest levels.
Today, even though some laboratories and institutions have carried out studies on emissions of this kind, the information available on the rates of emission of contaminants for construction materials is scarce; this scarcity is moreover aggravated by the vast number of products available and the variability they exhibit over time.
In spite of this difficulty, some producers have begun to study their products and to include, usually at the request of the consumer or the construction professional, information on the research that has been done. Products are more and more frequently labelled environmentally safe, non-toxic and so on.
There are still many problems to overcome, however. Examples of these problems include the high cost of the necessary analyses both in time and money; the lack of standards for the methods used to assay the samples; the complicated interpretation of results obtained due to lack of knowledge of the health effects of some contaminants; and the lack of agreement among researchers on whether materials with high levels of emission that emit for a short period of time are preferable to materials with low levels of emission that emit over longer periods of time.
But the fact is that in coming years the market for construction and decoration materials will become more competitive and will come under more legislative pressure. This will result in the elimination of some products or their substitution with other products that have lower rates of emission. Measures of this sort are already being taken with the adhesives used in the production of moquette fabric for upholstery and are further exemplified by the elimination of dangerous compounds such as mercury and pentachlorophenol in the production of paint.
Until more is known and legislative regulation in this field matures, decisions as to the selection of the most appropriate materials and products to use or install in new buildings will be left to the professionals. Outlined here are some considerations that can help them arrive at a decision:
Ventilation systems and the control of indoor climates
In enclosed spaces, ventilation is one of the most important methods for controlling air quality. There are so many sources of pollution in these spaces, and their characteristics are so varied, that it is almost impossible to manage them completely at the design stage. The pollution generated by the occupants of the building themselves, through the activities they engage in and the products they use for personal hygiene, is a case in point; in general, these sources of contamination are beyond the control of the designer.
Ventilation is, therefore, the method of control normally used to dilute and eliminate contaminants from polluted indoor spaces; it may be carried out with clean outdoor air or recycled air that is conveniently purified.
Many different points need to be considered in designing a ventilation system if it is to serve as an adequate pollution control method. Among them are the quality of outside air that will be used; the special requirements of certain pollutants or of their generating source; the preventive maintenance of the ventilation system itself, which should also be considered a possible source of contamination; and the distribution of air inside the building.
Table 2 summarizes the main points that should be considered in the design of a ventilation system for the maintenance of quality indoor environments.
In a typical ventilation/air conditioning system, air that has been taken from outside and that has been mixed with a variable portion of recycled air passes through different air conditioning systems, is usually filtered, is heated or cooled according to the season and is humidified or dehumidified as needed.
Table 2. Basic requirements for a ventilation system by dilution
| System component | Requirement |
|---|---|
| Dilution by outside air | A minimum volume of air per occupant per hour should be guaranteed. |
| | The volume of inside air should be renewed a minimum number of times per hour. |
| | The volume of outside air supplied should be increased according to the intensity of the sources of pollution. |
| | Direct extraction to the outside should be guaranteed for spaces where pollution-generating activities take place. |
| Air intake locations | Placing air intakes near plumes of known sources of pollution should be avoided. |
| | Areas near stagnant water and the aerosols that emanate from refrigeration towers should be avoided. |
| | The entry of animals should be prevented, and birds should be kept from perching or nesting near intakes. |
| Location of air extraction | Extraction vents should be placed as far as possible from air intake locations, and the height of the discharge vent should be increased. |
| | Discharge vents should be oriented in the opposite direction from air intake hoods. |
| Filtration and cleaning | Mechanical and electrical filters for particulate matter should be used. |
| | A system for the chemical elimination of pollutants should be installed. |
| Microbiological control | Porous materials should not be placed in direct contact with air currents, including those in the distribution conduits. |
| | Stagnant water where condensation forms in air-conditioning units should be avoided. |
| | A preventive maintenance programme should be established, and periodic cleaning of humidifiers and refrigeration towers should be scheduled. |
| Air distribution | Dead zones (where there is no ventilation) and the stratification of air should be eliminated and prevented. |
| | Air should preferably be mixed where the occupants breathe it. |
| | Adequate pressures should be maintained in all locales according to the activities performed in them. |
| | Air propulsion and extraction systems should be controlled to maintain equilibrium between them. |
Once treated, air is distributed by conduits to every area of the building and is delivered through dispersion gratings. It then mixes throughout the occupied spaces exchanging heat and renewing the indoor atmosphere before it is at last drawn away from each locale by return ducts.
The amount of outside air that should be used to dilute and to eliminate pollutants is the subject of much study and controversy. In recent years there have been changes in the recommended levels of outside air and in the published ventilation standards, in most cases involving increases in the volumes of outside air used. In spite of this, it has been noted that these recommendations are insufficient to control effectively all the sources of pollution. This is because the established standards are based on occupancy and disregard other important sources of pollution, such as the materials employed in construction, the furnishings and the quality of the air taken from the outside.
Therefore, the amount of ventilation required should be based on three fundamental considerations: the quality of air one wishes to obtain, the quality of the outside air available and the total pollution load in the space to be ventilated. This is the starting point of the studies carried out by Professor P.O. Fanger and his team (Fanger 1988, 1989). These studies are geared to establishing new ventilation standards that meet air quality requirements and provide an acceptable level of comfort as perceived by the occupants.
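A minimal sketch of the kind of calculation this approach leads to is shown below, using the olf/decipol formulation associated with Fanger’s work, in which the pollution load is expressed in olfs and perceived air quality in decipols: the required flow of outside air is roughly Q = 10·G/(Ci − Co) litres per second. The numerical values in the example are invented illustrations, not design figures.

```python
def required_outside_air_l_s(pollution_load_olf, perceived_quality_decipol, outdoor_quality_decipol):
    """Outside-air flow needed to reach a target perceived air quality,
    using the olf/decipol comfort formulation: Q = 10 * G / (Ci - Co),
    with Q in litres per second."""
    return 10.0 * pollution_load_olf / (perceived_quality_decipol - outdoor_quality_decipol)

# Illustrative example: an office with a total load of 20 olf,
# a target of 1.4 decipol indoors and outside air at 0.1 decipol
print(f"{required_outside_air_l_s(20.0, 1.4, 0.1):.0f} l/s")   # 154 l/s
```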
One of the factors that affects the quality of indoor air is the quality of the outside air available. The characteristics of exterior sources of pollution, such as vehicular traffic and industrial or agricultural activities, put their control beyond the reach of the designers, owners and occupants of the building. In cases of this sort the environmental authorities must assume responsibility for establishing environmental protection guidelines and for making sure they are adhered to. There are, however, many control measures that can be applied and that are useful in reducing and eliminating airborne pollution.
As was mentioned above, special care should be given to the location and orientation of air intake and exhaust ducts, in order to avoid drawing pollution back in from the building itself or from its installations (refrigeration towers, kitchen and bathroom vents, etc.), as well as from buildings in the immediate vicinity.
When outside air or recycled air is found to be polluted, the recommended control measures consist of filtering it and cleaning it. The most effective method of removing particulate matter is with electrostatic precipitators and mechanical retention filters. The latter will be most effective the more precisely they are calibrated to the size of the particles to be eliminated.
The use of systems capable of eliminating gases and vapours through chemical absorption or adsorption is rare in non-industrial settings; it is more common to find systems that merely mask the pollution problem, especially odours, for example by the use of air fresheners.
Other techniques to clean and improve the quality of air include the use of ionizers and ozonizers. Prudence is the best policy regarding these systems until their real effectiveness and their possible negative health effects are clearly established.
Once air has been treated and cooled or heated it is delivered to indoor spaces. Whether the distribution of air is acceptable or not will depend, in great measure, on the selection, the number and the placement of diffusion grates.
Given the differences of opinion on the effectiveness of the different procedures that should be followed to mix air, some designers have begun to use, in some situations, air distribution systems that deliver air at floor level or on the walls as an alternative to diffusion grates on the ceiling. In any case, the location of the return registers should be carefully planned to avoid short-circuiting the entry and exit of air, which would prevent it from mixing completely as shown in figure 3.
Figure 3. Example of how air distribution can be shortcircuited in indoor spaces
Depending on how compartmentalized workspaces are, air distribution may present a variety of different problems. For example, in open workspaces where diffusion grates are on the ceiling, air in the room may not mix completely. This problem tends to be compounded when the type of ventilation system used can supply variable volumes of air. The distribution conduits of these systems are equipped with terminals that modify the amount of air supplied to the conduits based on the data received from area thermostats.
A difficulty can develop when air flows at a reduced rate through a significant number of these terminals, a situation that arises when the thermostats of different areas reach the desired temperature and the power to the fans that push the air is automatically reduced. The result is that the total flow of air through the system is less, in some cases much less, or even that the intake of new outside air is interrupted altogether. Placing sensors that control the flow of outside air at the intake of the system can ensure that a minimum flow of new air is maintained at all times.
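The remedy described above, a flow sensor at the outside-air intake that overrides the variable-volume logic, can be summarized as a simple control rule. The sketch below is illustrative only; the function name, set point and damper command are hypothetical and not part of any particular building-management system.

```python
# Illustrative minimum; the actual value comes from the applicable ventilation standard.
MIN_OUTSIDE_AIR_M3_H = 1000.0

def adjust_outside_air_damper(measured_outside_air_m3_h, damper_position):
    """Open the outside-air damper a step further whenever the measured flow
    of new air falls below the minimum, e.g. because VAV terminals have
    throttled back and the supply fans have slowed down.

    damper_position is a fraction between 0 (closed) and 1 (fully open).
    """
    if measured_outside_air_m3_h < MIN_OUTSIDE_AIR_M3_H:
        damper_position = min(1.0, round(damper_position + 0.05, 2))
    return damper_position

# Example: intake flow has dropped to 800 m3/h with the damper 40% open
print(adjust_outside_air_damper(800.0, 0.40))   # 0.45
```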
Another problem that regularly emerges is that air flow is blocked by the placement of partial or total partitions in the workspace. There are many ways to correct this situation. One is to leave an open space at the lower end of the panels that divide the cubicles. Others include the installation of supplementary fans and the placement of diffusion grilles at floor level. The use of supplementary induction fan coils aids in mixing the air and allows individualized control of the thermal conditions of a given space.

Without detracting from the importance of air quality per se and the means to control it, it should be kept in mind that a comfortable indoor environment is attained through the equilibrium of the different elements that affect it. Taking any action, even a positive one, that affects one element without regard to the rest may upset the equilibrium among them, leading to new complaints from the occupants of the building. Tables 3 and 4 show how some actions intended to improve the quality of indoor air can compromise other elements of the equation, and how adjustments to the working environment may in turn have repercussions on the quality of indoor air.
Table 3. Indoor air quality control measures and their effects on indoor environments
| Action | Effect |
|---|---|
| Thermal environment: | |
| Increase in the volume of fresh air | Increase in draughts |
| Reduction of relative humidity to check microbiological agents | Insufficient relative humidity |
| Acoustic environment: | |
| Intermittent supply of outside air to conserve energy | Intermittent noise exposure |
| Visual environment: | |
| Reduction in the use of fluorescent lights | Reduction in the effectiveness of the illumination |
| Psychosocial environment: | |
| Open offices | Loss of privacy and of a defined workspace |
Table 4. Adjustments of the working environment and their effects on indoor air quality
| Action | Effect |
|---|---|
| Thermal environment: | |
| Basing the supply of outside air on thermal criteria | Insufficient volumes of fresh air |
| The use of humidifiers | Potential microbiological hazard |
| Acoustic environment: | |
| Increase in the use of insulating materials | Possible release of pollutants |
| Visual environment: | |
| Systems based solely on artificial illumination | Dissatisfaction, plant mortality, growth of microbiological agents |
| Psychosocial environment: | |
| Use of equipment in the workspace, such as photocopiers and printers | Increase in the level of pollution |
Ensuring the quality of the overall environment of a building from the design stage onwards depends, to a great extent, on the building's management, but above all on a positive attitude towards its occupants. The occupants are the best sensors the owners of a building can rely on to gauge the proper functioning of the installations intended to provide a quality indoor environment.
Control systems based on a “Big Brother” approach, making all the decisions regulating interior environments such as lighting, temperature, ventilation, and so on, tend to have a negative effect on the psychological and sociological well-being of the occupants. Occupants then see their capacity to create environmental conditions that meet their needs diminished or blocked. In addition, control systems of this type are sometimes incapable of changing to meet the different environmental requirements that may arise due to changes in the activities performed in a given space, the number of people working in it or changes in the way space is allocated.
The solution could consist of installing a system of centralized control for the indoor environment, with localized controls regulated by the occupants. This idea, very commonly used in the realm of the visual environment where general illumination is supplemented by more localized illumination, should be expanded to other concerns: general and localized heating and air-conditioning, general and localized supplies of fresh air and so on.
To sum up, it can be said that in each instance a portion of the environmental conditions should be optimized by means of a centralized control based on safety, health and economic considerations, while the different local environmental conditions should be optimized by the users of the space. Different users will have different needs and will react differently to given conditions. A compromise of this sort between the different parts will doubtless lead to greater satisfaction, well-being and productivity.
David A. Warrell*
* Adapted from The Oxford Textbook of Medicine, edited by DJ Weatherall, JGG Ledingham and DA Warrell (2nd edition, 1987), pp. 6.66-6.77. By permission of Oxford University Press.
Clinical Features
A proportion of patients bitten by venomous snakes (60%, depending on the species) will develop minimal or no signs of envenoming (poisoning) despite having puncture marks which indicate that the snake’s fangs have penetrated the skin.
Fear and effects of treatment, as well as the snake’s venom, contribute to the symptoms and signs. Even patients who are not envenomed may feel flushed, dizzy and breathless, with constriction of the chest, palpitations, sweating and acroparaesthesiae. Tight tourniquets may produce congested and ischaemic limbs; local incisions at the site of the bite may cause bleeding and sensory loss; and herbal medicines often induce vomiting.
The earliest symptoms directly attributable to the bite are local pain and bleeding from the fang punctures, followed by pain, tenderness, swelling and bruising extending up the limb, lymphangitis and tender enlargement of regional lymph nodes. Early syncope, vomiting, colic, diarrhoea, angio-oedema and wheezing may occur in patients bitten by European Vipera, Daboia russelii, Bothrops sp, Australian Elapids and Atractaspis engaddensis. Nausea and vomiting are common symptoms of severe envenoming.
Types of bites
Colubridae (back-fanged snakes such as Dispholidus typus, Thelotornis sp, Rhabdophis sp, Philodryas sp)
There is local swelling, bleeding from the fang marks and sometimes (Rhabdophis tigrinus) fainting. Later, vomiting, colicky abdominal pain and headache, and widespread systemic bleeding with extensive ecchymoses (bruising), incoagulable blood, intravascular haemolysis and kidney failure may develop. Envenoming may develop slowly over several days.
Atractaspididae (burrowing asps, Natal black snake)
Local effects include pain, swelling, blistering, necrosis and tender enlargement of local lymph nodes. Violent gastro-intestinal symptoms (nausea, vomiting and diarrhoea), anaphylaxis (dyspnoea, respiratory failure, shock) and ECG changes (a-v block, ST, T-wave changes) have been described in patients envenomed by A. engaddensis.
Elapidae (cobras, kraits, mambas, coral snakes and Australian venomous snakes)
Bites by kraits, mambas, coral snakes and some cobras (e.g., Naja haje and N. nivea) produce minimal local effects, whereas bites by African spitting cobras (N. nigricollis, N. mossambica, etc.) and Asian cobras (N. naja, N. kaouthia, N. sumatrana, etc.) cause tender local swelling which may be extensive, blistering and superficial necrosis.
Early symptoms of neurotoxicity before there are objective neurological signs include vomiting, “heaviness” of the eyelids, blurred vision, fasciculations, paraesthesiae around the mouth, hyperacusis, headache, dizziness, vertigo, hypersalivation, congested conjunctivae and “gooseflesh”. Paralysis starts as ptosis and external ophthalmoplegia appearing as early as 15 minutes after the bite, but sometimes delayed for ten hours or more. Later the face, palate, jaws, tongue, vocal cords, neck muscles and muscles of deglutition become progressively paralysed. Respiratory failure may be precipitated by upper airway obstruction at this stage, or later after paralysis of intercostal muscles, diaphragm and accessory muscles of respiration. Neurotoxic effects are completely reversible, either acutely in response to antivenom or anticholinesterases (e.g., following bites by Asian cobras, some Latin American coral snakes—Micrurus, and Australian death adders—Acanthophis) or they may wear off spontaneously in one to seven days.
Envenoming by Australian snakes causes early vomiting, headache and syncopal attacks, neurotoxicity, haemostatic disturbances and, with some species, ECG changes, generalized rhabdomyolysis and kidney failure. Painful enlargement of regional lymph nodes suggests impending systemic envenoming, but local signs are usually absent or mild except after bites by Pseudechis sp.
Venom ophthalmia caused by “spitting” elapids
Patients “spat” at by spitting elapids experience intense pain in the eye, conjunctivitis, blepharospasm, palpebral oedema and leucorrhoea. Corneal erosions are detectable in more than half the patients spat at by N. nigricollis. Rarely, venom is absorbed into the anterior chamber, causing hypopyon and anterior uveitis. Secondary infection of corneal abrasions may lead to permanent blinding opacities or panophthalmitis.
Viperidae (vipers, adders, rattlesnakes, lance-headed vipers, moccasins and pit vipers)
Local envenoming is relatively severe. Swelling may become detectable within 15 minutes but is sometimes delayed for several hours. It spreads rapidly and may involve the whole limb and adjacent trunk. There is associated pain and tenderness in regional lymph nodes. Bruising, blistering and necrosis may appear during the next few days. Necrosis is particularly frequent and severe following bites by some rattlesnakes, lance-headed vipers (genus Bothrops), Asian pit vipers and African vipers (genera Echis and Bitis). When the envenomed tissue is contained in a tight fascial compartment such as the pulp space of the fingers or toes or the anterior tibial compartment, ischaemia may result. If there is no swelling two hours after a viper bite it is usually safe to assume that there has been no envenoming. However, fatal envenoming by a few species can occur in the absence of local signs (e.g., Crotalus durissus terrificus, C. scutulatus and Burmese Russell’s viper).
Haemostatic abnormalities are a consistent feature of envenoming by Viperidae. Persistent bleeding from fang puncture wounds, venepuncture or injection sites, other new and partially healed wounds and post partum, suggests that the blood is incoagulable. Spontaneous systemic haemorrhage is most often detected in the gums, but may also be seen as epistaxis, haematemesis, cutaneous ecchymoses, haemoptysis, and subconjunctival, retroperitoneal and intracranial haemorrhages. Patients envenomed by the Burmese Russell’s viper may bleed into the anterior pituitary gland (Sheehan’s syndrome).
Hypotension and shock are common in patients bitten by some of the North American rattlesnakes (e.g., C. adamanteus, C. atrox and C. scutulatus), Bothrops, Daboia and Vipera species (e.g., V. palaestinae and V. berus). The central venous pressure is usually low and the pulse rate rapid, suggesting hypovolaemia, for which the usual cause is extravasation of fluid into the bitten limb. Patients envenomed by Burmese Russell’s vipers show evidence of generally increased vascular permeability. Direct involvement of the heart muscle is suggested by an abnormal ECG or cardiac arrhythmia. Patients envenomed by some species of the genera Vipera and Bothrops may experience transient recurrent fainting attacks associated with features of an autopharmacological or anaphylactic reaction such as vomiting, sweating, colic, diarrhoea, shock and angio-oedema, appearing as early as five minutes or as late as many hours after the bite.
Renal (kidney) failure is the major cause of death in patients envenomed by Russell’s vipers who may become oliguric within a few hours of the bite and have loin pain suggesting renal ischaemia. Renal failure is also a feature of envenoming by Bothrops species and C. d. terrificus.
Neurotoxicity, resembling that seen in patients bitten by Elapidae, is seen after bites by C. d. terrificus, Gloydius blomhoffii, Bitis atropos and Sri Lankan D. russelii pulchella. There may be evidence of generalized rhabdomyolysis. Progression to respiratory or generalized paralysis is unusual.
Laboratory Investigations
The peripheral neutrophil count is raised to 20,000 cells per microlitre or more in severely envenomed patients. Initial haemo-concentration, resulting from extravasation of plasma (Crotalus species and Burmese D. russelii), is followed by anaemia caused by bleeding or, more rarely, haemolysis. Thrombocytopenia is common following bites by pit vipers (e.g., C. rhodostoma, Crotalus viridis helleri) and some Viperidae (e.g., Bitis arietans and D. russelii), but is unusual after bites by Echis species. A useful test for venom-induced defibrin(ogen)ation is the simple whole blood clotting test. A few millilitres of venous blood is placed in a new, clean, dry, glass test tube, left undisturbed for 20 minutes at ambient temperature, and then tipped to see if it has clotted or not. Incoagulable blood indicates systemic envenoming and may be diagnostic of a particular species (for example Echis species in Africa). Patients with generalized rhabdomyolysis show a steep rise in serum creatine kinase, myoglobin and potassium. Black or brown urine suggests generalized rhabdomyolysis or intravascular haemolysis. Concentrations of serum enzymes such as creatine phosphokinase and aspartate aminotransferase are moderately raised in patients with severe local envenoming, probably because of local muscle damage at the site of the bite. Urine should be examined for blood/haemoglobin, myoglobin and protein and for microscopic haematuria and red cell casts.
Treatment
First aid
Patients should be moved to the nearest medical facility as quickly and comfortably as possible, avoiding movement of the bitten limb, which should be immobilized with a splint or sling.
Most traditional first-aid methods are potentially harmful and should not be used. Local incisions and suction may introduce infection, damage tissues and cause persistent bleeding, and are unlikely to remove much venom from the wound. The vacuum extractor method is of unproven benefit in human patients and could damage soft tissues. Potassium permanganate and cryotherapy potentiate local necrosis. Electric shock is potentially dangerous and has not proved beneficial. Tourniquets and compression bands can cause gangrene, fibrinolysis, peripheral nerve palsies and increased local envenoming in the occluded limb.
The pressure immobilization method involves firm but not tight bandaging of the entire bitten limb with a crepe bandage 4-5 m long by 10 cm wide starting over the site of the bite and incorporating a splint. In animals, this method was effective in preventing systemic uptake of Australian elapid and other venoms, but in humans it has not been subjected to clinical trials. Pressure immobilization is recommended for bites by snakes with neurotoxic venoms (e.g., Elapidae, Hydrophiidae) but not when local swelling and necrosis may be a problem (e.g., Viperidae).
Pursuing, capturing or killing the snake should not be encouraged, but if the snake has been killed already it should be taken with the patient to hospital. It must not be touched with bare hands, as reflex bites may occur even after the snake is apparently dead.
Patients being transported to hospital should be laid on their side to prevent aspiration of vomit. Persistent vomiting is treated with chlorpromazine by intravenous injection (25 to 50 mg for adults, 1 mg/kg body weight for children). Syncope, shock, angio-oedema and other anaphylactic (autopharmacological) symptoms are treated with 0.1% adrenaline by subcutaneous injection (0.5 ml for adults, 0.01 ml/kg body weight for children), and an antihistamine such as chlorpheniramine maleate is given by slow intravenous injection (10 mg for adults, 0.2 mg/kg body weight for children). Patients with incoagulable blood develop large haematomas after intramuscular and subcutaneous injections; the intravenous route should be used whenever possible. Respiratory distress and cyanosis are treated by establishing an airway, giving oxygen and, if necessary, assisted ventilation. If the patient is unconscious and no femoral or carotid pulses can be detected, cardiopulmonary resuscitation (CPR) should be started immediately.
Hospital treatment
Clinical assessment
In most cases of snakebite there are uncertainties about the species responsible and the quantity and composition of venom injected. Ideally, therefore, patients should be admitted to hospital for at least 24 hours of observation. Local swelling is usually detectable within 15 minutes of significant pit viper envenoming and within two hours of envenoming by most other snakes. Bites by kraits (Bungarus), coral snakes (Micrurus, Micruroides), some other elapids and sea snakes may cause no local envenoming. Fang marks are sometimes invisible. Pain and tender enlargement of lymph nodes draining the bitten area is an early sign of envenoming by Viperidae, some Elapidae and Australasian elapids. All the patient’s tooth sockets should be examined meticulously, as this is usually the first site at which spontaneous bleeding can be detected clinically; other common sites are the nose, eyes (conjunctivae), skin and gastro-intestinal tract. Bleeding from venepuncture sites and other wounds implies incoagulable blood. Hypotension and shock are important signs of hypovolaemia or cardiotoxicity, seen particularly in patients bitten by North American rattlesnakes and some Viperinae (e.g., V. berus, D. russelii, V. palaestinae). Ptosis (drooping of the eyelid) is the earliest sign of neurotoxic envenoming. Respiratory muscle power should be assessed objectively, for example by measuring vital capacity. Trismus, generalized muscle tenderness and brownish-black urine suggest rhabdomyolysis (Hydrophiidae). If a procoagulant venom is suspected, coagulability of whole blood should be checked at the bedside using the 20-minute whole blood clotting test.
Blood pressure, pulse rate, respiratory rate, level of consciousness, presence/absence of ptosis, extent of local swelling and any new symptoms must be recorded at frequent intervals.
Antivenom treatment
The most important decision is whether or not to give antivenom, as this is the only specific antidote. There is now convincing evidence that in patients with severe envenoming, the benefits of this treatment far outweigh the risk of antivenom reactions (see below).
General indications for antivenom
Antivenom is indicated if there are signs of systemic envenoming such as:
Supporting evidence of severe envenoming is a neutrophil leucocytosis, elevated serum enzymes such as creatine kinase and aminotransferases, haemoconcentration, severe anaemia, myoglobinuria, haemoglobinuria, methaemoglobinuria, hypoxaemia or acidosis.
In the absence of systemic envenoming, local swelling involving more than half the bitten limb, extensive blistering or bruising, bites on digits and rapid progression of swelling are indications for antivenom, especially in patients bitten by species whose venoms are known to cause local necrosis (e.g., Viperidae, Asian cobras and African spitting cobras).
Special indications for antivenom
Some developed countries have the financial and technical resources for a wider range of indications:
United States and Canada: After bites by the most dangerous rattlesnakes (C. atrox, C. adamanteus, C. viridis, C. horridus and C. scutulatus) early antivenom therapy is recommended before systemic envenoming is evident. Rapid spread of local swelling is considered to be an indication for antivenom, as is immediate pain or any other symptom or sign of envenoming after bites by coral snakes (Micruroides euryxanthus and Micrurus fulvius).
Australia: Antivenom is recommended for patients with proved or suspected snakebite if there are tender regional lymph nodes or other evidence of systemic spread of venom, and in anyone effectively bitten by an identified highly venomous species.
Europe: (Adder: Vipera berus and other European Vipera): Antivenom is indicated to prevent morbidity and reduce the length of convalescence in patients with moderately severe envenoming as well as to save the lives of severely envenomed patients. Indications are:
Patients bitten by European Vipera who show any evidence of envenoming should be admitted to hospital for observation for at least 24 hours. Antivenom should be given whenever there is evidence of systemic envenoming—(1) or (2) above—even if its appearance is delayed for several days after the bite.
Prediction of antivenom reactions
It is important to realize that most antivenom reactions are not caused by acquired Type I, IgE-mediated hypersensitivity but by complement activation by IgG aggregates or Fc fragments. Skin and conjunctival tests do not predict early (anaphylactic) or late (serum sickness type) antivenom reactions but delay treatment and may sensitize the patient. They should not be used.
Contraindications to antivenom
Patients with a history of reactions to equine antiserum suffer an increased incidence and severity of reactions when given equine antivenom. Atopic subjects have no increased risk of reactions, but if they develop a reaction it is likely to be severe. In such cases, reactions may be prevented or ameliorated by pretreatment with subcutaneous adrenaline, antihistamine and hydrocortisone, or by continuous intravenous infusion of adrenaline during antivenom administration. Rapid desensitization is not recommended.
Selection and administration of antivenom
Antivenom should be given only if its stated range of specificity includes the species responsible for the bite. Opaque solutions should be discarded, as precipitation of protein indicates loss of activity and increased risk of reactions. Monospecific (monovalent) antivenom is ideal if the biting species is known. Polyspecific (polyvalent) antivenoms are used in many countries because it is difficult to identify the snake responsible. Polyspecific antivenoms may be just as effective as monospecific ones but contain less specific venom-neutralizing activity per unit weight of immunoglobulin. Apart from the venoms used for immunizing the animal in which the antivenom has been produced, other venoms may be covered by paraspecific neutralization (e.g., Hydrophiidae venoms by tiger snake—Notechis scutatus—antivenom).
Antivenom treatment is indicated as long as signs of systemic envenoming persist (i.e., for several days) but ideally it should be given as soon as these signs appear. The intravenous route is the most effective. Infusion of antivenom diluted in approximately 5 ml of isotonic fluid/kg body weight is easier to control than intravenous “push” injection of undiluted antivenom given at the rate of about 4 ml/min, but there is no difference in the incidence or severity of antivenom reactions in patients treated by these two methods.
Dose of antivenom
Manufacturers’ recommendations are based on mouse protection tests and may be misleading. Clinical trials are needed to establish appropriate starting doses of major antivenoms. In most countries the dose of antivenom is empirical. Children must be given the same dose as adults.
Response to antivenom
Marked symptomatic improvement may be seen soon after antivenom has been injected. In shocked patients, the blood pressure may rise and consciousness return (C. rhodostoma, V. berus, Bitis arietans). Neurotoxic signs may improve within 30 minutes (Acanthophis sp, N. kaouthia), but this usually takes several hours. Spontaneous systemic bleeding usually stops within 15 to 30 minutes, and blood coagulability is restored within six hours of antivenom, provided that a neutralizing dose has been given. More antivenom should be given if severe signs of envenoming persist after one to two hours or if blood coagulability is not restored within about six hours. Systemic envenoming may recur hours or days after an initially good response to antivenom. This is explained by continuing absorption of venom from the injection site and the clearance of antivenom from the bloodstream. The apparent serum half-lives of equine F(ab’)2 antivenoms in envenomed patients range from 26 to 95 hours. Envenomed patients should therefore be assessed daily for at least three or four days.
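The advice to reassess patients daily follows directly from these kinetics. As a rough illustration, assuming first-order clearance (which the quoted half-lives imply), the fraction of antivenom still circulating after a given time is 0.5 raised to the power of time divided by half-life; with the half-life range quoted above, roughly 15 to 59 per cent of the dose remains after three days. The helper name and example times below are illustrative only.

```python
def fraction_remaining(hours_elapsed, half_life_hours):
    """Fraction of circulating antivenom remaining, assuming first-order clearance."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# After three days (72 h), using the half-life range quoted above (26-95 h)
print(f"{fraction_remaining(72, 26):.0%}")   # 15%
print(f"{fraction_remaining(72, 95):.0%}")   # 59%
```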
Antivenom reactions
Treatment of antivenom reactions
Adrenaline (epinephrine) is the effective treatment for early reactions; 0.5 to 1.0 ml of 0.1% (1 in 1000, 1 mg/ml) is given by subcutaneous injection to adults (children 0.01 ml/kg) at the first signs of a reaction. The dose may be repeated if the reaction is not controlled. An antihistamine H1 antagonist, such as chlorpheniramine maleate (10 mg for adults, 0.2 mg/kg for children) should be given by intravenous injection to combat the effects of histamine release during the reaction. Pyrogenic reactions are treated by cooling the patient and giving antipyretics (paracetamol). Late reactions respond to an oral antihistamine such as chlorpheniramine (2 mg every six hours for adults, 0.25 mg/kg/day in divided doses for children) or to oral prednisolone (5 mg every six hours for five to seven days for adults, 0.7 mg/kg/day in divided doses for children).
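Because the paediatric doses quoted above are weight based, the arithmetic can be sketched briefly. This is purely an illustration of the calculations implied by the figures in the text (0.01 ml/kg of 0.1% adrenaline subcutaneously and 0.2 mg/kg of chlorpheniramine intravenously for early reactions); the helper name is invented, and actual drug choice and dosing must follow current clinical guidance.

```python
def paediatric_early_reaction_doses(weight_kg):
    """Weight-based doses for the early-reaction drugs quoted above.
    Illustrative arithmetic only."""
    return {
        "adrenaline_0.1%_ml_sc": 0.01 * weight_kg,   # 0.01 ml/kg of 1 mg/ml solution
        "chlorpheniramine_mg_iv": 0.2 * weight_kg,   # 0.2 mg/kg by slow intravenous injection
    }

# Example: a 20 kg child
print(paediatric_early_reaction_doses(20))
# {'adrenaline_0.1%_ml_sc': 0.2, 'chlorpheniramine_mg_iv': 4.0}
```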
Supportive treatment
Neurotoxic envenoming
Bulbar and respiratory paralysis may lead to death from aspiration, airway obstruction or respiratory failure. A clear airway must be maintained and, if respiratory distress develops, a cuffed endotracheal tube should be inserted or tracheostomy performed. Anticholinesterases have a variable but potentially useful effect in patients with neurotoxic envenoming, especially when post-synaptic neurotoxins are involved. The “Tensilon test” should be done in all cases of severe neurotoxic envenoming, as it is in suspected myasthenia gravis. Atropine sulphate (0.6 mg for adults, 50 μg/kg body weight for children) is given by intravenous injection (to block muscarinic effects of acetylcholine) followed by an intravenous injection of edrophonium chloride (10 mg for adults, 0.25 mg/kg for children). Patients who respond convincingly can be maintained on neostigmine methyl sulphate (50 to 100 μg/kg body weight) and atropine, every four hours or by continuous infusion.
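The following sketch, offered only as an illustration of the weight-based figures quoted above and not as clinical guidance, works them through for a hypothetical 15 kg child; the function name and example weight are assumptions.

```python
# Illustrative sketch only, not clinical guidance: paediatric arithmetic for
# the "Tensilon test" and maintenance figures quoted in the text
# (hypothetical 15 kg child).

def tensilon_test_doses(weight_kg: float) -> dict:
    return {
        "atropine_iv_ug": 50 * weight_kg,            # 50 ug/kg
        "edrophonium_iv_mg": 0.25 * weight_kg,       # 0.25 mg/kg
        "neostigmine_ug_low": 50 * weight_kg,        # lower end of 50-100 ug/kg
        "neostigmine_ug_high": 100 * weight_kg,      # upper end of 50-100 ug/kg
    }

print(tensilon_test_doses(15))
```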
Hypotension and shock
If the jugular or central venous pressure is low or there is other clinical evidence of hypovolaemia or exsanguination, a plasma expander, preferably fresh whole blood or fresh frozen plasma, should be infused. If there is persistent or profound hypotension or evidence of increased capillary permeability (e.g., facial and conjunctival oedema, serous effusions, haemoconcentration, hypoalbuminaemia) a selective vasoconstrictor such as dopamine (starting dose 2.5 to 5 μg/kg body weight/min by infusion into a central vein) should be used.
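To make the infusion arithmetic concrete, the sketch below (an illustration only, not clinical guidance) converts the quoted dopamine starting dose of 2.5 to 5 μg/kg/min into a pump rate in ml/h. The bag concentration assumed here (400 mg in 250 ml) is a hypothetical example and is not taken from the text.

```python
# Illustrative sketch only, not clinical guidance: converting the quoted
# dopamine starting dose (2.5-5 ug/kg/min) into an infusion rate in ml/h.
# The bag strength (400 mg in 250 ml) is a hypothetical assumption.

def dopamine_rate_ml_per_h(dose_ug_kg_min: float,
                           weight_kg: float,
                           bag_mg: float = 400,
                           bag_ml: float = 250) -> float:
    concentration_ug_per_ml = bag_mg * 1000 / bag_ml   # e.g. 1,600 ug/ml
    return dose_ug_kg_min * weight_kg * 60 / concentration_ug_per_ml

print(round(dopamine_rate_ml_per_h(2.5, 60), 1))  # 60 kg at 2.5 ug/kg/min -> ~5.6 ml/h
```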
Oliguria and renal failure
Urine output, serum creatinine, urea and electrolytes should be measured each day in patients with severe envenoming and in those bitten by species known to cause renal failure (e.g., D. russelii, C. d. terrificus, Bothrops species, sea snakes). If urine output drops below 400 ml in 24 hours, urethral and central venous catheters should be inserted. If urine flow fails to increase after cautious rehydration and diuretics (e.g., frusemide up to 1000 mg by intravenous infusion), dopamine (2.5 μg/kg body weight/min by intravenous infusion) should be tried and the patient placed on strict fluid balance. If these measures are ineffective, peritoneal or haemodialysis or haemofiltration are usually required.
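For illustration only (not clinical guidance), the stepwise escalation described above can be written out as a simple decision sequence; the function name and its boolean flags are assumptions introduced here, and the threshold values are those quoted in the text.

```python
# Illustrative sketch only, not clinical guidance: the oliguria threshold and
# the stepwise escalation described in the text, as a simple decision sequence.

OLIGURIA_THRESHOLD_ML_PER_24H = 400   # from text

def renal_management_step(urine_ml_24h: float,
                          rehydrated: bool,
                          diuretic_given: bool,
                          dopamine_tried: bool) -> str:
    if urine_ml_24h >= OLIGURIA_THRESHOLD_ML_PER_24H:
        return "continue daily monitoring of urine output, creatinine, urea and electrolytes"
    if not rehydrated or not diuretic_given:
        return "insert urethral and central venous catheters; cautious rehydration and diuretics"
    if not dopamine_tried:
        return "dopamine 2.5 ug/kg/min by intravenous infusion; strict fluid balance"
    return "peritoneal or haemodialysis, or haemofiltration"

print(renal_management_step(300, rehydrated=True, diuretic_given=True, dopamine_tried=False))
```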
Local infection at the site of the bite
Bites by some species (e.g., Bothrops sp, C. rhodostoma) seem particularly likely to be complicated by local infections caused by bacteria in the snake’s venom or on its fangs. Such infections should be prevented with penicillin, chloramphenicol or erythromycin, together with a booster dose of tetanus toxoid, especially if the wound has been incised or tampered with in any way. An aminoglycoside such as gentamicin, together with metronidazole, should be added if there is evidence of local necrosis.
Management of local envenoming
Bullae can be drained with a fine needle. The bitten limb should be nursed in the most comfortable position. Once definite signs of necrosis have appeared (blackened anaesthetic area with putrid odour or signs of sloughing), surgical debridement, immediate split skin grafting and broad-spectrum antimicrobial cover are indicated. Increased pressure within tight fascial compartments such as the digital pulp spaces and anterior tibial compartment may cause ischaemic damage. This complication is most likely after bites by North American rattlesnakes such as C. adamanteus, and after bites by Calloselasma rhodostoma, Trimeresurus flavoviridis, Bothrops sp and Bitis arietans. The signs are excessive pain, weakness of the compartmental muscles and pain when they are passively stretched, hypaesthesia of areas of skin supplied by nerves running through the compartment, and obvious tenseness of the compartment. Detection of arterial pulses (e.g., by Doppler ultrasound) does not exclude intracompartmental ischaemia. Intracompartmental pressures exceeding 45 mm Hg are associated with a high risk of ischaemic necrosis. In these circumstances, fasciotomy may be considered, but it must not be attempted until blood coagulability has been restored and the platelet count exceeds 50,000/μl. Early adequate antivenom treatment will prevent the development of intracompartmental syndromes in most cases.
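As an illustration only, and not clinical guidance, the two numerical thresholds quoted above can be expressed as a simple check; the function name and arguments are assumptions introduced here.

```python
# Illustrative sketch only, not clinical guidance: the two numerical thresholds
# quoted in the text for considering fasciotomy, as a simple check.

PRESSURE_THRESHOLD_MMHG = 45         # intracompartmental pressure (from text)
PLATELET_THRESHOLD_PER_UL = 50_000   # minimum platelet count (from text)

def fasciotomy_may_be_considered(pressure_mmhg: float,
                                 coagulable: bool,
                                 platelets_per_ul: float) -> bool:
    """True only if the pressure is critical AND haemostasis has been restored."""
    return (pressure_mmhg > PRESSURE_THRESHOLD_MMHG
            and coagulable
            and platelets_per_ul > PLATELET_THRESHOLD_PER_UL)

print(fasciotomy_may_be_considered(50, coagulable=True, platelets_per_ul=80_000))  # True
```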
Haemostatic disturbances
Once specific antivenom has been given to neutralize venom procoagulants, restoration of coagulability and platelet function may be accelerated by giving fresh whole blood, fresh frozen plasma, cryoprecipitates (containing fibrinogen, factor VIII, fibronectin and some factors V and XIII) or platelet concentrates. Heparin must not be used. Corticosteroids have no place in the treatment of envenoming.
Treatment of snake venom ophthalmia
When cobra venom is “spat” into the eyes, first aid consists of irrigation with generous volumes of water or any other bland liquid that is available. Adrenaline drops (0.1%) may relieve the pain. Unless a corneal abrasion can be excluded by fluorescein staining or slit lamp examination, treatment should be the same as for any corneal injury: a topical antimicrobial such as tetracycline or chloramphenicol should be applied. Instillation of diluted antivenom is not currently recommended.
J.A. Rioux and B. Juminer*
*Adapted from 3rd edition, Encyclopaedia of Occupational Health and Safety.
Millions of scorpion stings and anaphylactic reactions to insect stings occur worldwide each year, causing tens of thousands of deaths. Between 30,000 and 45,000 cases of scorpion sting are reported annually in Tunisia, causing between 35 and 100 deaths, mostly among children. Envenomation (the toxic effect of venom) is an occupational hazard for populations involved in agriculture and forestry in the regions concerned.
Among the animals that can inflict injury on humans by the action of their venom are invertebrates, such as Arachnida (spiders, scorpions and sun spiders), Acarina (ticks and mites), Chilopoda (centipedes) and Hexapoda (bees, wasps, butterflies, and midges).
Invertebrates
Arachnida (spiders—Aranea)
All species are venomous, but in practice only a few types produce injury in humans. Spider poisoning may be of two types: neurotoxic (typified by the widow spiders, Latrodectus) and necrotic (typified by the recluse spiders, Loxosceles).
Prevention. In areas where there is a danger of venomous spiders, sleeping accommodation should be provided with mosquito nets and workers should be equipped with footwear and working clothes that give adequate protection.
Scorpions (Scorpionida)
These arachnids have a sharp poison claw on the end of the abdomen with which they can inflict a painful sting, the seriousness of which varies according to the species, the amount of venom injected and the season (the most dangerous season being at the end of the scorpions’ hibernation period). In the Mediterranean region, South America and Mexico, the scorpion is responsible for more deaths than poisonous snakes. Many species are nocturnal and are less aggressive during the day. The most dangerous species (Buthidae) are found in arid and tropical regions; their venom is neurotropic and highly toxic. In all cases, the scorpion sting immediately produces intense local signs (acute pain, inflammation) followed by general manifestations such as tendency to fainting, salivation, sneezing, lachrymation and diarrhoea. In young children the course is often fatal. The most dangerous species are found amongst the genera Androctonus (North Africa and the Middle East), Centruroides (Mexico) and Tityus (Brazil). The scorpion will not spontaneously attack humans, and stings only when it feels threatened, as when trapped in a dark corner or when boots or clothes in which it has taken refuge are shaken or put on. Scorpions are highly sensitive to halogenated pesticides (e.g., DDT).
Sun spiders (Solpugida)
This order of arachnids is found chiefly in steppe and sub-desert zones such as the Sahara, the Andes, Asia Minor, Mexico and Texas, and is non-venomous; nevertheless, sun spiders are extremely aggressive, may be as large as 10 cm across and have a fearsome appearance. In exceptional cases, the wounds they inflict may prove serious because they are multiple. Solpugids are nocturnal predators and may attack a sleeping individual.
Ticks and mites (Acarina)
Ticks are blood-sucking arachnids at all stages of their life cycle, and the “saliva” they inject through their feeding organs may have a toxic effect. Poisoning may be severe, mainly in children (tick paralysis), and may be accompanied by suppression of reflexes. In exceptional cases death may ensue from bulbar paralysis (in particular where a tick has attached itself to the scalp). Mites are haematophagous only at the larval stage, and their bite produces pruritic inflammation of the skin. The incidence of mite bites is high in tropical regions.
Treatment. Ticks should be detached after they have been anaesthetized with a drop of benzene, ethyl ether or xylene. Prevention is based on the use of organophosphorus pesticides and repellents.
Centipedes (Chilopoda)
Centipedes differ from millipedes (Diplopoda) in that they have only one pair of legs per body segment and that the appendages of the first body segment are poison fangs. The most dangerous species are encountered in the Philippines. Centipede venom has only a localized effect (painful oedema).
Treatment. Bites should be treated with topical applications of dilute ammonia, permanganate or hypochlorite lotions. Antihistamines may also be administered.
Insects (Hexapoda)
Insects may inject venom via the mouthparts (Simuliidae—black flies, Culicidae—mosquitoes, Phlebotomus—sandflies) or via the sting (bees, wasps, hornets, carnivorous ants). They may cause rashes with their hairs (caterpillars, butterflies), or they may produce blisters with their haemolymph (Cantharidae—blister flies and Staphylinidae—rove beetles).

Black fly bites produce necrotic lesions, sometimes with systemic disturbances; mosquito bites produce diffuse pruriginous lesions. The stings of Hymenoptera (bees, etc.) produce intense local pain with erythema, oedema and, sometimes, necrosis. Systemic effects may result from sensitization or from multiple stings (shivering, nausea, dyspnoea, chilling of the extremities). Stings on the face or the tongue are particularly serious and may cause death by asphyxiation due to glottal oedema. Caterpillars and butterflies may cause generalized pruriginous skin lesions of an urticarial or oedematous type (Quincke’s oedema), sometimes accompanied by conjunctivitis; superimposed infection is not infrequent. The haemolymph of blister flies and rove beetles (e.g., Paederus) produces vesicular or bullous skin lesions, and there is also a danger of visceral complications (toxic nephritis).

Certain insects, such as the Hymenoptera and caterpillars, are found in all parts of the world; other groups are more localized. Dangerous butterflies are found mainly in Guyana and the Central African Republic; blister flies in Japan, South America and Kenya; black flies in the intertropical regions and in central Europe; and sandflies in the Middle East.
Prevention. First-level prevention includes mosquito nets and the application of repellents and/or insecticides. Workers who are heavily exposed to insect stings and who are allergic can be desensitized by the administration of increasingly large doses of insect body extract.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."