There are many forms of compensation used in business and government organizations throughout the world to pay workers for their physical and mental contribution. Compensation provides money for human effort and is necessary for individual and family existence in most societies. Trading work for money is a long-established practice.
The health-stressor aspect of compensation is most closely linked with compensation plans that offer incentives for extra or sustained human effort. Job stress can certainly exist in any work setting where compensation is not based on incentives. However, physical and mental performance levels that are well above normal and that could lead to physical injury or injurious mental stress are more likely to be found in environments with certain kinds of incentive compensation.
Performance Measures and Stress
Performance measurements in one form or another are used by most organizations, and are essential for incentive programmes. Performance measures (standards) can be established for output, quality, throughput time, or any other productivity measure. Lord Kelvin in 1883 had this to say about measurements: “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.”
Performance measures should be carefully linked to the fundamental goals of the organization. Inappropriate performance measurements have often had little or no effect on goal attainment. Some common criticisms of performance measures include unclear purpose, vagueness, lack of connection (or even opposition) to the business strategy, unfairness or inconsistency, and their liability to be used chiefly for “punishing” people. But measurements can serve as indispensable benchmarks: remember the saying, “If you don’t know where you are, you can’t get to where you want to be”. The bottom line is that workers at all levels in an organization tend to exhibit more of the behaviours that they are measured on and rewarded for. What gets measured and rewarded gets done.
Performance measures must be fair and consistent to minimize stress among the workforce. There are several methods utilised to establish performance measures, ranging from judgement estimation (guessing) to engineered work measurement techniques. Under the work measurement approach to setting performance measures, 100% performance is defined as a “fair day’s work pace”. This is the work effort and skill at which an average well-trained employee can work without undue fatigue while producing an acceptable quality of work over the course of a work shift. A 100% performance is not maximum performance; it is the normal or average effort and skill for a group of workers. By way of comparison, the 70% benchmark is generally regarded as the minimum tolerable level of performance, while the 120% benchmark is the incentive effort and skill that the average worker should be able to attain when provided with a bonus of at least 20% above the base rate of pay. While a number of incentive plans have been established using the 120% benchmark, this value varies among plans. The general design criteria recommended for wage incentive plans provide workers with the opportunity to earn approximately 20 to 35% above base rate if they are normally skilled and exert high effort continuously.
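As an arithmetic illustration of these benchmarks, the following minimal sketch (with purely hypothetical figures and a simple proportional plan that is not drawn from any particular scheme) shows how output measured against an engineered standard might translate into incentive earnings:

```python
# Illustrative sketch of an incentive pay calculation (hypothetical plan).
# Assumes a simple proportional design: base rate up to 100% performance,
# pay rising in proportion to performance above 100%.

def performance_pct(units_produced, standard_units):
    """Performance as a percentage of the engineered standard (100% = fair day's work)."""
    return 100.0 * units_produced / standard_units

def incentive_pay(base_rate, performance):
    """Hourly pay: base rate below 100%, base rate plus a proportional bonus above 100%."""
    if performance <= 100.0:
        return base_rate
    return base_rate * (performance / 100.0)

# Example: engineered standard of 50 units/hour, worker produces 60 units/hour.
perf = performance_pct(60, 50)      # 120%, the common incentive benchmark
pay = incentive_pay(15.00, perf)    # 15.00 * 1.20 = 18.00, i.e. 20% above base
print(f"performance: {perf:.0f}%, hourly pay: {pay:.2f}")
```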
Despite the inherent appeal of a “fair day’s work for a fair day’s pay”, some possible stress problems exist with a work measurement approach to setting performance measures. Performance measures are fixed in reference to the normal or average performance of a given work group (i.e., work standards based on group as opposed to individual performance). Thus, by definition, a large segment of those working at a task will fall below average (i.e., the 100% performance benchmark), generating a demand–resource imbalance that can exceed physical or mental stress limits. Workers who have difficulty meeting performance measures are likely to experience stress through work overload, negative supervisor feedback, and threat of job loss if they consistently perform below the 100% performance benchmark.
Incentive Programmes
In one form or another, incentives have been used for many years. For example, in the New Testament (II Timothy 2:6) Saint Paul declares, “It is the hard-working farmer who ought to have the first share of the crops”. Today, most organizations are striving to improve productivity and quality in order to maintain or improve their position in the business world. Most often workers will not give extra or sustained effort without some form of incentive. Properly designed and implemented financial incentive programmes can help. Before any incentive programme is implemented, some measure of performance must be established. All incentive programmes can be categorized as follows: direct financial, indirect financial, and intangible (non-financial).
Direct financial programmes may be applied to individuals or groups of workers. For individuals, each employee’s incentive is governed by his or her performance relative to a standard for a given time period. Group plans are applicable to two or more individuals working as a team on tasks that are usually interdependent. Each employee’s group incentive is usually based on his or her base rate and the group performance during the incentive period.
The motivation to sustain higher output levels is usually greater for individual incentives because of the opportunity for the high-performing worker to earn a greater incentive. However, as organizations move toward participative management and empowered work groups and teams, group incentives usually provide the best overall results. The group effort makes improvements to the total system rather than optimizing individual outputs. Gainsharing (a group incentive system that uses teams for continuous improvement and provides a share, usually 50%, of all productivity gains above a benchmark standard) is one form of a direct group incentive programme that is well suited to the continuous improvement organization.
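To make the gainsharing arithmetic concrete, here is a minimal sketch; the 50% team share follows the typical design mentioned above, while the cost benchmark and team size are illustrative assumptions only:

```python
# Illustrative gainsharing sketch (hypothetical figures).
# The team receives a share (here 50%) of productivity gains above a benchmark standard.

def gainsharing_bonus_pool(benchmark_cost, actual_cost, team_share=0.50):
    """Bonus pool = team share of the cost saved relative to the benchmark."""
    gain = benchmark_cost - actual_cost
    return max(gain, 0.0) * team_share

# Example: benchmark labour cost for the period is 200,000; actual cost is 180,000.
pool = gainsharing_bonus_pool(200_000, 180_000)   # 50% of the 20,000 gain = 10,000
per_member = pool / 25                            # split across a 25-person team
print(f"bonus pool: {pool:.0f}, per member: {per_member:.0f}")
```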
Indirect financial programmes are usually less effective than direct financial programmes because direct financial incentives are stronger motivators. The principal advantage of indirect plans is that they require less detailed and accurate performance measures. Organizational policies that favourably affect morale, result in increased productivity and provide some financial benefit to employees are considered to be indirect incentive programmes. It is important to note that for indirect financial programmes no exact relationship exists between employee output and financial incentives. Examples of indirect incentive programmes include relatively high base rates, generous fringe benefits, awards programmes, year-end bonuses and profit-sharing.
Intangible incentive programmes include rewards that do not have any (or very little) financial impact on employees. These programmes, however, when viewed as desirable by the employees, can improve productivity. Examples of intangible incentive programmes include job enrichment (adding challenge and intrinsic satisfaction to the specific task assignments), job enlargement (adding tasks to complete a “whole” piece or unit of work output), nonfinancial suggestion plans, employee involvement groups and time off without any reduction in pay.
Summary and Conclusions
Incentives in some form are an integral part of many compensation plans. In general, incentive plans should be carefully evaluated to make sure that workers are not exceeding safe ergonomic or mental stress limits. This is particularly important for individual direct financial plans. It is usually a lesser problem in group direct, indirect or intangible plans.
Incentives are desirable because they enhance productivity and provide workers an opportunity to earn extra income or other benefits. Gainsharing is today one of the best forms of incentive compensation for any work group or team organization that wishes to offer bonus earnings and to achieve improvement in the workplace without risking the imposition of negative health-stressors by the incentive plan itself.
The organizational context in which people work is characterized by numerous features (e.g., leadership, structure, rewards, communication) subsumed under the general concepts of organizational climate and culture. Climate refers to perceptions of organizational practices reported by people who work there (Rousseau 1988). Studies of climate include many of the most central concepts in organizational research. Common features of climate include communication (describable, for example, by its openness), conflict (constructive or dysfunctional), leadership (as it involves support or focus) and reward emphasis (i.e., whether an organization is characterized by positive versus negative feedback, or reward- or punishment-orientation). When studied together, these organizational features prove to be highly interrelated (e.g., leadership and rewards). Climate characterizes practices at several levels in organizations (e.g., work unit climate and organizational climate). Studies of climate vary in the activities they focus upon, for example, climates for safety or climates for service. Climate is essentially a description of the work setting by those directly involved with it.
The relationship of climate to employee well-being (e.g., satisfaction, job stress and strain) has been widely studied. Since climate measures subsume the major organizational characteristics workers experience, virtually any study of employee perceptions of their work setting can be thought of as a climate study. Studies link climate features (particularly leadership, communication openness, participative management and conflict resolution) with employee satisfaction and (inversely) stress levels (Schneider 1985). Stressful organizational climates are characterized by limited participation in decisions, use of punishment and negative feedback (rather than rewards and positive feedback), conflict avoidance or confrontation (rather than problem solving), and nonsupportive group and leader relations. Socially supportive climates benefit employee mental health, with lower rates of anxiety and depression in supportive settings (Repetti 1987). When collective climates exist (where members who interact with each other share common perceptions of the organization) research observes that shared perceptions of undesirable organizational features are linked with low morale and instances of psychogenic illness (Colligan, Pennebaker and Murphy 1982). When climate research adopts a specific focus, as in the study of climate for safety in an organization, evidence is provided that lack of openness in communication regarding safety issues, few rewards for reporting occupational hazards, and other negative climate features increase the incidence of work-related accidents and injury (Zohar 1980).
Since climates exist at many levels in organizations and can encompass a variety of practices, assessment of employee risk factors needs to systematically span the relationships (whether in the work unit, the department or the entire organization) and activities (e.g., safety, communication or rewards) in which employees are involved. Climate-based risk factors can differ from one part of the organization to another.
Culture constitutes the values, norms and ways of behaving which organization members share. Researchers identify five basic elements of culture in organizations: fundamental assumptions (unconscious beliefs that shape members’ interpretations, e.g., views regarding time, environmental hostility or stability), values (preferences for certain outcomes over others, e.g., service or profit), behavioural norms (beliefs regarding appropriate and inappropriate behaviours, e.g., dress codes and teamwork), patterns of behaviours (observable recurrent practices, e.g., structured performance feedback and upward referral of decisions) and artefacts (symbols and objects used to express cultural messages, e.g., mission statements and logos). Cultural elements which are more subjective (i.e., assumptions, values and norms) reflect the way members think about and interpret their work setting. These subjective features shape the meaning that patterns of behaviours and artefacts take on within the organization. Culture, like climate, can exist at many levels in the organization.
Cultures can be strong (widely shared by members), weak (not widely shared), or in transition (characterized by gradual replacement of one culture by another).
In contrast with climate, culture is less frequently studied as a contributing factor to employee well-being or occupational risk. The absence of such research is due both to the relatively recent emergence of culture as a concept in organizational studies and to ideological debates regarding the nature of culture, its measurement (quantitative versus qualitative), and the appropriateness of the concept for cross-sectional study (Rousseau 1990). According to quantitative culture research focusing on behavioural norms and values, team-oriented norms are associated with higher member satisfaction and lower strain than are control- or bureaucratically oriented norms (Rousseau 1989). Furthermore, the extent to which the worker’s values are consistent with those of the organization affects stress and satisfaction (O’Reilly and Chatman 1991). Weak cultures and cultures fragmented by role conflict and member disagreement are found to provoke stress reactions and crises in professional identities (Meyerson 1990). The fragmentation or breakdown of organizational cultures due to economic or political upheavals affects the well-being of members psychologically and physically, particularly in the wake of downsizings, plant closings and other effects of concurrent organizational restructurings (Hirsch 1987). The appropriateness of particular cultural forms (e.g., hierarchic or militaristic) for modern society has been challenged by several culture studies (e.g., Hirschhorn 1984; Rousseau 1989) concerned with the stress and health-related outcomes of operators (e.g., nuclear power technicians and air traffic controllers) and subsequent risks for the general public.
Assessing risk factors in the light of information about organizational culture requires first attention to the extent to which organization members share or differ in basic beliefs, values and norms. Differences in function, location and education create subcultures within organizations and mean that culture-based risk factors can vary within the same organization. Since cultures tend to be stable and resistant to change, organizational history can aid assessment of risk factors both in terms of stable and ongoing cultural features as well as recent changes that can create stressors associated with turbulence (Hirsch 1987).
Climate and culture overlap to a certain extent, with perceptions of culture’s patterns of behaviour being a large part of what climate research addresses. However, organization members may describe organizational features (climate) in the same way but interpret them differently due to cultural and subcultural influences (Rosen, Greenlagh and Anderson 1981). For example, structured leadership and limited participation in decision making may be viewed as negative and controlling from one perspective or as positive and legitimate from another. Social influence reflecting the organization’s culture shapes the interpretation members make of organizational features and activities. Thus, it would seem appropriate to assess both climate and culture simultaneously in investigating the impact of the organization on the well-being of members.
Most of the articles in this chapter deal with aspects of the work environment that are proximal to the individual employee. The focus of this article, however, is to examine the impact of more distal, macrolevel characteristics of organizations as a whole that may affect employees’ health and well-being. That is, are there ways in which organizations structure their internal environments that promote health among the employees of that organization or, conversely, place employees at greater risk of experiencing stress? Most theoretical models of occupational or job stress incorporate organizational structural variables such as organizational size, lack of participation in decision making, and formalization (Beehr and Newman 1978; Kahn and Byosiere 1992).
Organizational structure refers to the formal distribution of work roles and functions within an organization, coordinating the various functions or subsystems within the organization to efficiently attain the organization’s goals (Porras and Robertson 1992). As such, structure represents a coordinated set of subsystems to facilitate the accomplishment of the organization’s goals and mission and defines the division of labour, the authority relationships, formal lines of communication, the roles of each organizational subsystem and the interrelationships among these subsystems. Therefore, organizational structure can be viewed as a system of formal mechanisms that enhance the understandability of events, the predictability of events and control over events within the organization, which Sutton and Kahn (1987) proposed as the three work-relevant antidotes to the stress–strain effect in organizational life.
One of the earliest organizational characteristics examined as a potential risk factor was organizational size. Contrary to the literature on risk of exposure to hazardous agents in the work environment, which suggests that larger organizations or plants are safer, being less hazardous and better equipped to handle potential hazards (Emmett 1991), larger organizations originally were hypothesized to put employees at greater risk of occupational stress. It was proposed that larger organizations tend to adopt a bureaucratic organizational structure to coordinate the increased complexity. This bureaucratic structure would be characterized by a division of labour based on functional specialization, a well-defined hierarchy of authority, a system of rules covering the rights and duties of job incumbents, impersonal treatment of workers and a system of procedures for dealing with work situations (Bennis 1969). On the surface, it would appear that many of these dimensions of bureaucracy would actually improve or maintain the predictability and understandability of events in the work environment and thus serve to reduce stress within the work environment. However, it also appears that these dimensions can reduce employees’ control over events in the work environment through a rigid hierarchy of authority.
Given these characteristics of bureaucratic structure, it is not surprising that organizational size, per se, has received no consistent support as a macro-organization risk factor (Kahn and Byosiere 1992). Payne and Pugh’s (1976) review, however, provides some evidence that organizational size indirectly increases the risk of stress. They report that larger organizations suffered a reduction in the amount of communication, an increase in the amount of job and task specifications and a decrease in coordination. These effects could lead to less understanding and predictability of events in the work environment as well as a decrease in control over work events, thus increasing experienced stress (Tetrick and LaRocco 1987).
These findings on organizational size have led to the supposition that the two aspects of organizational structure that seem to pose the most risk for employees are formalization and centralization. Formalization refers to the written procedures and rules governing employees’ activities, and centralization refers to the extent to which decision-making power in the organization is narrowly distributed to higher levels in the organization. Pines (1982) pointed out that it is not formalization within a bureaucracy that results in experienced stress or burnout but the unnecessary red tape, paperwork and communication problems that can result from formalization. Rules and regulations can be vague, creating ambiguity, or contradictory, resulting in conflict or a lack of understanding concerning the appropriate actions to be taken in specific situations. If the rules and regulations are too detailed, employees may feel frustrated in their ability to achieve their goals, especially in customer- or client-oriented organizations. Inadequate communication can result in employees feeling isolated and alienated based on the lack of predictability and understanding of events in the work environment.
While these aspects of the work environment appear to be accepted as potential risk factors, the empirical literature on formalization and centralization is far from consistent. The lack of consistent evidence may stem from at least two sources. First, in many of the studies, there is an assumption of a single organizational structure having a consistent level of formalization and centralization throughout the entire organization. Hall (1969) concluded that organizations can be meaningfully studied as totalities; however, he demonstrated that the degree of formalization as well as decision-making authority can differ within organizational units. Therefore, if one is looking at an individual-level phenomenon such as occupational stress, it may be more meaningful to look at the structure of smaller organizational units than that of the whole organization. Secondly, there is some evidence suggesting that there are individual differences in response to structural variables. For example, Marino and White (1985) found that formalization was positively related to job stress among individuals with an internal locus of control and negatively related to stress among individuals who generally believe that they have little control over their environments. Lack of participation, on the other hand, was not moderated by locus of control and resulted in increased levels of job stress. There also appear to be some cultural differences affecting individual responses to structural variables, which would be important for multinational organizations having to operate across national boundaries (Peterson et al. 1995). These cultural differences also may explain the difficulty in adopting organizational structures and procedures from other nations.
Despite the rather limited empirical evidence implicating structural variables as psychosocial risk factors, it has been recommended that organizations should change their structures to be flatter with fewer levels of hierarchy and fewer communication channels, more decentralized with more decision-making authority at lower levels in the organization and more integrated with less job specialization (Newman and Beehr 1979). These recommendations are consistent with organizational theorists who have suggested that the traditional bureaucratic structure may not be the most efficient or healthiest form of organizational structure (Bennis 1969). This may be especially true in light of the technological advances in production and communication that characterize the postindustrial workplace (Hirschhorn 1991).
The past two decades have seen considerable interest in the redesign of organizations to deal with external environmental threats resulting from increased globalization and international competition in North America and Western Europe (Whitaker 1991). Staw, Sandelands and Dutton (1988) proposed that organizations react to environmental threats by restricting information and constricting control. This can be expected to reduce the predictability, understandability and control of work events, thereby increasing the stress experienced by the employees of the organization. Therefore, structural changes that prevent these threat-rigidity effects would appear to be beneficial to both the organization’s and employees’ health and well-being.
The use of a matrix organizational structure is one approach for organizations to structure their internal environments in response to greater environmental instability. Baber (1983) describes the ideal type of matrix organization as one in which there are two or more intersecting lines of authority, organizational goals are achieved through the use of task-oriented work groups which are cross-functional and temporary, and functional departments continue to exist as mechanisms for routine personnel functions and professional development. Therefore, the matrix organization provides the organization with the needed flexibility to be responsive to environmental instability if the personnel have sufficient flexibility gained from the diversification of their skills and an ability to learn quickly.
While empirical research has yet to establish the effects of this organizational structure, several authors have suggested that the matrix organization may increase the stress experienced by employees. For example, Quick and Quick (1984) point out that the multiple lines of authority (task and functional supervisors) found in matrix organizations increase the potential for role conflict. Also, Hirschhorn (1991) suggests that with postindustrial work organizations, workers frequently face new challenges requiring them to take a learning role. This results in employees having to acknowledge their own temporary incompetencies and loss of control which can lead to increased stress. Therefore, it appears that new organizational structures such as the matrix organization also have potential risk factors associated with them.
Attempts to change or redesign organizations, regardless of the particular structure that an organization chooses to adopt, can have stress-inducing properties by disrupting security and stability, generating uncertainty for people’s position, role and status, and exposing conflict which must be confronted and resolved (Golembiewski 1982). These stress-inducing properties can be offset, however, by the stress-reducing properties of organizational development which incorporate greater empowerment and decision making across all levels in the organization, enhanced openness in communication, collaboration and training in team building and conflict resolution (Golembiewski 1982; Porras and Robertson 1992).
Conclusion
While the literature suggests that there are occupational risk factors associated with various organizational structures, the impact of these macrolevel aspects of organizations appears to be indirect. Organizational structure can provide a framework to enhance the predictability, understandability and control of events in the work environment; however, the effect of structure on employees’ health and well-being is mediated by more proximal work-environment characteristics such as role characteristics and interpersonal relations. Structuring organizations for healthy employees as well as healthy organizations requires organizational flexibility, worker flexibility and attention to the sociotechnical systems that coordinate the technological demands and the social structure within the organization.
Selye (1974) suggested that having to live with other people is one of the most stressful aspects of life. Good relations between members of a work group are considered a central factor in individual and organizational health (Cooper and Payne 1988) particularly in terms of the boss–subordinate relationship. Poor relationships at work are defined as having “low trust, low levels of supportiveness and low interest in problem solving within the organization” (Cooper and Payne 1988). Mistrust is positively correlated with high role ambiguity, which leads to inadequate interpersonal communications between individuals and psychological strain in the form of low job satisfaction, decreased well-being and a feeling of being threatened by one’s superior and colleagues (Kahn et al. 1964; French and Caplan 1973).
Supportive social relationships at work are less likely to create the interpersonal pressures associated with rivalry, office politics and unconstructive competition (Cooper and Payne 1991). McLean (1979) suggests that social support in the form of group cohesion, interpersonal trust and liking for a superior is associated with decreased levels of perceived job stress and better health. Inconsiderate behaviour on the part of a supervisor appears to contribute significantly to feelings of job pressure (McLean 1979). Close supervision and rigid performance monitoring also have stressful consequences—in this connection a great deal of research has been carried out which indicates that a managerial style characterized by lack of effective consultation and communication, unjustified restrictions on employee behaviour, and lack of control over one’s job is associated with negative psychological moods and behavioural responses (for example, escapist drinking and heavy smoking) (Caplan et al. 1975), increased cardiovascular risk (Karasek 1979) and other stress-related manifestations. On the other hand, offering broader opportunities to employees to participate in decision making at work can result in improved performance, lower staff turnover and improved levels of mental and physical well-being. A participatory style of management should also extend to worker involvement in the improvement of safety in the workplace; this could help to overcome apathy among blue-collar workers, which is acknowledged as a significant factor in the cause of accidents (Robens 1972; Sutherland and Cooper 1986).
Early work on the relationship between managerial style and stress was carried out by Lewin and his colleagues (for example, Lewin, Lippitt and White 1939), who documented the stressful and unproductive effects of authoritarian management styles. More recently, Karasek’s (1979) work highlights the importance of managers’ providing workers with greater control at work or a more participative management style. In a six-year prospective study he demonstrated that job control (i.e., the freedom to use one’s intellectual discretion) and work schedule freedom were significant predictors of risk of coronary heart disease. Restriction of opportunity for participation and autonomy results in increased depression, exhaustion, illness rates and pill consumption. Feelings of being unable to make changes concerning a job and lack of consultation are commonly reported stressors among blue-collar workers in the steel industry (Kelly and Cooper 1981), oil and gas workers on rigs and platforms in the North Sea (Sutherland and Cooper 1986) and many other blue-collar workers (Cooper and Smith 1985). On the other hand, as Gowler and Legge (1975) indicate, a participatory management style can create its own potentially stressful situations, for example, a mismatch of formal and actual power, resentment of the erosion of formal power, conflicting pressures both to be participative and to meet high production standards, and subordinates’ refusal to participate.
Although there has been a substantial research focus on the differing effects of authoritarian and participatory management styles on employee performance and health, there have also been other, idiosyncratic approaches to managerial style (Jennings, Cox and Cooper 1994). For example, Levinson (1978) has focused on the impact of the “abrasive” manager. Abrasive managers are usually achievement-oriented, hard-driving and intelligent (similar to the type A personality), but function less well at the emotional level. As Quick and Quick (1984) point out, the need for perfection, the preoccupation with self and the condescending, critical style of the abrasive manager induce feelings of inadequacy among subordinates. As Levinson suggests, the abrasive personality as a peer is both difficult and stressful to deal with, but as a superior, the consequences are potentially very damaging to interpersonal relationships and highly stressful for subordinates in the organization.
In addition, there are theories and research which suggest that the effect of managerial style and personality on employee health and safety can only be understood in the context of the nature of the task and the power of the manager or leader. For example, Fiedler’s (1967) contingency theory suggests that there are eight main group situations based upon combinations of three dichotomies: (a) the warmth of the relations between the leader and follower; (b) the level of structure imposed by the task; and (c) the power of the leader. The eight combinations can be arranged in a continuum with, at one end (octant one), a leader who has good relations with members, facing a highly structured task and possessing strong power; and, at the other end (octant eight), a leader who has poor relations with members, facing a loosely structured task and having low power. In terms of stress, it could be argued that the octants form a continuum from low stress to high stress. Fiedler also examined two types of leader: the leader who would value negatively most of the characteristics of the member he liked least (the low LPC leader) and the leader who would see many positive qualities even in the members whom he disliked (the high LPC leader). Fiedler made specific predictions about the performance of the leader. He suggested that the low LPC leader (who had difficulty in seeing merits in subordinates he disliked) would be most effective in octants one and eight, where there would be very low and very high levels of stress, respectively. On the other hand, a high LPC leader (who is able to see merits even in those he disliked) would be more effective in the middle octants, where moderate stress levels could be expected. In general, subsequent research (for example, Strube and Garcia 1981) has supported Fiedler’s ideas.
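Since the octants are simply the eight combinations of the three dichotomies, the structure of the continuum can be illustrated with a short sketch; the labels and the ordering of the middle octants here are illustrative assumptions derived from the description above, not Fiedler’s own tabulation:

```python
# Illustrative enumeration of the eight group situations (octants) formed by
# Fiedler's three dichotomies. Octant 1 = good relations, structured task,
# strong position power; octant 8 = poor relations, unstructured task, weak power.
from itertools import product

leader_member_relations = ["good", "poor"]
task_structure = ["structured", "unstructured"]
position_power = ["strong", "weak"]

combinations = product(leader_member_relations, task_structure, position_power)
for octant, (relations, task, power) in enumerate(combinations, start=1):
    print(f"octant {octant}: relations={relations}, task={task}, power={power}")
```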
Additional leadership theories suggest that task-oriented managers or leaders create stress. Seltzer, Numerof and Bass (1989) found that intellectually stimulating leaders increased perceived stress and “burnout” among their subordinates. Misumi (1985) found that production-oriented leaders generated physiological symptoms of stress. Bass (1992) found that in laboratory experiments, production-oriented leadership causes higher levels of anxiety and hostility. On the other hand, transformational and charismatic leadership theories (Burns 1978) focus upon the effect which such leaders have upon their subordinates, who are generally more self-assured and perceive more meaning in their work. It has been found that these types of leader or manager reduce the stress levels of their subordinates.
On balance, therefore, managers who tend to demonstrate “considerate” behaviour, to have a participative management style, to be less production- or task-oriented and to provide subordinates with control over their jobs are likely to reduce the incidence of ill health and accidents at work.
One of the more remarkable social transformations of this century was the emergence of a powerful Japanese economy from the debris of the Second World War. Fundamental to this climb to global competitiveness were a commitment to quality and a determination to prove false the then-common belief that Japanese goods were shoddy and worthless. Guided by the innovative teachings of Deming (1993), Juran (1988) and others, Japanese managers and engineers adopted practices that have ultimately evolved into a comprehensive management system rooted in the basic concept of quality. Fundamentally, this system represents a shift in thinking. The traditional view was that quality had to be balanced against the cost of attaining it. The view that Deming and Juran urged was that higher quality led to lower total cost and that a systems approach to improving work processes would help in attaining both of these objectives. Japanese managers adopted this management philosophy, engineers learned and practised statistical quality control, workers were trained and involved in process improvement, and the outcome was dramatic (Ishikawa 1985; Imai 1986).
By 1980, alarmed at the erosion of their markets and seeking to broaden their reach in the global economy, European and American managers began to search for ways to regain a competitive position. In the ensuing 15 years, more and more companies came to understand the principles underlying quality management and to apply them, initially in industrial production and later in the service sector as well. While there are a variety of names for this management system, the most commonly used is total quality management or TQM; an exception is the health care sector, which more frequently uses the term continuous quality improvement, or CQI. Recently, the term business process reengineering (BPR) has also come into use, but this tends to mean an emphasis on specific techniques for process improvement rather than on the adoption of a comprehensive management system or philosophy.
TQM is available in many “flavours,” but it is important to understand it as a system that includes both a management philosophy and a powerful set of tools for improving the efficiency of work processes. A number of common elements of TQM have been identified in the literature (Feigenbaum 1991; Mann 1989; Senge 1991).
Typically, organizations successfully adopting TQM find they must make changes on three fronts.
One is transformation. This involves such actions as defining and communicating a vision of the organization’s future, changing the management culture from top-down oversight to one of employee involvement, fostering collaboration instead of competition and refocusing the purpose of all work on meeting customer requirements. Seeing the organization as a system of interrelated processes is at the core of TQM, and is an essential means of securing a totally integrated effort towards improving performance at all levels. All employees must know the vision and the aim of the organization (the system) and understand where their work fits in it, or no amount of training in applying TQM process improvement tools can do much good. However, lack of genuine change of organizational culture, particularly among lower echelons of managers, is frequently the downfall of many nascent TQM efforts; Heilpern (1989) observes, “We have come to the conclusion that the major barriers to quality superiority are not technical, they are behavioural.” Unlike earlier, flawed “quality circle” programmes, in which improvement was expected to “convect” upward, TQM demands top management leadership and the firm expectation that middle management will facilitate employee participation (Hill 1991).
A second basis for successful TQM is strategic planning. The achievement of an organization’s vision and goals is tied to the development and deployment of a strategic quality plan. One corporation defined this as “a customer-driven plan for the application of quality principles to key business objectives and the continuous improvement of work processes” (Yarborough 1994). It is senior management’s responsibility—indeed, its obligation to workers, stockholders and beneficiaries alike—to link its quality philosophy to sound and feasible goals that can reasonably be attained. Deming (1993) called this “constancy of purpose” and saw its absence as a source of insecurity for the workforce of the organization. The fundamental intent of strategic planning is to align the activities of all of the people throughout the company or organization so that it can achieve its core goals and can react with agility to a changing environment. It is evident that it both requires and reinforces the need for widespread participation of supervisors and workers at all levels in shaping the goal-directed work of the company (Shiba, Graham and Walden 1994).
Only when these two changes are adequately carried out can one hope for success in the third: the implementation of continuous quality improvement. Quality outcomes, and with them customer satisfaction and improved competitive position, ultimately rest on widespread deployment of process improvement skills. Often, TQM programmes accomplish this through increased investments in training and through assignment of workers (frequently volunteers) to teams charged with addressing a problem. A basic concept of TQM is that the person most likely to know how a job can be done better is the person who is doing it at a given moment. Empowering these workers to make useful changes in their work processes is a part of the cultural transformation underlying TQM; equipping them with knowledge, skills and tools to do so is part of continuous quality improvement.
The collection of statistical data is a typical and basic step taken by workers and teams to understand how to improve work processes. Deming and others adapted their techniques from the seminal work of Shewhart in the 1920s (Schmidt and Finnigan 1992). Among the most useful TQM tools are: (a) the Pareto Chart, a graphical device for identifying the more frequently occurring problems, and hence the ones to be addressed first; (b) the statistical control chart, an analytic tool for ascertaining the degree of variability in the unimproved process; and (c) flow charting, a means to document exactly how the process is carried out at present. Possibly the most ubiquitous and important tool is the Ishikawa Diagram (or “fishbone” diagram), whose invention is credited to Kaoru Ishikawa (1985). This instrument is a simple but effective way by which team members can collaborate on identifying the root causes of the process problem under study, and thus point the path to process improvement.
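As a minimal sketch of how two of these tools work in practice, the example below ranks hypothetical defect categories Pareto-style and computes simple three-sigma limits on a count-per-day series; the data and the simplified limit calculation are illustrative assumptions, not a full control-chart methodology:

```python
# Illustrative sketch of two TQM tools on hypothetical data:
# (a) Pareto ordering of defect categories, (b) three-sigma limits for a
# simple count-per-day series (a simplification of a formal control chart).
from statistics import mean, stdev

defects = {"misaligned label": 7, "scratched casing": 41, "loose screw": 12, "wrong colour": 3}

# (a) Pareto analysis: rank categories so the most frequent problems are tackled first.
for category, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{category}: {count}")

# (b) Control limits for daily defect counts in the unimproved process.
daily_counts = [9, 11, 8, 14, 10, 7, 12, 9, 13, 10]
centre = mean(daily_counts)
sigma = stdev(daily_counts)
upper, lower = centre + 3 * sigma, max(centre - 3 * sigma, 0)
print(f"centre line: {centre:.1f}, control limits: {lower:.1f} to {upper:.1f}")
```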
TQM, effectively implemented, may be important to workers and worker health in many ways. For example, the adoption of TQM can have an indirect influence. In a very basic sense, an organization that makes a quality transformation has arguably improved its chances of economic survival and success, and hence those of its employees. Moreover, it is likely to be one where respect for people is a basic tenet. Indeed, TQM experts often speak of “shared values”, those things that must be exemplified in the behaviour of both management and workers. These are often publicized throughout the organization as formal values statements or aspiration statements, and typically include such emotive language as “trust”, “respecting each other”, “open communications”, and “valuing our diversity” (Howard 1990).
Thus, it is tempting to suppose that quality workplaces will be “worker-friendly”—where worker-improved processes become less hazardous and where the climate is less stressful. The logic of quality is to build quality into a product or service, not to detect failures after the fact. It can be summed up in a word—prevention (Widfeldt and Widfeldt 1992). Such a logic is clearly compatible with the public health logic of emphasizing prevention in occupational health. As Williams (1993) points out in a hypothetical example, “If the quality and design of castings in the foundry industry were improved there would be reduced exposure ... to vibration as less finishing of castings would be needed.” Some anecdotal support for this supposition comes from satisfied employers who cite trend data on job health measures, climate surveys that show better employee satisfaction, and more numerous safety and health awards in facilities using TQM. Williams further presents two case studies in UK settings that exemplify such employer reports (Williams 1993).
Unfortunately, virtually no published studies offer firm evidence on the matter. What is lacking is a research base of controlled studies that document health outcomes, consider the possibility of detrimental as well as positive health influences, and link all of this causally to measurable factors of business philosophy and TQM practice. Given the significant prevalence of TQM enterprises in the global economy of the 1990s, this is a research agenda with genuine potential to define whether TQM is in fact a supportive tool in the prevention armamentarium of occupational safety and health.
We are on somewhat firmer ground to suggest that TQM can have a direct influence on worker health when it explicitly focuses quality improvement efforts on safety and health. Obviously, like all other work in an enterprise, occupational and environmental health activity is made up of interrelated processes, and the tools of process improvement are readily applied to them. One of the criteria against which candidates are examined for the Baldrige Award, the most important competitive honour granted to US organizations, is the competitor’s improvements in occupational health and safety. Yarborough has described how the occupational and environmental health (OEH) employees of a major corporation were instructed by senior management to adopt TQM with the rest of the company and how OEH was integrated into the company’s strategic quality plan (Yarborough 1994). The chief executive of a US utility that was the first non-Japanese company ever to win Japan’s coveted Deming Prize notes that safety was accorded a high priority in the TQM effort: “Of all the company’s major quality indicators, the only one that addresses the internal customer is employee safety.” By defining safety as a process, subjecting it to continuous improvement, and tracking lost-time injuries per 100 employees as a quality indicator, the utility reduced its injury rate by half, reaching the lowest point in the history of the company (Hudiberg 1991).
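An indicator of this kind can be computed with the conventional incidence-rate formula, in which a base of 200,000 hours corresponds to 100 full-time employees working 40 hours a week for 50 weeks; the sketch below uses purely hypothetical figures and does not reproduce the utility’s actual data:

```python
# Illustrative calculation of a lost-time injury rate per 100 employees,
# using the conventional incidence-rate formula (hypothetical figures).

def lost_time_injury_rate(injuries, hours_worked):
    """Injuries per 100 full-time employees:
    200,000 hours = 100 employees x 40 hours/week x 50 weeks/year."""
    return injuries * 200_000 / hours_worked

# Example: 18 lost-time injuries over 2.4 million hours worked in a year.
before = lost_time_injury_rate(18, 2_400_000)   # 1.5 per 100 employees
after = lost_time_injury_rate(9, 2_400_000)     # halving injuries halves the rate
print(f"rate before: {before:.2f}, rate after: {after:.2f}")
```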
In summary, TQM is a comprehensive management system grounded in a management philosophy that emphasizes the human dimensions of work. It is supported by a powerful set of technologies that use data derived from work processes to document, analyse and continuously improve these processes.
The term unemployment describes the situation of individuals who desire to work but are unable to trade their skills and labour for pay. It is used to indicate either an individual’s personal experience of failure to find gainful work, or the experience of an aggregate in a community, a geographic region or a country. The collective phenomenon of unemployment is often expressed as the unemployment rate, that is, the number of people who are seeking work divided by the total number of people in the labour force, which in turn consists of both the employed and the unemployed. Individuals who desire to work for pay but have given up their efforts to find work are termed discouraged workers. These persons are not listed in official reports as members of the group of unemployed workers, for they are no longer considered to be part of the labour force.
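As an arithmetic illustration of these definitions (with hypothetical figures), note that discouraged workers are excluded from both the numerator and the labour force, so the official rate understates the number of people who want work:

```python
# Illustrative calculation of the unemployment rate (hypothetical figures).
# Labour force = employed + unemployed job seekers; discouraged workers
# are excluded from both the numerator and the labour force.

employed = 920_000
unemployed_seeking_work = 80_000
discouraged_workers = 30_000          # want work but have stopped searching

labour_force = employed + unemployed_seeking_work
rate = 100.0 * unemployed_seeking_work / labour_force
print(f"unemployment rate: {rate:.1f}%")   # 8.0%

# If discouraged workers were counted as unemployed, the measured rate would be higher:
broader = 100.0 * (unemployed_seeking_work + discouraged_workers) / (labour_force + discouraged_workers)
print(f"including discouraged workers: {broader:.1f}%")
```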
The Organization for Economic Cooperation and Development (OECD) provides statistical information on the magnitude of unemployment in 25 countries around the world (OECD 1995). These consist mostly of the economically developed countries of Europe and North America, as well as Japan, New Zealand and Australia. According to the report for the year 1994, the total unemployment rate in these countries was 8.1% (or 34.3 million individuals). In the developed countries of central and western Europe, the unemployment rate was 9.9% (11 million), in the southern European countries 13.7% (9.2 million), and in the United States 6.1% (8 million). Of the 25 countries studied, only six (Austria, Iceland, Japan, Mexico, Luxembourg and Switzerland) had an unemployment rate below 5%. The report projected only a slight overall decrease (less than one-half of 1%) in unemployment for the years 1995 and 1996. These figures suggest that millions of individuals will continue to be vulnerable to the harmful effects of unemployment in the foreseeable future (Reich 1991).
A large number of people become unemployed at various periods during their lives. Depending on the structure of the economy and on its cycles of expansion and contraction, unemployment may strike students who drop out of school; those who have graduated from high school, trade school or college but find it difficult to enter the labour market for the first time; women seeking to return to gainful employment after raising their children; veterans of the armed services; and older persons who want to supplement their income after retirement. However, at any given time, the largest segment of the unemployed population, usually between 50 and 65%, consists of displaced workers who have lost their jobs. The problems associated with unemployment are most visible in this segment of the unemployed partly because of its size. Unemployment is also a serious problem for minorities and younger persons. Their unemployment rates are often two to three times higher than those of the general population (USDOL 1995).
The fundamental causes of unemployment are rooted in demographic, economic and technological changes. The restructuring of local and national economies usually gives rise to at least temporary periods of high unemployment rates. The trend towards the globalization of markets, coupled with accelerated technological changes, results in greater economic competition and the transfer of industries and services to new places that supply more advantageous economic conditions in terms of taxation, a cheaper labour force and more accommodating labour and environmental laws. Inevitably, these changes exacerbate the problems of unemployment in areas that are economically depressed.
Most people depend on the income from a job to provide themselves and their families with the necessities of life and to sustain their accustomed standard of living. When they lose a job, they experience a substantial reduction in their income. Mean duration of unemployment, in the United States for example, varies between 16 and 20 weeks, with a median between eight and ten weeks (USDOL 1995). If the period of unemployment that follows the job loss persists so that unemployment benefits are exhausted, the displaced worker faces a financial crisis. That crisis plays itself out as a cascading series of stressful events that may include loss of a car through repossession, foreclosure on a house, loss of medical care, and food shortages. Indeed, an abundance of research in Europe and the United States shows that economic hardship is the most consistent outcome of unemployment (Fryer and Payne 1986), and that economic hardship mediates the adverse impact of unemployment on various other outcomes, in particular, on mental health (Kessler, Turner and House 1988).
There is a great deal of evidence that job loss and unemployment produce significant deterioration in mental health (Fryer and Payne 1986). The most common outcomes of job loss and unemployment are increases in anxiety, somatic symptoms and depression symptomatology (Dooley, Catalano and Wilson 1994; Hamilton et al. 1990; Kessler, House and Turner 1987; Warr, Jackson and Banks 1988). Furthermore, there is some evidence that unemployment increases by over twofold the risk of onset of clinical depression (Dooley, Catalano and Wilson 1994). In addition to the well-documented adverse effects of unemployment on mental health, there is research that implicates unemployment as a contributing factor to other outcomes (see Catalano 1991 for a review). These outcomes include suicide (Brenner 1976), separation and divorce (Stack 1981; Liem and Liem 1988), child neglect and abuse (Steinberg, Catalano and Dooley 1981), alcohol abuse (Dooley, Catalano and Hough 1992; Catalano et al. 1993a), violence in the workplace (Catalano et al. 1993b), criminal behaviour (Allan and Steffensmeier 1989), and highway fatalities (Leigh and Waldon 1991). Finally, there is also some evidence, based primarily on self-report, that unemployment contributes to physical illness (Kessler, House and Turner 1987).
The adverse effects of unemployment on displaced workers are not limited to the period during which they have no jobs. In most instances, when workers become re-employed, their new jobs are significantly worse than the jobs they lost. Even after four years in their new positions, their earnings are substantially lower than those of similar workers who were not laid off (Ruhm 1991).
Because the fundamental causes of job loss and unemployment are rooted in societal and economic processes, remedies for their adverse social effects must be sought in comprehensive economic and social policies (Blinder 1987). At the same time, various community-based programmes can be undertaken to reduce the negative social and psychological impact of unemployment at the local level. There is overwhelming evidence that re-employment reduces distress and depression symptoms and restores psychosocial functioning to pre-unemployment levels (Kessler, Turner and House 1989; Vinokur, Caplan and Williams 1987). Therefore, programmes for displaced workers or others who wish to become employed should be aimed primarily at promoting and facilitating their re-employment or new entry into the labour force. A variety of such programmes have been tried successfully. Among these are special community-based intervention programmes for creating new ventures that in turn generate job opportunities (e.g., Last et al. 1995), and others that focus on retraining (e.g., Wolf et al. 1995).
Of the various programmes that attempt to promote re-employment, the most common are job search programmes organized as job clubs that attempt to intensify job search efforts (Azrin and Besalel 1982), or workshops that focus more broadly on enhancing job search skills and facilitating transition into re-employment in high-quality jobs (e.g., Caplan et al. 1989). Cost/benefit analyses have demonstrated that these job search programmes are cost effective (Meyer 1995; Vinokur et al. 1991). Furthermore, there is also evidence that they could prevent deterioration in mental health and possibly the onset of clinical depression (Price, van Ryn and Vinokur 1992).
Similarly, in the case of organizational downsizing, industries can reduce the scope of unemployment by devising ways to involve workers in the decision-making process regarding the management of the downsizing programme (Kozlowski et al. 1993; London 1995; Price 1990). Workers may choose to pool their resources and buy out the industry, thus avoiding layoffs; to reduce working hours to spread and even out the reduction in force; to agree to a reduction in wages to minimize layoffs; to retrain and/or relocate to take new jobs; or to participate in outplacement programmes. Employers can facilitate the process by timely implementation of a strategic plan that offers the above-mentioned programmes and services to workers at risk of being laid off. As has been indicated already, unemployment leads to pernicious outcomes at both the personal and societal level. A combination of comprehensive government policies, flexible downsizing strategies by business and industry, and community-based programmes can help to mitigate the adverse consequences of a problem that will continue to affect the lives of millions of people for years to come.
Downsizing, layoffs, re-engineering, reshaping, reduction in force (RIF), mergers, early retirement, and outplacement—the description of these increasingly familiar changes has become a matter of commonplace jargon around the world in the past two decades. As companies have fallen on hard times, workers at all organizational levels have been let go and many remaining jobs have been altered. The job loss count in a single year (1992–93) includes Eastman Kodak, 2,000; Siemens, 13,000; Daimler-Benz, 27,000; Philips, 40,000; and IBM, 65,000 (The Economist 1993). Job cuts have occurred at companies earning healthy profits as well as at firms faced with the need to cut costs. The trend of cutting jobs and changing the way remaining jobs are performed is expected to continue even after worldwide economic growth returns.
Why has losing and changing jobs become so widespread? There is no simple answer that fits every organization or situation. However, one or more of a number of factors is usually implicated, including lost market share, increasing international and domestic competition, increasing labour costs, obsolete plant and technologies and poor managerial practices. These factors have resulted in managerial decisions to slim down, re-engineer jobs and alter the psychological contract between the employer and the worker.
A work situation in which an employee could count on job security or the opportunity to hold multiple positions via career-enhancing promotions in a single firm has changed drastically. Similarly, the binding power of the traditional employer-worker psychological contract has weakened as millions of managers and non-managers have been let go. Japan was once famous for providing “lifetime” employment to individuals. Today, even in Japan, a growing number of workers, especially in large firms, are not assured of lifetime employment. The Japanese, like their counterparts across the world, are facing what can be referred to as increased job insecurity and an ambiguous picture of what the future holds.
Job Insecurity: An Interpretation
Maslow (1954), Herzberg, Mausner and Snyderman (1959) and Super (1957) have proposed that individuals have a need for safety or security. That is, individual workers feel secure when they hold a permanent job or when they are able to control the tasks they perform on the job. Unfortunately, only a limited number of empirical studies have thoroughly examined the job security needs of workers (Kuhnert and Palmer 1991; Kuhnert, Sims and Lahey 1989).
On the other hand, with the increased attention being paid to downsizing, layoffs and mergers, more researchers have begun to investigate the notion of job insecurity. The nature, causes and consequences of job insecurity have been considered by Greenhalgh and Rosenblatt (1984), who define job insecurity as “perceived powerlessness to maintain desired continuity in a threatened job situation”. In Greenhalgh and Rosenblatt’s framework, job insecurity is considered a part of a person’s environment. In the stress literature, job insecurity is considered a stressor that introduces a threat which is interpreted and responded to by the individual. An individual’s interpretation of and response to this threat may include decreased effort to perform well, feeling ill or below par, seeking employment elsewhere, increased coping to deal with the threat, or seeking more interaction with colleagues to buffer the feelings of insecurity.
Lazarus’ theory of psychological stress (Lazarus 1966; Lazarus and Folkman 1984) is centred on the concept of cognitive appraisal. Regardless of the actual severity of the danger facing a person, the occurrence of psychological stress depends upon the individual’s own evaluation of the threatening situation (here, job insecurity).
Selected Research on Job Insecurity
Unfortunately, as with the research on job security, there is a paucity of well-designed studies of job insecurity. Furthermore, the majority of job insecurity studies employ unitary measurement methods. Few researchers examining stressors in general, or job insecurity specifically, have adopted a multiple-level approach to assessment. This is understandable given the limitations of resources. However, the problems created by unitary assessments of job insecurity have resulted in a limited understanding of the construct. Four basic methods of measuring job insecurity are available to researchers: self-report, performance, psychophysiological and biochemical. It is still debatable whether these four types of measure assess different aspects of the consequences of job insecurity (Baum, Grunberg and Singer 1982). Each type of measure has limitations that must be recognized.
In addition to these measurement problems, it must be noted that job insecurity research has concentrated predominantly on imminent or actual job loss. As researchers have noted (Greenhalgh and Rosenblatt 1984; Roskies and Louis-Guerin 1990), more attention should be paid to “concern about a significant deterioration in terms and conditions of employment”. Deterioration in working conditions would logically seem to play a role in a person’s attitudes and behaviours.
Brenner (1987) has discussed the relationship between a job insecurity factor, unemployment, and mortality. He proposed that uncertainty, or the threat of instability, rather than unemployment itself causes higher mortality. The threat of being unemployed or losing control of one’s job activities can be powerful enough to contribute to psychiatric problems.
In a study of 1,291 managers, Roskies and Louis-Guerin (1990) examined the perceptions of managers facing layoffs as well as those of managers working in stable, growth-oriented firms. A minority of managers were stressed about imminent job loss; however, a substantial number were more stressed about a deterioration in working conditions and long-term job security.
Roskies, Louis-Guerin and Fournier (1993) proposed, on the basis of a research study, that job insecurity may be a major psychological stressor. In this study of personnel in the airline industry, the researchers determined that personality disposition (positive and negative) plays a role in the impact of job insecurity on the mental health of workers.
Addressing the Problem of Job Insecurity
Organizations have numerous alternatives to downsizing, layoffs and reduction in force. An important first step is displaying compassion that clearly shows management realizes the hardships that job loss and future job ambiguity pose. Alternatives such as reduced work weeks, across-the-board salary cuts, attractive early retirement packages, retraining of existing employees and voluntary layoff programmes can be implemented (Wexley and Silverman 1993).
The global marketplace has increased job demands and job skill requirements. For some people, these increased demands and skill requirements will provide career opportunities; for others, they could exacerbate feelings of job insecurity. It is difficult to pinpoint exactly how individual workers will respond. However, managers must be aware that job insecurity can result in negative consequences, and they need to acknowledge and respond to it. Possessing a better understanding of the notion of job insecurity and its potential negative impact on the performance, behaviour and attitudes of workers is a step in the right direction for managers.
It will obviously require more rigorous research to better understand the full range of consequences of job insecurity among different groups of workers. As additional information becomes available, managers need to be open-minded about attempting to help workers cope with job insecurity. Redefining the way work is organized and executed should become a useful alternative to traditional job design methods. Managers also have a responsibility in this regard.
Since job insecurity is likely to remain a perceived threat for many, but not all, workers, managers need to develop and implement strategies to address this factor. The institutional costs of ignoring job insecurity are too great for any firm to accept. Whether managers can efficiently deal with workers who feel insecure about their jobs and working conditions is fast becoming a measure of managerial competency.
The nature, prevalence, predictors and possible consequences of workplace violence have begun to attract the attention of labour and management practitioners, and researchers. The reason for this is the increasing occurrence of highly visible workplace murders. Once the focus is placed on workplace violence, it becomes clear that there are several issues, including the nature (or definition), prevalence, predictors, consequences and ultimately prevention of workplace violence.
Definition and Prevalence of Workplace Violence
The definition and prevalence of workplace violence are integrally related.
Consistent with the relative recency with which workplace violence has attracted attention, there is no uniform definition. This is an important issue for several reasons. First, until a uniform definition exists, any estimates of prevalence remain incomparable across studies and sites. Secondly, the nature of the violence is linked to strategies for prevention and interventions. For example, focusing on all instances of shootings within the workplace includes incidents that reflect the continuation of family conflicts, as well as those that reflect work-related stressors and conflicts. While employees would no doubt be affected in both situations, the control the organization has over the former is more limited, and hence the implications for interventions are different from those situations in which workplace shootings are a direct function of workplace stressors and conflicts.
Some statistics suggest that workplace murders are the fastest growing form of murder in the United States (for example, Anfuso 1994). In some jurisdictions (for example, New York State), murder is the modal cause of death in the workplace. Because of statistics such as these, workplace violence has attracted considerable attention recently. However, early indications suggest that those acts of workplace violence with the highest visibility (for example, murder, shootings) attract the greatest research scrutiny but occur with the least frequency. In contrast, verbal and psychological aggression against supervisors, subordinates and co-workers is far more common, but receives less attention. Supporting the notion of a close integration between definitional and prevalence issues, this would suggest that what is being studied in most cases is aggression rather than violence in the workplace.
Predictors of Workplace Violence
A reading of the literature on the predictors of workplace violence reveals that most of the attention has been focused on developing a “profile” of the potentially violent or “disgruntled” employee (for example, Mantell and Albrecht 1994; Slora, Joy and Terris 1991). Such profiles typically identify the following salient personal characteristics: white, male, aged 20 to 35, a “loner”, a probable alcohol problem and a fascination with guns. Aside from the number of false-positive identifications this would lead to, this strategy is also based on identifying individuals predisposed to the most extreme forms of violence, and it ignores the larger group involved in most of the aggressive and less violent workplace incidents.
Going beyond “demographic” characteristics, there are suggestions that some of the personal factors implicated in violence outside of the workplace would extend to the workplace itself. Thus, inappropriate use of alcohol, general history of aggression in one’s current life or family of origin, and low self-esteem have been implicated in workplace violence.
A more recent strategy has been to identify the physical and psychosocial conditions in the workplace under which violence is most likely to occur. While the research on psychosocial factors is still in its infancy, it appears that feelings of job insecurity, perceptions that organizational policies and their implementation are unjust, harsh management and supervision styles, and electronic monitoring are associated with workplace aggression and violence (United States House of Representatives 1992; Fox and Levin 1994).
Cox and Leather (1994) look to the predictors of aggression and violence in general in their attempt to understand the physical factors that predict workplace violence. In this respect, they suggest that workplace violence may be associated with perceived crowding, and extreme heat and noise. However, these suggestions about the causes of workplace violence await empirical scrutiny.
Consequences of Workplace Violence
The research to date suggests that there are primary and secondary victims of workplace violence, both of which are worthy of research attention. Bank tellers or store clerks who are held up and employees who are assaulted at work by current or former co-workers are the obvious or direct victims of violence at work. However, consistent with the literature showing that much human behaviour is learned from observing others, witnesses to workplace violence are secondary victims. Both groups might be expected to suffer negative effects, and more research is needed to focus on the way in which both aggression and violence at work affect primary and secondary victims.
Prevention of Workplace Violence
Most of the literature on the prevention of workplace violence focuses at this stage on selection, i.e., the prior identification of potentially violent individuals for the purpose of excluding them from employment in the first instance (for example, Mantell and Albrecht 1994). Such strategies are of dubious utility for ethical and legal reasons. From a scientific perspective, it is equally doubtful whether potentially violent employees could be identified with sufficient precision (i.e., without an unacceptably high number of false-positive identifications). Clearly, a preventive approach needs to focus instead on workplace issues and job design. Following Fox and Levin’s (1994) reasoning, ensuring that organizational policies and procedures are characterized by perceived justice will probably constitute an effective prevention technique.
Conclusion
Research on workplace violence is in its infancy, but gaining increasing attention. This bodes well for the further understanding, prediction and control of workplace aggression and violence.
Historically, the sexual harassment of female workers has been ignored, denied, made to seem trivial, condoned and even implicitly supported, with women themselves being blamed for it (MacKinnon 1978). Its victims are almost entirely women, and it has been a problem since females first sold their labour outside the home.
Although sexual harassment also exists outside the workplace, here it will be taken to denote harassment in the workplace.
Sexual harassment is not an innocent flirtation nor the mutual expression of attraction between men and women. Rather, sexual harassment is a workplace stressor that poses a threat to a woman’s psychological and physical integrity and security, in a context in which she has little control because of the risk of retaliation and the fear of losing her livelihood. Like other workplace stressors, sexual harassment may have adverse health consequences for women that can be serious and, as such, qualifies as a workplace health and safety issue (Bernstein 1994).
In the United States, sexual harassment is viewed primarily as a discrete case of wrongful conduct to which one may appropriately respond with blame and recourse to legal measures for the individual. In the European Community it tends to be viewed rather as a collective health and safety issue (Bernstein 1994).
Because the manifestations of sexual harassment vary, people may not agree on its defining qualities, even where it has been set forth in law. Still, there are some common features of harassment that are generally accepted by those doing work in this area:
When directed towards a specific woman, it can involve sexual comments and seductive behaviours, “propositions” and pressure for dates, touching, sexual coercion through the use of threats or bribery, and even physical assault and rape. In the case of a “hostile environment”, which is probably the more common state of affairs, it can involve jokes, taunts and other sexually charged comments that are threatening and demeaning to women; pornographic or sexually explicit posters; and crude sexual gestures, and so forth. To these one can add what is sometimes called “gender harassment”, which involves sexist remarks that demean the dignity of women.
Women themselves may not label unwanted sexual attention or sexual remarks as harassing because they accept such behaviour as “normal” on the part of males (Gutek 1985). In general, women (especially if they have been harassed) are more likely to identify a situation as sexual harassment than men, who tend rather to make light of the situation, to disbelieve the woman in question or to blame her for “causing” the harassment (Fitzgerald and Ormerod 1993). People also are more likely to label incidents involving supervisors as sexually harassing than similar behaviour by peers (Fitzgerald and Ormerod 1993). This tendency reveals the significance of the differential power relationship between the harasser and the female employee (MacKinnon 1978). As an example, a comment that a male supervisor may believe is complimentary may still be threatening to his female employee, who may fear that it will lead to pressure for sexual favours and that there will be retaliation for a negative response, including the potential loss of her job or negative evaluations.
Even when co-workers are involved, sexual harassment can be difficult for women to control and can be very stressful for them. This situation can occur where there are many more men than women in a work group, a hostile work environment is created and the supervisor is male (Gutek 1985; Fitzgerald and Ormerod 1993).
National data on sexual harassment are not collected, and it is difficult to obtain accurate numbers on its prevalence. In the United States, it has been estimated that 50% of all women will experience some form of sexual harassment during their working lives (Fitzgerald and Ormerod 1993). These numbers are consistent with surveys conducted in Europe (Bustelo 1992), although there is variation from country to country (Kauppinen-Toropainen and Gruber 1993). The extent of sexual harassment is also difficult to determine because women may not label it accurately and because of underreporting. Women may fear that they will be blamed, humiliated and not believed, that nothing will be done and that reporting problems will result in retaliation (Fitzgerald and Ormerod 1993). Instead, they may try to live with the situation or leave their jobs and risk serious financial hardship, a disruption of their work histories and problems with references (Koss et al. 1994).
Sexual harassment reduces job satisfaction and increases turnover, so that it has costs for the employer (Gutek 1985; Fitzgerald and Ormerod 1993; Kauppinen-Toropainen and Gruber 1993). Like other workplace stressors, it also can have negative effects on health that are sometimes quite serious. When the harassment is severe, as with rape or attempted rape, women are seriously traumatized. Even where sexual harassment is less severe, women can have psychological problems: they may become fearful, guilty and ashamed, depressed, nervous and less self-confident. They may have physical symptoms such as stomach-aches, headaches or nausea. They may have behavioural problems such as sleeplessness, over- or undereating, sexual problems and difficulties in their relations with others (Swanson et al. 1997).
Both the formal American and informal European approaches to combating harassment provide illustrative lessons (Bernstein 1994). In Europe, sexual harassment is sometimes dealt with by conflict resolution approaches that bring in third parties to help eliminate the harassment (e.g., England’s “challenge technique”). In the United States, sexual harassment is a legal wrong that provides victims with redress through the courts, although success is difficult to achieve. Victims of harassment also need to be supported through counselling, where needed, and helped to understand that they are not to blame for the harassment.
Prevention is the key to combating sexual harassment. Guidelines encouraging prevention have been promulgated through the European Commission Code of Practice (Rubenstein and DeVries 1993). They include the following: clear anti-harassment policies that are effectively communicated; special training and education for managers and supervisors; a designated ombudsperson to deal with complaints; formal grievance procedures and alternatives to them; and disciplinary treatment of those who violate the policies. Bernstein (1994) has suggested that mandated self-regulation may be a viable approach.
Finally, sexual harassment needs to be openly discussed as a workplace issue of legitimate concern to women and men. Trade unions have a critical role to play in helping place this issue on the public agenda. Ultimately, an end to sexual harassment requires that men and women reach social and economic equality and full integration in all occupations and workplaces.
Roles represent sets of behaviours that are expected of employees. To understand how organizational roles develop, it is particularly informative to see the process through the eyes of a new employee. Starting with the first day on the job, a new employee is presented with considerable information designed to communicate the organization’s role expectations. Some of this information is presented formally through a written job description and regular communications with one’s supervisor. Hackman (1992), however, states that workers also receive a variety of informal communications (termed discretionary stimuli) designed to shape their organizational roles. For example, a junior faculty member who is too vocal during a departmental meeting may receive looks of disapproval from more senior colleagues. Such looks are subtle, but they communicate much about what is expected of a junior colleague.
Ideally, the process of defining each employee’s role should proceed such that each employee is clear about his or her role. Unfortunately, this is often not the case and employees experience a lack of role clarity or, as it is commonly called, role ambiguity. According to Breaugh and Colihan (1994), employees are often unclear about how to do their jobs, when certain tasks should be performed and the criteria by which their performance will be judged. In some cases, it is simply difficult to provide an employee with a crystal-clear picture of his or her role. For example, when a job is relatively new, it is still “evolving” within the organization. Furthermore, in many jobs the individual employee has tremendous flexibility regarding how to get the job done. This is particularly true of highly complex jobs. In many other cases, however, role ambiguity is simply due to poor communication between either supervisors and subordinates or among members of work groups.
Another problem that can arise when role-related information is communicated to employees is role overload. That is, the role consists of too many responsibilities for an employee to handle in a reasonable amount of time. Role overload can occur for a number of reasons. In some occupations, role overload is the norm. For example, physicians in training experience tremendous role overload, largely as preparation for the demands of medical practice. In other cases, it is due to temporary circumstances. For example, if someone leaves an organization, the roles of other employees may need to be temporarily expanded to cover the departed worker’s duties. In other instances, organizations may not anticipate the demands of the roles they create, or the nature of an employee’s role may change over time. Finally, it is also possible that an employee may voluntarily take on too many role responsibilities.
What are the consequences to workers in circumstances characterized by role ambiguity, role overload or both? Years of research on role ambiguity have shown that it is a noxious state which is associated with negative psychological, physical and behavioural outcomes (Jackson and Schuler 1985). That is, workers who perceive role ambiguity in their jobs tend to be dissatisfied with their work, to be anxious and tense, to report high numbers of somatic complaints and to be absent from work, and they may leave their jobs. The most common correlates of role overload are physical and emotional exhaustion. In addition, epidemiological research has shown that overloaded individuals (as measured by work hours) may be at greater risk for coronary heart disease. In considering the effects of both role ambiguity and role overload, it must be kept in mind that most studies are cross-sectional (measuring role stressors and outcomes at one point in time) and have examined self-reported outcomes. Thus, inferences about causality must remain tentative.
Given the negative effects of role ambiguity and role overload, it is important for organizations to minimize, if not eliminate, these stressors. Since role ambiguity, in many cases, is due to poor communication, it is necessary to take steps to communicate role requirements more effectively. French and Bell (1990), in a book entitled Organization Development, describe interventions such as responsibility charting, role analysis and role negotiation. (For a recent example of the application of responsibility charting, see Schaubroeck et al. 1993). Each of these is designed to make employees’ role requirements explicit and well defined. In addition, these interventions allow employees input into the process of defining their roles.
When role requirements are made explicit, it may also be revealed that role responsibilities are not equitably distributed among employees. Thus, the previously mentioned interventions may also prevent role overload. In addition, organizations should keep up to date regarding individuals’ role responsibilities by reviewing job descriptions and carrying out job analyses (Levine 1983). It may also help to encourage employees to be realistic about the number of role responsibilities they can handle. In some cases, employees who are under pressure to take on too much may need to be more assertive when negotiating role responsibilities.
As a final comment, it must be remembered that role ambiguity and role overload are subjective states. Thus, efforts to reduce these stressors must consider individual differences. Some workers may in fact enjoy the challenge of these stressors. Others, however, may find them aversive. If this is the case, organizations have a moral, legal and financial interest in keeping these stressors at manageable levels.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."