34. Psychosocial and Organizational Factors
Chapter Editors: Steven L. Sauter, Lawrence R. Murphy, Joseph J. Hurrell and Lennart Levi
Psychosocial and Organizational Factors
Steven L. Sauter, Joseph J. Hurrell Jr., Lawrence R. Murphy and Lennart Levi
Psychosocial Factors, Stress and Health
Lennart Levi
Demand/Control Model: A Social, Emotional, and Physiological Approach to Stress Risk and Active Behaviour Development
Robert Karasek
Social Support: An Interactive Stress Model
Kristina Orth-Gomér
Person-Environment Fit
Robert D. Caplan
Workload
Marianne Frankenhaeuser
Hours of Work
Timothy H. Monk
Environmental Design
Daniel Stokols
Ergonomic Factors
Michael J. Smith
Autonomy and Control
Daniel Ganster
Work Pacing
Gavriel Salvendy
Electronic Work Monitoring
Lawrence M. Schleifer
Role Clarity and Role Overload
Steve M. Jex
Sexual Harassment
Chaya S. Piotrkowski
Workplace Violence
Julian Barling
Job Future Ambiguity
John M. Ivancevich
Unemployment
Amiram D. Vinokur
Total Quality Management
Dennis Tolsma
Managerial Style
Cary L. Cooper and Mike Smith
Organizational Structure
Lois E. Tetrick
Organizational Climate and Culture
Denise M. Rousseau
Performance Measures and Compensation
Richard L. Shell
Staffing Issues
Marilyn K. Gowing
Socialization
Debra L. Nelson and James Campbell Quick
Career Stages
Kari Lindström
Type A/B Behaviour Pattern
C. David Jenkins
Hardiness
Suzanne C. Ouellette
Self-Esteem
John M. Schaubroeck
Locus of Control
Lawrence R. Murphy and Joseph J. Hurrell, Jr.
Coping Styles
Ronald J. Burke
Social Support
D. Wayne Corneil
Gender, Job Stress and Illness
Rosalind C. Barnett
Ethnicity
Gwendolyn Puryear Keita
Selected Acute Physiological Outcomes
Andrew Steptoe and Tessa M. Pollard
Behavioural Outcomes
Arie Shirom
Well-Being Outcomes
Peter Warr
Immunological Reactions
Holger Ursin
Cardiovascular Diseases
Töres Theorell and Jeffrey V. Johnson
Gastrointestinal Problems
Jerry Suls
Cancer
Bernard H. Fox
Musculoskeletal Disorders
Soo-Yee Lim, Steven L. Sauter and Naomi G. Swanson
Mental Illness
Carles Muntaner and William W. Eaton
Burnout
Christina Maslach
Summary of Generic Prevention and Control Strategies
Cary L. Cooper and Sue Cartwright
35. Organizations and Health and Safety
Chapter Editor: Gunnela Westlander
Psychosocial Factors and Organizational Management
Gunnela Westlander
Case Study: Organizational Change as the Method--Health at Work as the Main Goal
Case Study: Applying Organizational Psychology
The computerization of work has made possible the development of a new approach to work monitoring called electronic performance monitoring (EPM). EPM has been defined as the “computerized collection, storage, analysis, and reporting of information about employees’ activities on a continuous basis” (USOTA 1987). Although banned in many European countries, electronic performance monitoring is increasing throughout the world on account of intense competitive pressures to improve productivity in a global economy.
EPM has changed the psychosocial work environment. This application of computer technology has significant implications for work supervision, workload demands, performance appraisal, performance feedback, rewards, fairness and privacy. As a result, occupational health researchers, worker representatives, government agencies and the public news media have expressed concern about the stress-health effects of electronic performance monitoring (USOTA 1987).
Traditional approaches to work monitoring include direct observation of work behaviours, examination of work samples, review of progress reports and analysis of performance measures (Larson and Callahan 1990). Historically, employers have always attempted to improve on these methods of monitoring worker performance. Considered as part of a continuing monitoring effort across the years, then, EPM is not a new development. What is new, however, is the use of EPM, particularly in office and service work, to capture employee performance on a second-by-second, keystroke-by-keystroke basis so that work management in the form of corrective action, performance feedback, delivery of incentive pay, or disciplinary measures can be taken at any time (Smith 1988). In effect, the human supervisor is being replaced by an electronic supervisor.
EPM is used in office work such as word processing and data entry to monitor keystroke production and error rates. Airline reservation clerks and directory assistance operators are monitored by computers to determine how long it takes to service customers and to measure the time interval between calls. EPM also is used in more traditional economic sectors. Freight haulers, for example, are using computers to monitor driver speed and fuel consumption, and tire manufacturers are electronically monitoring the productivity of rubber workers. In sum, EPM is used to establish performance standards, track employee performance, compare actual performance with predetermined standards and administer incentive pay programmes based on these standards (USOTA 1987).
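The comparison step described above lends itself to a simple illustration. The sketch below is hypothetical: the function name, the keystroke standard of 9,000 per hour and the sample figures are illustrative assumptions, not values drawn from the monitoring systems discussed in this article.

```python
# Hypothetical sketch of the core EPM comparison: continuously logged output
# is checked against a predetermined performance standard.

def evaluate_operator(keystrokes_logged: int, hours_worked: float,
                      standard_per_hour: float = 9000) -> dict:
    """Compare logged output against a predetermined standard (illustrative values)."""
    actual_rate = keystrokes_logged / hours_worked
    ratio = actual_rate / standard_per_hour
    return {
        "actual_rate": round(actual_rate),
        "percent_of_standard": round(100 * ratio, 1),
        "meets_standard": ratio >= 1.0,
    }

# Example: 34,000 keystrokes logged over a 4-hour monitoring period.
print(evaluate_operator(keystrokes_logged=34000, hours_worked=4.0))
# {'actual_rate': 8500, 'percent_of_standard': 94.4, 'meets_standard': False}
```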
Advocates of EPM assert that continuous electronic work monitoring is essential to high performance and productivity in the contemporary workplace. It is argued that EPM enables managers and supervisors to organize and control human, material and financial resources. Specifically, EPM provides for:
Supporters of electronic monitoring also claim that, from the worker’s perspective, there are several benefits. Electronic monitoring, for example, can provide regular feedback of work performance, which enables workers to take corrective action when necessary. It also satisfies the worker’s need for self-evaluation and reduces performance uncertainty.
Despite the possible benefits of EPM, there is concern that certain monitoring practices are abusive and constitute an invasion of employee privacy (USOTA 1987). Privacy has become an issue particularly when workers do not know when or how often they are being monitored. Since work organizations often do not share performance data with workers, a related privacy issue is whether workers should have access to their own performance records or the right to challenge information they believe to be inaccurate.
Workers also have raised objections to the manner in which monitoring systems have been implemented (Smith, Carayon and Miezio 1986; Westin 1986). In some workplaces, monitoring is perceived as an unfair labour practice when it is used to measure individual, as opposed to group, performance. In particular, workers have taken exception to the use of monitoring to enforce compliance with performance standards that impose excessive workload demands. Electronic monitoring also can make the work process more impersonal by replacing a human supervisor with an electronic supervisor. In addition, the overemphasis on increased production may encourage workers to compete instead of cooperate with one another.
Various theoretical paradigms have been postulated to account for the possible stress-health effects of EPM (Amick and Smith 1992; Schleifer and Shell 1992; Smith et al. 1992b). A fundamental assumption made by many of these models is that EPM indirectly influences stress-health outcomes by intensifying workload demands, diminishing job control and reducing social support. In effect, EPM mediates changes in the psychosocial work environment that result in an imbalance between the demands of the job and the worker’s resources to adapt.
The impact of EPM on the psychosocial work environment is felt at three levels of the work system: the organization-technology interface, the job-technology interface and the human-technology interface (Amick and Smith 1992). The extent of work system transformation and the subsequent implications for stress outcomes are contingent upon the inherent characteristics of the EPM process; that is, the type of information gathered, the method of gathering the information and the use of the information (Carayon 1993). These EPM characteristics can interact with various job design factors and increase stress-health risks.
An alternative theoretical perspective views EPM as a stressor that directly results in strain independent of other job-design stress factors (Smith et al. 1992b; Carayon 1994). EPM, for example, can generate fear and tension as a result of workers being constantly watched by “Big Brother”. EPM also may be perceived by workers as an invasion of privacy that is highly threatening.
With respect to the stress effects of EPM, empirical evidence obtained from controlled laboratory experiments indicates that EPM can produce mood disturbances (Aiello and Shao 1993; Schleifer, Galinsky and Pan 1995) and hyperventilatory stress reactions (Schleifer and Ley 1994). Field studies have also reported that EPM alters job-design stress factors (for example, workload), which, in turn, generate tension or anxiety together with depression (Smith, Carayon and Miezio 1986; Ditecco et al. 1992; Smith et al. 1992b; Carayon 1994). In addition, EPM is associated with symptoms of musculoskeletal discomfort among telecommunication workers and data-entry office workers (Smith et al. 1992b; Sauter et al. 1993; Schleifer, Galinsky and Pan 1995).
The use of EPM to enforce compliance with performance standards is perhaps one of the most stressful aspects of this approach to work monitoring (Schleifer and Shell 1992). Under these conditions, it may be useful to adjust performance standards with a stress allowance (Schleifer and Shell 1992): a stress allowance would be applied to the normal cycle time, as is the case with other more conventional work allowances such as rest breaks and machine delays. Particularly among workers who have difficulty meeting EPM performance standards, a stress allowance would optimize workload demands and promote well-being by balancing the productivity benefits of electronic performance monitoring against the stress effects of this approach to work monitoring.
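To make the idea concrete, the sketch below shows one way a stress allowance might be folded into a cycle-time standard alongside conventional allowances. The allowance percentages and the 30-second cycle are illustrative assumptions only, not figures recommended by Schleifer and Shell (1992).

```python
def standard_cycle_time(normal_cycle_s: float,
                        rest_allowance: float = 0.10,
                        delay_allowance: float = 0.05,
                        stress_allowance: float = 0.05) -> float:
    """Adjust a normal cycle time by adding allowances expressed as fractions.

    The stress allowance is treated like conventional rest-break and
    machine-delay allowances; all percentages here are illustrative.
    """
    total_allowance = rest_allowance + delay_allowance + stress_allowance
    return normal_cycle_s * (1 + total_allowance)

# A 30-second normal cycle with 10% rest, 5% delay and 5% stress allowances
# yields a 36-second standard, i.e., a somewhat lower required work pace.
print(round(standard_cycle_time(30.0), 1))  # 36.0
```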
Beyond the question of how to minimize or prevent the possible stress-health effects of EPM, a more fundamental issue is whether this “Tayloristic” approach to work monitoring has any utility in the modern workplace. Work organizations are increasingly utilizing sociotechnical work-design methods, “total quality management” practices, participative work groups, and organizational, as opposed to individual, measures of performance. As a result, electronic work monitoring of individual workers on a continuous basis may have no place in high-performance work systems. In this regard, it is interesting to note that those countries (for example, Sweden and Germany) that have banned EPM are the same countries which have most readily embraced the principles and practices associated with high-performance work systems.
Roles represent sets of behaviours that are expected of employees. To understand how organizational roles develop, it is particularly informative to see the process through the eyes of a new employee. Starting with the first day on the job, a new employee is presented with considerable information designed to communicate the organization’s role expectations. Some of this information is presented formally through a written job description and regular communications with one’s supervisor. Hackman (1992), however, states that workers also receive a variety of informal communications (termed discretionary stimuli) designed to shape their organizational roles. For example, a junior faculty member who is too vocal during a departmental meeting may receive looks of disapproval from more senior colleagues. Such looks are subtle, but communicate much about what is expected of a junior colleague.
Ideally, the process of defining each employee’s role should proceed such that each employee is clear about his or her role. Unfortunately, this is often not the case and employees experience a lack of role clarity or, as it is commonly called, role ambiguity. According to Breaugh and Colihan (1994), employees are often unclear about how to do their jobs, when certain tasks should be performed and the criteria by which their performance will be judged. In some cases, it is simply difficult to provide an employee with a crystal-clear picture of his or her role. For example, when a job is relatively new, it is still “evolving” within the organization. Furthermore, in many jobs the individual employee has tremendous flexibility regarding how to get the job done. This is particularly true of highly complex jobs. In many other cases, however, role ambiguity is simply due to poor communication between either supervisors and subordinates or among members of work groups.
Another problem that can arise when role-related information is communicated to employees is role overload. That is, the role consists of too many responsibilities for an employee to handle in a reasonable amount of time. Role overload can occur for a number of reasons. In some occupations, role overload is the norm. For example, physicians in training experience tremendous role overload, largely as preparation for the demands of medical practice. In other cases, it is due to temporary circumstances. For example, if someone leaves an organization, the roles of other employees may need to be temporarily expanded to cover the departed worker’s responsibilities. In other instances, organizations may not anticipate the demands of the roles they create, or the nature of an employee’s role may change over time. Finally, it is also possible that an employee may voluntarily take on too many role responsibilities.
What are the consequences to workers in circumstances characterized by either role ambiguity or role overload? Years of research on role ambiguity have shown that it is a noxious state which is associated with negative psychological, physical and behavioural outcomes (Jackson and Schuler 1985). That is, workers who perceive role ambiguity in their jobs tend to be dissatisfied with their work, anxious, tense, report high numbers of somatic complaints, tend to be absent from work and may leave their jobs. The most common correlates of role overload tend to be physical and emotional exhaustion. In addition, epidemiological research has shown that overloaded individuals (as measured by work hours) may be at greater risk for coronary heart disease. In considering the effects of both role ambiguity and role overload, it must be kept in mind that most studies are cross-sectional (measuring role stressors and outcomes at one point in time) and have examined self-reported outcomes. Thus, inferences about causality must be somewhat tentative.
Given the negative effects of role ambiguity and role overload, it is important for organizations to minimize, if not eliminate, these stressors. Since role ambiguity, in many cases, is due to poor communication, it is necessary to take steps to communicate role requirements more effectively. French and Bell (1990), in a book entitled Organization Development, describe interventions such as responsibility charting, role analysis and role negotiation. (For a recent example of the application of responsibility charting, see Schaubroeck et al. 1993). Each of these is designed to make employees’ role requirements explicit and well defined. In addition, these interventions allow employees input into the process of defining their roles.
When role requirements are made explicit, it may also be revealed that role responsibilities are not equitably distributed among employees. Thus, the previously mentioned interventions may also prevent role overload. In addition, organizations should keep up to date regarding individuals’ role responsibilities by reviewing job descriptions and carrying out job analyses (Levine 1983). It may also help to encourage employees to be realistic about the number of role responsibilities they can handle. In some cases, employees who are under pressure to take on too much may need to be more assertive when negotiating role responsibilities.
As a final comment, it must be remembered that role ambiguity and role overload are subjective states. Thus, efforts to reduce these stressors must consider individual differences. Some workers may in fact enjoy the challenge of these stressors. Others, however, may find them aversive. If this is the case, organizations have a moral, legal and financial interest in keeping these stressors at manageable levels.
Historically, the sexual harassment of female workers has been ignored, denied, made to seem trivial, condoned and even implicitly supported, with women themselves being blamed for it (MacKinnon 1978). Its victims are almost entirely women, and it has been a problem since females first sold their labour outside the home.
Although sexual harassment also exists outside the workplace, here it will be taken to denote harassment in the workplace.
Sexual harassment is not an innocent flirtation nor the mutual expression of attraction between men and women. Rather, sexual harassment is a workplace stressor that poses a threat to a woman’s psychological and physical integrity and security, in a context in which she has little control because of the risk of retaliation and the fear of losing her livelihood. Like other workplace stressors, sexual harassment may have adverse health consequences for women that can be serious and, as such, qualifies as a workplace health and safety issue (Bernstein 1994).
In the United States, sexual harassment is viewed primarily as a discrete case of wrongful conduct to which one may appropriately respond with blame and recourse to legal measures for the individual. In the European Community it tends to be viewed rather as a collective health and safety issue (Bernstein 1994).
Because the manifestations of sexual harassment vary, people may not agree on its defining qualities, even where it has been set forth in law. Still, there are some common features of harassment that are generally accepted by those doing work in this area:
When harassment is directed towards a specific woman, it can involve sexual comments and seductive behaviours, “propositions” and pressure for dates, touching, sexual coercion through the use of threats or bribery and even physical assault and rape. In the case of a “hostile environment”, which is probably the more common state of affairs, it can involve jokes, taunts and other sexually charged comments that are threatening and demeaning to women; pornographic or sexually explicit posters; and crude sexual gestures, and so forth. One can add to these characteristics what is sometimes called “gender harassment”, which involves sexist remarks that demean the dignity of women.
Women themselves may not label unwanted sexual attention or sexual remarks as harassing because they accept it as “normal” on the part of males (Gutek 1985). In general, women (especially if they have been harassed) are more likely to identify a situation as sexual harassment than men, who tend rather to make light of the situation, to disbelieve the woman in question or to blame her for “causing” the harassment (Fitzgerald and Ormerod 1993). People also are more likely to label incidents involving supervisors as sexually harassing than similar behaviour by peers (Fitzgerald and Ormerod 1993). This tendency reveals the significance of the differential power relationship between the harasser and the female employee (MacKinnon 1978). As an example, a comment that a male supervisor may believe is complimentary may still be threatening to his female employee, who may fear that it will lead to pressure for sexual favours and that there will be retaliation for a negative response, including the potential loss of her job or negative evaluations.
Even when co-workers are involved, sexual harassment can be difficult for women to control and can be very stressful for them. This situation can occur where there are many more men than women in a work group, a hostile work environment is created and the supervisor is male (Gutek 1985; Fitzgerald and Ormerod 1993).
National data on sexual harassment are not collected, and it is difficult to obtain accurate numbers on its prevalence. In the United States, it has been estimated that 50% of all women will experience some form of sexual harassment during their working lives (Fitzgerald and Ormerod 1993). These numbers are consistent with surveys conducted in Europe (Bustelo 1992), although there is variation from country to country (Kauppinen-Toropainen and Gruber 1993). The extent of sexual harassment is also difficult to determine because women may not label it accurately and because of underreporting. Women may fear that they will be blamed, humiliated and not believed, that nothing will be done and that reporting problems will result in retaliation (Fitzgerald and Ormerod 1993). Instead, they may try to live with the situation or leave their jobs and risk serious financial hardship, a disruption of their work histories and problems with references (Koss et al. 1994).
Sexual harassment reduces job satisfaction and increases turnover, so that it has costs for the employer (Gutek 1985; Fitzgerald and Ormerod 1993; Kauppinen-Toropainen and Gruber 1993). Like other workplace stressors, it also can have negative effects on health that are sometimes quite serious. When the harassment is severe, as with rape or attempted rape, women are seriously traumatized. Even where sexual harassment is less severe, women can have psychological problems: they may become fearful, guilty and ashamed, depressed, nervous and less self-confident. They may have physical symptoms such as stomach-aches, headaches or nausea. They may have behavioural problems such as sleeplessness, over- or undereating, sexual problems and difficulties in their relations with others (Swanson et al. 1997).
Both the formal American and informal European approaches to combating harassment provide illustrative lessons (Bernstein 1994). In Europe, sexual harassment is sometimes dealt with by conflict resolution approaches that bring in third parties to help eliminate the harassment (e.g., England’s “challenge technique”). In the United States, sexual harassment is a legal wrong that provides victims with redress through the courts, although success is difficult to achieve. Victims of harassment also need to be supported through counselling, where needed, and helped to understand that they are not to blame for the harassment.
Prevention is the key to combating sexual harassment. Guidelines encouraging prevention have been promulgated through the European Commission Code of Practice (Rubenstein and DeVries 1993). They include the following: clear anti-harassment policies that are effectively communicated; special training and education for managers and supervisors; a designated ombudsperson to deal with complaints; formal grievance procedures and alternatives to them; and disciplinary treatment of those who violate the policies. Bernstein (1994) has suggested that mandated self-regulation may be a viable approach.
Finally, sexual harassment needs to be openly discussed as a workplace issue of legitimate concern to women and men. Trade unions have a critical role to play in helping place this issue on the public agenda. Ultimately, an end to sexual harassment requires that men and women reach social and economic equality and full integration in all occupations and workplaces.
The nature, prevalence, predictors and possible consequences of workplace violence have begun to attract the attention of labour and management practitioners, and researchers. The reason for this is the increasing occurrence of highly visible workplace murders. Once the focus is placed on workplace violence, it becomes clear that there are several issues, including the nature (or definition), prevalence, predictors, consequences and ultimately prevention of workplace violence.
Definition and Prevalence of Workplace Violence
The definition and prevalence of workplace violence are integrally related.
Consistent with the relative recency with which workplace violence has attracted attention, there is no uniform definition. This is an important issue for several reasons. First, until a uniform definition exists, any estimates of prevalence remain incomparable across studies and sites. Secondly, the nature of the violence is linked to strategies for prevention and interventions. For example, focusing on all instances of shootings within the workplace includes incidents that reflect the continuation of family conflicts, as well as those that reflect work-related stressors and conflicts. While employees would no doubt be affected in both situations, the control the organization has over the former is more limited, and hence the implications for interventions are different from those situations in which workplace shootings are a direct function of workplace stressors and conflicts.
Some statistics suggest that workplace murders are the fastest growing form of murder in the United States (for example, Anfuso 1994). In some jurisdictions (for example, New York State), murder is the modal cause of death in the workplace. Because of statistics such as these, workplace violence has attracted considerable attention recently. However, early indications suggest that those acts of workplace violence with the highest visibility (for example, murder, shootings) attract the greatest research scrutiny, but also occur with the least frequency. In contrast, verbal and psychological aggression against supervisors, subordinates and co-workers is far more common, but attracts less attention. Supporting the notion of a close integration between definitional and prevalence issues, this would suggest that what is being studied in most cases is aggression rather than violence in the workplace.
Predictors of Workplace Violence
A reading of the literature on the predictors of workplace violence reveals that most of the attention has been focused on developing a “profile” of the potentially violent or “disgruntled” employee (for example, Mantell and Albrecht 1994; Slora, Joy and Terris 1991). These profiles typically identify the following as the salient personal characteristics of such an employee: white, male, aged 20-35, a “loner”, a probable alcohol problem and a fascination with guns. Aside from the number of false-positive identifications this approach would lead to, it is also based on identifying individuals predisposed to the most extreme forms of violence, and it ignores the larger group involved in most of the aggressive and less violent workplace incidents.
Going beyond “demographic” characteristics, there are suggestions that some of the personal factors implicated in violence outside of the workplace would extend to the workplace itself. Thus, inappropriate use of alcohol, general history of aggression in one’s current life or family of origin, and low self-esteem have been implicated in workplace violence.
A more recent strategy has been to identify the workplace conditions under which workplace violence is most likely to occur: identifying the physical and psychosocial conditions in the workplace. While the research on psychosocial factors is still in its infancy, it would appear as though feelings of job insecurity, perceptions that organizational policies and their implementation are unjust, harsh management and supervision styles, and electronic monitoring are associated with workplace aggression and violence (United States House of Representatives 1992; Fox and Levin 1994).
Cox and Leather (1994) look to the predictors of aggression and violence in general in their attempt to understand the physical factors that predict workplace violence. In this respect, they suggest that workplace violence may be associated with perceived crowding, and extreme heat and noise. However, these suggestions about the causes of workplace violence await empirical scrutiny.
Consequences of Workplace Violence
The research to date suggests that there are primary and secondary victims of workplace violence, both of which are worthy of research attention. Bank tellers or store clerks who are held up and employees who are assaulted at work by current or former co-workers are the obvious or direct victims of violence at work. However, consistent with the literature showing that much human behaviour is learned from observing others, witnesses to workplace violence are secondary victims. Both groups might be expected to suffer negative effects, and more research is needed to focus on the way in which both aggression and violence at work affect primary and secondary victims.
Prevention of Workplace Violence
Most of the literature on the prevention of workplace violence focuses at this stage on prior selection, i.e., the prior identification of potentially violent individuals for the purpose of excluding them from employment in the first instance (for example, Mantell and Albrecht 1994). Such strategies are of dubious utility, for ethical and legal reasons. From a scientific perspective, it is equally doubtful whether we could identify potentially violent employees with sufficient precision (e.g., without an unacceptably high number of false-positive identifications). Clearly, we need to focus on workplace issues and job design for a preventive approach. Following Fox and Levin’s (1994) reasoning, ensuring that organizational policies and procedures are characterized by perceived justice will probably constitute an effective prevention technique.
Conclusion
Research on workplace violence is in its infancy, but gaining increasing attention. This bodes well for the further understanding, prediction and control of workplace aggression and violence.
Downsizing, layoffs, re-engineering, reshaping, reduction in force (RIF), mergers, early retirement and outplacement: terms describing these increasingly familiar changes have become commonplace jargon around the world in the past two decades. As companies have fallen on hard times, workers at all organizational levels have been let go and many remaining jobs have been altered. The job loss count in a single year (1992–93) includes Eastman Kodak, 2,000; Siemens, 13,000; Daimler-Benz, 27,000; Phillips, 40,000; and IBM, 65,000 (The Economist 1993). Job cuts have occurred at companies earning healthy profits as well as at firms faced with the need to cut costs. The trend of cutting jobs and changing the way remaining jobs are performed is expected to continue even after worldwide economic growth returns.
Why has losing and changing jobs become so widespread? There is no simple answer that fits every organization or situation. However, one or more of a number of factors is usually implicated, including lost market share, increasing international and domestic competition, increasing labour costs, obsolete plant and technologies and poor managerial practices. These factors have resulted in managerial decisions to slim down, re-engineer jobs and alter the psychological contract between the employer and the worker.
A work situation in which an employee could count on job security or the opportunity to hold multiple positions via career-enhancing promotions in a single firm has changed drastically. Similarly, the binding power of the traditional employer-worker psychological contract has weakened as millions of managers and non-managers have been let go. Japan was once famous for providing “lifetime” employment to individuals. Today, even in Japan, a growing number of workers, especially in large firms, are not assured of lifetime employment. The Japanese, like their counterparts across the world, are facing what can be referred to as increased job insecurity and an ambiguous picture of what the future holds.
Job Insecurity: An Interpretation
Maslow (1954), Herzberg, Mausner and Snyderman (1959) and Super (1957) have proposed that individuals have a need for safety or security. That is, individual workers sense security when holding a permanent job or when being able to control the tasks performed on the job. Unfortunately, there has been a limited number of empirical studies that have thoroughly examined the job security needs of workers (Kuhnert and Pulmer 1991; Kuhnert, Sims and Lahey 1989).
On the other hand, with the increased attention that is being paid to downsizing, layoffs and mergers, more researchers have begun to investigate the notion of job insecurity. The nature, causes and consequences of job insecurity have been considered by Greenhalgh and Rosenblatt (1984), who define job insecurity as “perceived powerlessness to maintain desired continuity in a threatened job situation”. In Greenhalgh and Rosenblatt’s framework, job insecurity is considered a part of a person’s environment. In the stress literature, job insecurity is considered to be a stressor that introduces a threat that is interpreted and responded to by an individual. An individual’s interpretation and response could include decreased effort to perform well, feeling ill or below par, seeking employment elsewhere, increased coping to deal with the threat, or seeking more colleague interaction to buffer the feelings of insecurity.
Lazarus’ theory of psychological stress (Lazarus 1966; Lazarus and Folkman 1984) is centred on the concept of cognitive appraisal. Regardless of the actual severity of the danger facing a person, the occurrence of psychological stress depends upon the individual’s own evaluation of the threatening situation (here, job insecurity).
Selected Research on Job Insecurity
Unfortunately, like the research on job security, there is a paucity of well-designed studies of job insecurity. Furthermore, the majority of job insecurity studies incorporate unitary measurement methods. Few researchers examining stressors in general or job insecurity specifically have adopted a multiple-level approach to assessment. This is understandable because of the limitations of resources. However, the problems created by unitary assessments of job insecurity have resulted in a limited understanding of the construct. There are available to researchers four basic methods of measuring job insecurity: self-report, performance, psychophysiological and biochemical. It is still debatable whether these four types of measure assess different aspects of the consequences of job insecurity (Baum, Grunberg and Singer 1982). Each type of measure has limitations that must be recognized.
In addition to measurement problems in job insecurity research, it must be noted that most studies have concentrated on imminent or actual job loss. As researchers have noted (Greenhalgh and Rosenblatt 1984; Roskies and Louis-Guerin 1990), more attention should be paid to “concern about a significant deterioration in terms and conditions of employment.” The deterioration of working conditions would logically seem to play a role in a person’s attitudes and behaviours.
Brenner (1987) has discussed the relationship between a job insecurity factor, unemployment, and mortality. He proposed that uncertainty, or the threat of instability, rather than unemployment itself causes higher mortality. The threat of being unemployed or losing control of one’s job activities can be powerful enough to contribute to psychiatric problems.
In a study of 1,291 managers, Roskies and Louis-Guerin (1990) examined the perceptions of workers facing layoffs, as well as those of managerial personnel working in stable, growth-oriented firms. A minority of managers were stressed about imminent job loss. However, a substantial number of managers were more stressed about a deterioration in working conditions and long-term job security.
Roskies, Louis-Guerin and Fournier (1993) proposed in a research study that job insecurity may be a major psychological stressor. In this study of personnel in the airline industry, the researchers determined that personality disposition (positive and negative) plays a role in the impact of job insecurity on the mental health of workers.
Addressing the Problem of Job Insecurity
Organizations have numerous alternatives to downsizing, layoffs and reduction in force. Displaying compassion that clearly shows that management realizes the hardships that job loss and future job ambiguity pose is an important step. Alternatives such as reduced work weeks, across-the-board salary cuts, attractive early retirement packages, retraining existing employees and voluntary layoff programmes can be implemented (Wexley and Silverman 1993).
The global marketplace has increased job demands and job skill requirements. For some people, the effect of increased job demands and job skill requirements will provide career opportunities. For others, these changes could exacerbate the feelings of job insecurity. It is difficult to pinpoint exactly how individual workers will respond. However, managers must be aware of how job insecurity can result in negative consequences. Furthermore, managers need to acknowledge and respond to job insecurity. But possessing a better understanding of the notion of job insecurity and its potential negative impact on the performance, behaviour and attitudes of workers is a step in the right direction for managers.
It will obviously require more rigorous research to better understand the full range of consequences of job insecurity among selected workers. As additional information becomes available, managers need to be open-minded about attempting to help workers cope with job insecurity. Redefining the way work is organized and executed should become a useful alternative to traditional job design methods. Managers have a responsibility:
Since job insecurity is likely to remain a perceived threat for many, but not all, workers, managers need to develop and implement strategies to address this factor. The institutional costs of ignoring job insecurity are too great for any firm to accept. Whether managers can efficiently deal with workers who feel insecure about their jobs and working conditions is fast becoming a measure of managerial competency.
The term unemployment describes the situation of individuals who desire to work but are unable to trade their skills and labour for pay. It is used to indicate either an individual’s personal experience of failure to find gainful work, or the experience of an aggregate in a community, a geographic region or a country. The collective phenomenon of unemployment is often expressed as the unemployment rate, that is, the number of people who are seeking work divided by the total number of people in the labour force, which in turn consists of both the employed and the unemployed. Individuals who desire to work for pay but have given up their efforts to find work are termed discouraged workers. These persons are not listed in official reports as members of the group of unemployed workers, for they are no longer considered to be part of the labour force.
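The definitions in this paragraph translate directly into a simple calculation, sketched below. The function name and input figures are illustrative assumptions; the employed figure is chosen only so that the result is of the same order as the national rates quoted in the following paragraph.

```python
def unemployment_rate(employed: int, unemployed: int) -> float:
    """Unemployment rate: people seeking work divided by the labour force.

    The labour force is the employed plus the unemployed; discouraged
    workers who have given up the search are excluded from both groups.
    """
    labour_force = employed + unemployed
    return 100 * unemployed / labour_force

# Illustrative figures: 8 million people seeking work in a labour force of
# about 131 million gives a rate of roughly 6.1%.
print(round(unemployment_rate(employed=123_000_000, unemployed=8_000_000), 1))
```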
The Organization for Economic Cooperation and Development (OECD) provides statistical information on the magnitude of unemployment in 25 countries around the world (OECD 1995). These consist mostly of the economically developed countries of Europe and North America, as well as Japan, New Zealand and Australia. According to the report for the year 1994, the total unemployment rate in these countries was 8.1% (or 34.3 million individuals). In the developed countries of central and western Europe, the unemployment rate was 9.9% (11 million), in the southern European countries 13.7% (9.2 million), and in the United States 6.1% (8 million). Of the 25 countries studied, only six (Austria, Iceland, Japan, Mexico, Luxembourg and Switzerland) had an unemployment rate below 5%. The report projected only a slight overall decrease (less than one-half of 1%) in unemployment for the years 1995 and 1996. These figures suggest that millions of individuals will continue to be vulnerable to the harmful effects of unemployment in the foreseeable future (Reich 1991).
A large number of people become unemployed at various periods during their lives. Depending on the structure of the economy and on its cycles of expansion and contraction, unemployment may strike students who drop out of school; those who have graduated from high school, trade school or college but find it difficult to enter the labour market for the first time; women seeking to return to gainful employment after raising their children; veterans of the armed services; and older persons who want to supplement their income after retirement. However, at any given time, the largest segment of the unemployed population, usually between 50 and 65%, consists of displaced workers who have lost their jobs. The problems associated with unemployment are most visible in this segment of the unemployed partly because of its size. Unemployment is also a serious problem for minorities and younger persons. Their unemployment rates are often two to three times higher than that of the general population (USDOL 1995).
The fundamental causes of unemployment are rooted in demographic, economic and technological changes. The restructuring of local and national economies usually gives rise to at least temporary periods of high unemployment rates. The trend towards the globalization of markets, coupled with accelerated technological changes, results in greater economic competition and the transfer of industries and services to new places that supply more advantageous economic conditions in terms of taxation, a cheaper labour force and more accommodating labour and environmental laws. Inevitably, these changes exacerbate the problems of unemployment in areas that are economically depressed.
Most people depend on the income from a job to provide themselves and their families with the necessities of life and to sustain their accustomed standard of living. When they lose a job, they experience a substantial reduction in their income. Mean duration of unemployment, in the United States for example, varies between 16 and 20 weeks, with a median between eight and ten weeks (USDOL 1995). If the period of unemployment that follows the job loss persists so that unemployment benefits are exhausted, the displaced worker faces a financial crisis. That crisis plays itself out as a cascading series of stressful events that may include loss of a car through repossession, foreclosure on a house, loss of medical care, and food shortages. Indeed, an abundance of research in Europe and the United States shows that economic hardship is the most consistent outcome of unemployment (Fryer and Payne 1986), and that economic hardship mediates the adverse impact of unemployment on various other outcomes, in particular, on mental health (Kessler, Turner and House 1988).
There is a great deal of evidence that job loss and unemployment produce significant deterioration in mental health (Fryer and Payne 1986). The most common outcomes of job loss and unemployment are increases in anxiety, somatic symptoms and depression symptomatology (Dooley, Catalano and Wilson 1994; Hamilton et al. 1990; Kessler, House and Turner 1987; Warr, Jackson and Banks 1988). Furthermore, there is some evidence that unemployment increases by over twofold the risk of onset of clinical depression (Dooley, Catalano and Wilson 1994). In addition to the well-documented adverse effects of unemployment on mental health, there is research that implicates unemployment as a contributing factor to other outcomes (see Catalano 1991 for a review). These outcomes include suicide (Brenner 1976), separation and divorce (Stack 1981; Liem and Liem 1988), child neglect and abuse (Steinberg, Catalano and Dooley 1981), alcohol abuse (Dooley, Catalano and Hough 1992; Catalano et al. 1993a), violence in the workplace (Catalano et al. 1993b), criminal behaviour (Allan and Steffensmeier 1989), and highway fatalities (Leigh and Waldon 1991). Finally, there is also some evidence, based primarily on self-report, that unemployment contributes to physical illness (Kessler, House and Turner 1987).
The adverse effects of unemployment on displaced workers are not limited to the period during which they have no jobs. In most instances, when workers become re-employed, their new jobs are significantly worse than the jobs they lost. Even after four years in their new positions, their earnings are substantially lower than those of similar workers who were not laid off (Ruhm 1991).
Because the fundamental causes of job loss and unemployment are rooted in societal and economic processes, remedies for their adverse social effects must be sought in comprehensive economic and social policies (Blinder 1987). At the same time, various community-based programmes can be undertaken to reduce the negative social and psychological impact of unemployment at the local level. There is overwhelming evidence that re-employment reduces distress and depression symptoms and restores psychosocial functioning to pre-unemployment levels (Kessler, Turner and House 1989; Vinokur, Caplan and Williams 1987). Therefore, programmes for displaced workers or others who wish to become employed should be aimed primarily at promoting and facilitating their re-employment or new entry into the labour force. A variety of such programmes have been tried successfully. Among these are special community-based intervention programmes for creating new ventures that in turn generate job opportunities (e.g., Last et al. 1995), and others that focus on retraining (e.g., Wolf et al. 1995).
Of the various programmes that attempt to promote re-employment, the most common are job search programmes organized as job clubs that attempt to intensify job search efforts (Azrin and Beasalel 1982), or workshops that focus more broadly on enhancing job search skills and facilitating transition into re-employment in high-quality jobs (e.g., Caplan et al. 1989). Cost/benefit analyses have demonstrated that these job search programmes are cost effective (Meyer 1995; Vinokur et al. 1991). Furthermore, there is also evidence that they could prevent deterioration in mental health and possibly the onset of clinical depression (Price, van Ryn and Vinokur 1992).
Similarly, in the case of organizational downsizing, industries can reduce the scope of unemployment by devising ways to involve workers in the decision-making process regarding the management of the downsizing programme (Kozlowski et al. 1993; London 1995; Price 1990). Workers may choose to pool their resources and buy out the industry, thus avoiding layoffs; to reduce working hours to spread and even out the reduction in force; to agree to a reduction in wages to minimize layoffs; to retrain and/or relocate to take new jobs; or to participate in outplacement programmes. Employers can facilitate the process by timely implementation of a strategic plan that offers the above-mentioned programmes and services to workers at risk of being laid off. As has been indicated already, unemployment leads to pernicious outcomes at both the personal and societal level. A combination of comprehensive government policies, flexible downsizing strategies by business and industry, and community-based programmes can help to mitigate the adverse consequences of a problem that will continue to affect the lives of millions of people for years to come.
One of the more remarkable social transformations of this century was the emergence of a powerful Japanese economy from the debris of the Second World War. Fundamental to this climb to global competitiveness were a commitment to quality and a determination to prove false the then-common belief that Japanese goods were shoddy and worthless. Guided by the innovative teachings of Deming (1993), Juran (1988) and others, Japanese managers and engineers adopted practices that have ultimately evolved into a comprehensive management system rooted in the basic concept of quality. Fundamentally, this system represents a shift in thinking. The traditional view was that quality had to be balanced against the cost of attaining it. The view that Deming and Juran urged was that higher quality led to lower total cost and that a systems approach to improving work processes would help in attaining both of these objectives. Japanese managers adopted this management philosophy, engineers learned and practised statistical quality control, workers were trained and involved in process improvement, and the outcome was dramatic (Ishikawa 1985; Imai 1986).
By 1980, alarmed at the erosion of their markets and seeking to broaden their reach in the global economy, European and American managers began to search for ways to regain a competitive position. In the ensuing 15 years, more and more companies came to understand the principles underlying quality management and to apply them, initially in industrial production and later in the service sector as well. While there are a variety of names for this management system, the most commonly used is total quality management or TQM; an exception is the health care sector, which more frequently uses the term continuous quality improvement, or CQI. Recently, the term business process reengineering (BPR) has also come into use, but this tends to mean an emphasis on specific techniques for process improvement rather than on the adoption of a comprehensive management system or philosophy.
TQM is available in many “flavours,” but it is important to understand it as a system that includes both a management philosophy and a powerful set of tools for improving the efficiency of work processes. Some of the common elements of TQM include the following (Feigenbaum 1991; Mann 1989; Senge 1991):
Typically, organizations successfully adopting TQM find they must make changes on three fronts.
One is transformation. This involves such actions as defining and communicating a vision of the organization’s future, changing the management culture from top-down oversight to one of employee involvement, fostering collaboration instead of competition and refocusing the purpose of all work on meeting customer requirements. Seeing the organization as a system of interrelated processes is at the core of TQM, and is an essential means of securing a totally integrated effort towards improving performance at all levels. All employees must know the vision and the aim of the organization (the system) and understand where their work fits in it, or no amount of training in applying TQM process improvement tools can do much good. However, lack of genuine change of organizational culture, particularly among lower echelons of managers, is frequently the downfall of many nascent TQM efforts; Heilpern (1989) observes, “We have come to the conclusion that the major barriers to quality superiority are not technical, they are behavioural.” Unlike earlier, flawed “quality circle” programmes, in which improvement was expected to “convect” upward, TQM demands top management leadership and the firm expectation that middle management will facilitate employee participation (Hill 1991).
A second basis for successful TQM is strategic planning. The achievement of an organization’s vision and goals is tied to the development and deployment of a strategic quality plan. One corporation defined this as “a customer-driven plan for the application of quality principles to key business objectives and the continuous improvement of work processes” (Yarborough 1994). It is senior management’s responsibility—indeed, its obligation to workers, stockholders and beneficiaries alike—to link its quality philosophy to sound and feasible goals that can reasonably be attained. Deming (1993) called this “constancy of purpose” and saw its absence as a source of insecurity for the workforce of the organization. The fundamental intent of strategic planning is to align the activities of all of the people throughout the company or organization so that it can achieve its core goals and can react with agility to a changing environment. It is evident that it both requires and reinforces the need for widespread participation of supervisors and workers at all levels in shaping the goal-directed work of the company (Shiba, Graham and Walden 1994).
Only when these two changes are adequately carried out can one hope for success in the third: the implementation of continuous quality improvement. Quality outcomes, and with them customer satisfaction and improved competitive position, ultimately rest on widespread deployment of process improvement skills. Often, TQM programmes accomplish this through increased investments in training and through assignment of workers (frequently volunteers) to teams charged with addressing a problem. A basic concept of TQM is that the person most likely to know how a job can be done better is the person who is doing it at a given moment. Empowering these workers to make useful changes in their work processes is a part of the cultural transformation underlying TQM; equipping them with knowledge, skills and tools to do so is part of continuous quality improvement.
The collection of statistical data is a typical and basic step taken by workers and teams to understand how to improve work processes. Deming and others adapted their techniques from the seminal work of Shewhart in the 1920s (Schmidt and Finnigan 1992). Among the most useful TQM tools are: (a) the Pareto Chart, a graphical device for identifying the more frequently occurring problems, and hence the ones to be addressed first; (b) the statistical control chart, an analytic tool for ascertaining the degree of variability in the unimproved process; and (c) flow charting, a means to document exactly how the process is carried out at present. Possibly the most ubiquitous and important tool is the Ishikawa Diagram (or “fishbone” diagram), whose invention is credited to Kaoru Ishikawa (1985). This instrument is a simple but effective way by which team members can collaborate on identifying the root causes of the process problem under study, and thus point the path to process improvement.
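As a rough illustration of two of these tools, the sketch below ranks defect categories by frequency (the data behind a Pareto chart) and computes simple three-sigma limits for a process measure. The sample data and the use of the sample standard deviation are textbook-style assumptions, not details taken from this article.

```python
from statistics import mean, stdev

def pareto_order(problem_counts: dict) -> list:
    """Rank problem categories by frequency, most common first (Pareto chart data)."""
    return sorted(problem_counts.items(), key=lambda item: item[1], reverse=True)

def control_limits(samples: list) -> tuple:
    """Rough three-sigma limits around the mean of a process measure."""
    centre = mean(samples)
    spread = 3 * stdev(samples)
    return centre - spread, centre, centre + spread

# Illustrative defect counts: the Pareto ordering says address "mislabelled" first.
defects = {"mislabelled": 42, "scratched": 17, "wrong size": 9, "missing part": 5}
print(pareto_order(defects))

# Illustrative cycle times (seconds): lower limit, centre line, upper limit.
cycle_times = [31.2, 29.8, 30.5, 32.1, 30.0, 29.5, 31.0]
print(control_limits(cycle_times))
```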
TQM, effectively implemented, may be important to workers and worker health in many ways. For example, the adoption of TQM can have an indirect influence. In a very basic sense, an organization that makes a quality transformation has arguably improved its chances of economic survival and success, and hence those of its employees. Moreover, it is likely to be one where respect for people is a basic tenet. Indeed, TQM experts often speak of “shared values”, those things that must be exemplified in the behaviour of both management and workers. These are often publicized throughout the organization as formal values statements or aspiration statements, and typically include such emotive language as “trust”, “respecting each other”, “open communications”, and “valuing our diversity” (Howard 1990).
Thus, it is tempting to suppose that quality workplaces will be “worker-friendly”—where worker-improved processes become less hazardous and where the climate is less stressful. The logic of quality is to build quality into a product or service, not to detect failures after the fact. It can be summed up in a word—prevention (Widfeldt and Widfeldt 1992). Such a logic is clearly compatible with the public health logic of emphasizing prevention in occupational health. As Williams (1993) points out in a hypothetical example, “If the quality and design of castings in the foundry industry were improved there would be reduced exposure ... to vibration as less finishing of castings would be needed.” Some anecdotal support for this supposition comes from satisfied employers who cite trend data on job health measures, climate surveys that show better employee satisfaction, and more numerous safety and health awards in facilities using TQM. Williams further presents two case studies in UK settings that exemplify such employer reports (Williams 1993).
Unfortunately, virtually no published studies offer firm evidence on the matter. What is lacking is a research base of controlled studies that document health outcomes, consider the possibility of detrimental as well as positive health influences, and link all of this causally to measurable factors of business philosophy and TQM practice. Given the significant prevalence of TQM enterprises in the global economy of the 1990s, this is a research agenda with genuine potential to define whether TQM is in fact a supportive tool in the prevention armamentarium of occupational safety and health.
We are on somewhat firmer ground to suggest that TQM can have a direct influence on worker health when it explicitly focuses quality improvement efforts on safety and health. Obviously, like all other work in an enterprise, occupational and environmental health activity is made up of interrelated processes, and the tools of process improvement are readily applied to them. One of the criteria against which candidates are examined for the Baldrige Award, the most important competitive honour granted to US organizations, is the competitor’s improvements in occupational health and safety. Yarborough has described how the occupational and environmental health (OEH) employees of a major corporation were instructed by senior management to adopt TQM with the rest of the company and how OEH was integrated into the company’s strategic quality plan (Yarborough 1994). The chief executive of a US utility that was the first non-Japanese company ever to win Japan’s coveted Deming Prize notes that safety was accorded a high priority in the TQM effort: “Of all the company’s major quality indicators, the only one that addresses the internal customer is employee safety.” By defining safety as a process, subjecting it to continuous improvement, and tracking lost-time injuries per 100 employees as a quality indicator, the utility reduced its injury rate by half, reaching the lowest point in the history of the company (Hudiberg 1991).
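As an illustration of such a quality indicator, the sketch below computes a lost-time injury rate per 100 full-time-equivalent employees using the conventional 200,000-hour base (100 workers at 40 hours per week for 50 weeks); the figures are hypothetical and the formula is the customary one rather than a detail reported for the utility described above.

```python
# A minimal sketch of the conventional lost-time injury rate calculation
# (injuries per 100 full-time-equivalent employees per year). The 200,000-hour
# base (100 workers x 40 hours x 50 weeks) is the usual convention and the
# figures below are hypothetical, not data from the case described above.
def lost_time_injury_rate(injuries: int, hours_worked: float) -> float:
    """Lost-time injuries per 100 full-time-equivalent employees."""
    return injuries * 200_000 / hours_worked

# Example: 18 lost-time injuries over 2.4 million hours worked in a year.
print(f"{lost_time_injury_rate(18, 2_400_000):.2f} injuries per 100 employees")
```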
In summary, TQM is a comprehensive management system grounded in a management philosophy that emphasizes the human dimensions of work. It is supported by a powerful set of technologies that use data derived from work processes to document, analyse and continuously improve these processes.
Selye (1974) suggested that having to live with other people is one of the most stressful aspects of life. Good relations between members of a work group are considered a central factor in individual and organizational health (Cooper and Payne 1988), particularly in terms of the boss–subordinate relationship. Poor relationships at work are defined as having “low trust, low levels of supportiveness and low interest in problem solving within the organization” (Cooper and Payne 1988). Mistrust is positively correlated with high role ambiguity, which leads to inadequate interpersonal communication between individuals and to psychological strain in the form of low job satisfaction, decreased well-being and a feeling of being threatened by one’s superior and colleagues (Kahn et al. 1964; French and Caplan 1973).
Supportive social relationships at work are less likely to create the interpersonal pressures associated with rivalry, office politics and unconstructive competition (Cooper and Payne 1991). McLean (1979) suggests that social support in the form of group cohesion, interpersonal trust and liking for a superior is associated with decreased levels of perceived job stress and better health. Inconsiderate behaviour on the part of a supervisor appears to contribute significantly to feelings of job pressure (McLean 1979). Close supervision and rigid performance monitoring also have stressful consequences—in this connection a great deal of research has been carried out which indicates that a managerial style characterized by lack of effective consultation and communication, unjustified restrictions on employee behaviour, and lack of control over one’s job is associated with negative psychological moods and behavioural responses (for example, escapist drinking and heavy smoking) (Caplan et al. 1975), increased cardiovascular risk (Karasek 1979) and other stress-related manifestations. On the other hand, offering broader opportunities to employees to participate in decision making at work can result in improved performance, lower staff turnover and improved levels of mental and physical well-being. A participatory style of management should also extend to worker involvement in the improvement of safety in the workplace; this could help to overcome apathy among blue-collar workers, which is acknowledged as a significant factor in the cause of accidents (Robens 1972; Sutherland and Cooper 1986).
Early work on the relationship between managerial style and stress was carried out by Lewin and his colleagues (for example, Lewin, Lippitt and White 1939), who documented the stressful and unproductive effects of authoritarian management styles. More recently, Karasek’s (1979) work highlights the importance of managers’ providing workers with greater control at work or a more participative management style. In a six-year prospective study he demonstrated that job control (i.e., the freedom to use one’s intellectual discretion) and work schedule freedom were significant predictors of risk of coronary heart disease. Restriction of opportunity for participation and autonomy results in increased depression, exhaustion, illness rates and pill consumption. Feelings of being unable to make changes concerning a job and lack of consultation are commonly reported stressors among blue-collar workers in the steel industry (Kelly and Cooper 1981), oil and gas workers on rigs and platforms in the North Sea (Sutherland and Cooper 1986) and many other blue-collar workers (Cooper and Smith 1985). On the other hand, as Gowler and Legge (1975) indicate, a participatory management style can create its own potentially stressful situations, for example, a mismatch of formal and actual power, resentment of the erosion of formal power, conflicting pressures both to be participative and to meet high production standards, and subordinates’ refusal to participate.
Although research has focused substantially on the effects of authoritarian versus participatory management styles on employee performance and health, there have also been other, idiosyncratic approaches to managerial style (Jennings, Cox and Cooper 1994). For example, Levinson (1978) has focused on the impact of the “abrasive” manager. Abrasive managers are usually achievement-oriented, hard-driving and intelligent (similar to the type A personality), but function less well at the emotional level. As Quick and Quick (1984) point out, the need for perfection, the preoccupation with self and the condescending, critical style of the abrasive manager induce feelings of inadequacy among such managers’ subordinates. As Levinson suggests, the abrasive personality is difficult and stressful to deal with as a peer, but as a superior the abrasive manager can be very damaging to interpersonal relationships and highly stressful for subordinates in the organization.
In addition, there are theories and research which suggest that the effect on employee health and safety of managerial style and personality can only be understood in the context of the nature of the task and the power of the manager or leader. For example, Fiedler’s (1967) contingency theory suggests that there are eight main group situations based upon combinations of dichotomies: (a) the warmth of the relations between the leader and follower; (b) the level of structure imposed by the task; and (c) the power of the leader. The eight combinations can be arranged in a continuum with, at one end (octant one), a leader who has good relations with members, facing a highly structured task and possessing strong power; and, at the other end (octant eight), a leader who has poor relations with members, facing a loosely structured task and having low power. In terms of stress, it could be argued that the octants form a continuum from low stress to high stress. Fiedler also examined two types of leader: the leader who would value negatively most of the characteristics of the member he liked least (the low LPC leader) and the leader who would see many positive qualities even in the members whom he disliked (the high LPC leader). Fiedler made specific predictions about the performance of the leader. He suggested that the low LPC leader (who had difficulty in seeing merits in subordinates he disliked) would be most effective in octants one and eight, where there would be very low and very high levels of stress, respectively. On the other hand, a high LPC leader (who is able to see merits even in those he disliked) would be more effective in the middle octants, where moderate stress levels could be expected. In general, subsequent research (for example, Strube and Garcia 1981) has supported Fiedler’s ideas.
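A short sketch may make the structure of the model concrete: it enumerates the eight group situations from the three dichotomies named above. Only the two end points (octants one and eight) are fixed by the description here; the ordering of the intermediate octants shown is simply the nested product of the dichotomies and is offered as an illustrative assumption.

```python
# A minimal sketch enumerating Fiedler's eight group situations ("octants")
# from the three dichotomies described in the text. The ordering of the
# intermediate octants is an illustrative assumption; only octants one and
# eight are fixed by the description above.
from itertools import product

relations = ["good leader-member relations", "poor leader-member relations"]
structure = ["highly structured task", "loosely structured task"]
power = ["strong leader power", "weak leader power"]

for octant, combination in enumerate(product(relations, structure, power), 1):
    print(f"octant {octant}: " + ", ".join(combination))
```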
Additional leadership theories suggest that task-oriented managers or leaders create stress. Seltzer, Numerof and Bass (1989) found that intellectually stimulating leaders increased perceived stress and “burnout” among their subordinates. Misumi (1985) found that production-oriented leaders generated physiological symptoms of stress. Bass (1992) found that, in laboratory experiments, production-oriented leadership causes higher levels of anxiety and hostility. On the other hand, transformational and charismatic leadership theories (Burns 1978) focus upon the effect which such leaders have upon their subordinates, who are generally more self-assured and perceive more meaning in their work. These types of leader or manager have been found to reduce the stress levels of their subordinates.
On balance, therefore, managers who tend to demonstrate “considerate” behaviour, to have a participative management style, to be less production- or task-oriented and to provide subordinates with control over their jobs are likely to reduce the incidence of ill health and accidents at work.
Most of the articles in this chapter deal with aspects of the work environment that are proximal to the individual employee. The focus of this article, however, is to examine the impact of more distal, macrolevel characteristics of organizations as a whole that may affect employees’ health and well-being. That is, are there ways in which organizations structure their internal environments that promote health among the employees of that organization or, conversely, place employees at greater risk of experiencing stress? Most theoretical models of occupational or job stress incorporate organizational structural variables such as organizational size, lack of participation in decision making, and formalization (Beehr and Newman 1978; Kahn and Byosiere 1992).
Organizational structure refers to the formal distribution of work roles and functions within an organization, coordinating its various functions or subsystems so that the organization’s goals are attained efficiently (Porras and Robertson 1992). As such, structure represents a coordinated set of subsystems that facilitate the accomplishment of the organization’s goals and mission, and it defines the division of labour, the authority relationships, formal lines of communication, the roles of each organizational subsystem and the interrelationships among these subsystems. Therefore, organizational structure can be viewed as a system of formal mechanisms that enhance the understandability of events, the predictability of events and control over events within the organization, which Sutton and Kahn (1987) proposed as the three work-relevant antidotes against the stress-strain effect in organizational life.
One of the earliest organizational characteristics examined as a potential risk factor was organizational size. Contrary to the literature on risk of exposure to hazardous agents in the work environment, which suggests that larger organizations or plants are safer, being less hazardous and better equipped to handle potential hazards (Emmett 1991), larger organizations originally were hypothesized to put employees at greater risk of occupational stress. It was proposed that larger organizations tend to adopt a bureaucratic organizational structure to coordinate the increased complexity. This bureaucratic structure would be characterized by a division of labour based on functional specialization, a well-defined hierarchy of authority, a system of rules covering the rights and duties of job incumbents, impersonal treatment of workers and a system of procedures for dealing with work situations (Bennis 1969). On the surface, it would appear that many of these dimensions of bureaucracy would actually improve or maintain the predictability and understandability of events in the work environment and thus serve to reduce stress within the work environment. However, it also appears that these dimensions can reduce employees’ control over events in the work environment through a rigid hierarchy of authority.
Given these characteristics of bureaucratic structure, it is not surprising that organizational size, per se, has received no consistent support as a macro-organization risk factor (Kahn and Byosiere 1992). Payne and Pugh’s (1976) review, however, provides some evidence that organizational size indirectly increases the risk of stress. They report that larger organizations suffered a reduction in the amount of communication, an increase in the amount of job and task specifications and a decrease in coordination. These effects could lead to less understanding and predictability of events in the work environment as well as a decrease in control over work events, thus increasing experienced stress (Tetrick and LaRocco 1987).
These findings on organizational size have led to the supposition that the two aspects of organizational structure that seem to pose the most risk for employees are formalization and centralization. Formalization refers to the written procedures and rules governing employees’ activities, and centralization refers to the extent to which the decision-making power in the organization is narrowly distributed to higher levels in the organization. Pines (1982) pointed out that it is not formalization within a bureaucracy that results in experienced stress or burnout but the unnecessary red tape, paperwork and communication problems that can result from formalization. Rules and regulations can be vague, creating ambiguity or contradiction and resulting in conflict or lack of understanding concerning the appropriate actions to be taken in specific situations. If the rules and regulations are too detailed, employees may feel frustrated in their ability to achieve their goals, especially in customer- or client-oriented organizations. Inadequate communication can result in employees feeling isolated and alienated based on the lack of predictability and understanding of events in the work environment.
While these aspects of the work environment appear to be accepted as potential risk factors, the empirical literature on formalization and centralization is far from consistent. The lack of consistent evidence may stem from at least two sources. First, in many of the studies, there is an assumption of a single organizational structure having a consistent level of formalization and centralization throughout the entire organization. Hall (1969) concluded that organizations can be meaningfully studied as totalities; however, he demonstrated that the degree of formalization as well as decision-making authority can differ within organizational units. Therefore, if one is looking at an individual-level phenomenon such as occupational stress, it may be more meaningful to look at the structure of smaller organizational units than that of the whole organization. Secondly, there is some evidence suggesting that there are individual differences in response to structural variables. For example, Marino and White (1985) found that formalization was positively related to job stress among individuals with an internal locus of control and negatively related to stress among individuals who generally believe that they have little control over their environments. Lack of participation, on the other hand, was not moderated by locus of control and resulted in increased levels of job stress. There also appear to be some cultural differences affecting individual responses to structural variables, which would be important for multinational organizations having to operate across national boundaries (Peterson et al. 1995). These cultural differences also may explain the difficulty in adopting organizational structures and procedures from other nations.
Despite the rather limited empirical evidence implicating structural variables as psychosocial risk factors, it has been recommended that organizations should change their structures to be flatter with fewer levels of hierarchy or number of communication channels, more decentralized with more decision-making authority at lower levels in the organization and more integrated with less job specialization (Newman and Beehr 1979). These recommendations are consistent with organizational theorists who have suggested that traditional bureaucratic structure may not be the most efficient or healthiest form of organizational structure (Bennis 1969). This may be especially true in light of technological advances in production and communication that characterize the postindustrial workplace (Hirschhorn 1991).
The past two decades have seen considerable interest in the redesign of organizations to deal with external environmental threats resulting from increased globalization and international competition in North America and Western Europe (Whitaker 1991). Staw, Sandelands and Dutton (1988) proposed that organizations react to environmental threats by restricting information and constricting control. This can be expected to reduce the predictability, understandability and control of work events, thereby increasing the stress experienced by the employees of the organization. Therefore, structural changes that prevent these threat-rigidity effects would appear to be beneficial to both the organization’s and employees’ health and well-being.
The use of a matrix organizational structure is one approach for organizations to structure their internal environments in response to greater environmental instability. Baber (1983) describes the ideal type of matrix organization as one in which there are two or more intersecting lines of authority, organizational goals are achieved through the use of task-oriented work groups which are cross-functional and temporary, and functional departments continue to exist as mechanisms for routine personnel functions and professional development. Therefore, the matrix organization provides the organization with the needed flexibility to be responsive to environmental instability if the personnel have sufficient flexibility gained from the diversification of their skills and an ability to learn quickly.
While empirical research has yet to establish the effects of this organizational structure, several authors have suggested that the matrix organization may increase the stress experienced by employees. For example, Quick and Quick (1984) point out that the multiple lines of authority (task and functional supervisors) found in matrix organizations increase the potential for role conflict. Also, Hirschhorn (1991) suggests that with postindustrial work organizations, workers frequently face new challenges requiring them to take a learning role. This results in employees having to acknowledge their own temporary incompetencies and loss of control which can lead to increased stress. Therefore, it appears that new organizational structures such as the matrix organization also have potential risk factors associated with them.
Attempts to change or redesign organizations, regardless of the particular structure that an organization chooses to adopt, can have stress-inducing properties by disrupting security and stability, generating uncertainty for people’s position, role and status, and exposing conflict which must be confronted and resolved (Golembiewski 1982). These stress-inducing properties can be offset, however, by the stress-reducing properties of organizational development which incorporate greater empowerment and decision making across all levels in the organization, enhanced openness in communication, collaboration and training in team building and conflict resolution (Golembiewski 1982; Porras and Robertson 1992).
Conclusion
While the literature suggests that there are occupational risk factors associated with various organizational structures, the impact of these macrolevel aspects of organizations appears to be indirect. Organizational structure can provide a framework to enhance the predictability, understandability and control of events in the work environment; however, the effect of structure on employees’ health and well-being is mediated by more proximal work-environment characteristics such as role characteristics and interpersonal relations. Structuring organizations for healthy employees as well as healthy organizations requires organizational flexibility, worker flexibility and attention to the sociotechnical systems that coordinate the technological demands and the social structure within the organization.
The organizational context in which people work is characterized by numerous features (e.g., leadership, structure, rewards, communication) subsumed under the general concepts of organizational climate and culture. Climate refers to perceptions of organizational practices reported by people who work there (Rousseau 1988). Studies of climate include many of the most central concepts in organizational research. Common features of climate include communication (as describable, say, by openness), conflict (constructive or dysfunctional), leadership (as it involves support or focus) and reward emphasis (i.e., whether an organization is characterized by positive versus negative feedback, or reward- or punishment-orientation). When studied together, these organizational features prove to be highly interrelated (e.g., leadership and rewards). Climate characterizes practices at several levels in organizations (e.g., work unit climate and organizational climate). Studies of climate vary in the activities they focus upon, for example, climates for safety or climates for service. Climate is essentially a description of the work setting by those directly involved with it.
The relationship of climate to employee well-being (e.g., satisfaction, job stress and strain) has been widely studied. Since climate measures subsume the major organizational characteristics workers experience, virtually any study of employee perceptions of their work setting can be thought of as a climate study. Studies link climate features (particularly leadership, communication openness, participative management and conflict resolution) with employee satisfaction and (inversely) stress levels (Schneider 1985). Stressful organizational climates are characterized by limited participation in decisions, use of punishment and negative feedback (rather than rewards and positive feedback), conflict avoidance or confrontation (rather than problem solving), and nonsupportive group and leader relations. Socially supportive climates benefit employee mental health, with lower rates of anxiety and depression in supportive settings (Repetti 1987). When collective climates exist (where members who interact with each other share common perceptions of the organization) research observes that shared perceptions of undesirable organizational features are linked with low morale and instances of psychogenic illness (Colligan, Pennebaker and Murphy 1982). When climate research adopts a specific focus, as in the study of climate for safety in an organization, evidence is provided that lack of openness in communication regarding safety issues, few rewards for reporting occupational hazards, and other negative climate features increase the incidence of work-related accidents and injury (Zohar 1980).
Since climates exist at many levels in organizations and can encompass a variety of practices, assessment of employee risk factors needs to systematically span the relationships (whether in the work unit, the department or the entire organization) and activities (e.g., safety, communication or rewards) in which employees are involved. Climate-based risk factors can differ from one part of the organization to another.
Culture constitutes the values, norms and ways of behaving which organization members share. Researchers identify five basic elements of culture in organizations: fundamental assumptions (unconscious beliefs that shape members’ interpretations, e.g., views regarding time, environmental hostility or stability), values (preferences for certain outcomes over others, e.g., service or profit), behavioural norms (beliefs regarding appropriate and inappropriate behaviours, e.g., dress codes and teamwork), patterns of behaviours (observable recurrent practices, e.g., structured performance feedback and upward referral of decisions) and artefacts (symbols and objects used to express cultural messages, e.g., mission statements and logos). Cultural elements which are more subjective (i.e., assumptions, values and norms) reflect the way members think about and interpret their work setting. These subjective features shape the meaning that patterns of behaviours and artefacts take on within the organization. Culture, like climate, can exist at many levels within the organization.
Cultures can be strong (widely shared by members), weak (not widely shared), or in transition (characterized by gradual replacement of one culture by another).
In contrast with climate, culture is less frequently studied as a contributing factor to employee well-being or occupational risk. The absence of such research is due both to the relatively recent emergence of culture as a concept in organizational studies and to ideological debates regarding the nature of culture, its measurement (quantitative versus qualitative), and the appropriateness of the concept for cross-sectional study (Rousseau 1990). According to quantitative culture research focusing on behavioural norms and values, team-oriented norms are associated with higher member satisfaction and lower strain than are control- or bureaucratically oriented norms (Rousseau 1989). Furthermore, the extent to which the worker’s values are consistent with those of the organization affects stress and satisfaction (O’Reilly and Chatman 1991). Weak cultures and cultures fragmented by role conflict and member disagreement are found to provoke stress reactions and crises in professional identities (Meyerson 1990). The fragmentation or breakdown of organizational cultures due to economic or political upheavals affects the well-being of members psychologically and physically, particularly in the wake of downsizings, plant closings and other effects of concurrent organizational restructurings (Hirsch 1987). The appropriateness of particular cultural forms (e.g., hierarchic or militaristic) for modern society has been challenged by several culture studies (e.g., Hirschhorn 1984; Rousseau 1989) concerned with the stress and health-related outcomes of operators (e.g., nuclear power technicians and air traffic controllers) and subsequent risks for the general public.
Assessing risk factors in the light of information about organizational culture requires attention, first, to the extent to which organization members share or differ in basic beliefs, values and norms. Differences in function, location and education create subcultures within organizations and mean that culture-based risk factors can vary within the same organization. Since cultures tend to be stable and resistant to change, organizational history can aid the assessment of risk factors, both in terms of stable, ongoing cultural features and in terms of recent changes that can create stressors associated with turbulence (Hirsch 1987).
Climate and culture overlap to a certain extent, with perceptions of culture’s patterns of behaviour being a large part of what climate research addresses. However, organization members may describe organizational features (climate) in the same way but interpret them differently due to cultural and subcultural influences (Rosen, Greenhalgh and Anderson 1981). For example, structured leadership and limited participation in decision making may be viewed as negative and controlling from one perspective or as positive and legitimate from another. Social influence reflecting the organization’s culture shapes the interpretation members make of organizational features and activities. Thus, it would seem appropriate to assess both climate and culture simultaneously in investigating the impact of the organization on the well-being of members.
There are many forms of compensation used in business and government organizations throughout the world to pay workers for their physical and mental contribution. Compensation provides money for human effort and is necessary for individual and family existence in most societies. Trading work for money is a long-established practice.
The health-stressor aspect of compensation is most closely linked with compensation plans that offer incentives for extra or sustained human effort. Job stress can certainly exist in any work setting where compensation is not based on incentives. However, physical and mental performance levels that are well above normal and that could lead to physical injury or injurious mental stress are more likely to be found in environments with certain kinds of incentive compensation.
Performance Measures and Stress
Performance measurements in one form or another are used by most organizations, and are essential for incentive programmes. Performance measures (standards) can be established for output, quality, throughput time, or any other productivity measure. Lord Kelvin in 1883 had this to say about measurements: “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.”
Performance measures should be carefully linked to the fundamental goals of the organization. Inappropriate performance measures have often had little or no effect on goal attainment. Common criticisms of performance measures include unclear purpose, vagueness, lack of connection to (or even opposition to) the business strategy, unfairness or inconsistency, and their liability to be used chiefly for “punishing” people. But measures can serve as indispensable benchmarks: remember the saying, “If you don’t know where you are, you can’t get to where you want to be”. The bottom line is that workers at all levels of an organization demonstrate more of the behaviours for which they are measured and rewarded. What gets measured and rewarded gets done.
Performance measures must be fair and consistent to minimize stress among the workforce. There are several methods utilised to establish performance measures ranging from judgement estimation (guessing) to engineered work measurement techniques. Under the work measurement approach to setting performance measures, 100% performance is defined as a “fair day’s work pace”. This is the work effort and skill at which an average well-trained employee can work without undue fatigue while producing an acceptable quality of work over the course of a work shift. A 100% performance is not maximum performance; it is the normal or average effort and skill for a group of workers. By way of comparison, the 70% benchmark is generally regarded as the minimum tolerable level of performance, while the 120% benchmark is the incentive effort and skill that the average worker should be able to attain when provided with a bonus of at least 20% above the base rate of pay. While a number of incentive plans have been established using the 120% benchmark, this value varies among plans. The general design criteria recommended for wage incentive plans provide workers the opportunity to earn approximately 20 to 35% above base rate if they are normally skilled and execute high effort continuously.
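For illustration, the following sketch applies these benchmarks under one common plan design, a proportional (“1-for-1”) incentive with a guaranteed base rate, in which performance above the 100% benchmark earns a proportional bonus, so that 120% performance yields a 20% premium; the base rate and the plan design are assumptions for the example, since actual plans vary.

```python
# A minimal sketch of a proportional ("1-for-1") wage incentive plan with a
# guaranteed base rate: below the 100% benchmark the worker receives the base
# rate; above it, earnings rise in proportion to performance, so 120%
# performance yields a 20% bonus. The plan design and the base rate are
# assumptions for illustration only.
def hourly_earnings(base_rate: float, performance_pct: float) -> float:
    """Hourly earnings under a guaranteed-base, proportional incentive plan."""
    return base_rate * max(1.0, performance_pct / 100.0)

for pct in (70, 100, 120, 135):
    print(f"{pct:>3}% performance -> {hourly_earnings(15.00, pct):.2f} per hour")
```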
Despite the inherent appeal of a “fair day’s work for a fair day’s pay”, some possible stress problems exist with a work measurement approach to setting performance measures. Performance measures are fixed in reference to the normal or average performance of a given work group (i.e., work standards based on group as opposed to individual performance). Thus, by definition, a large segment of those working at a task will fall below the average (i.e., the 100% performance benchmark), generating a demand–resource imbalance that can exceed physical or mental stress limits. Workers who have difficulty meeting performance measures are likely to experience stress through work overload, negative supervisor feedback and the threat of job loss if they consistently perform below the 100% performance benchmark.
Incentive Programmes
In one form or another, incentives have been used for many years. For example, in the New Testament (II Timothy 2:6) Saint Paul declares, “It is the hard-working farmer who ought to have the first share of the crops”. Today, most organizations are striving to improve productivity and quality in order to maintain or improve their position in the business world. Most often workers will not give extra or sustained effort without some form of incentive. Properly designed and implemented financial incentive programmes can help. Before any incentive programme is implemented, some measure of performance must be established. All incentive programmes can be categorized as follows: direct financial, indirect financial, and intangible (non-financial).
Direct financial programmes may be applied to individuals or groups of workers. For individuals, each employee’s incentive is governed by his or her performance relative to a standard for a given time period. Group plans are applicable to two or more individuals working as a team on tasks that are usually interdependent. Each employee’s group incentive is usually based on his or her base rate and the group performance during the incentive period.
The motivation to sustain higher output levels is usually greater for individual incentives because of the opportunity for the high-performing worker to earn a greater incentive. However, as organizations move toward participative management and empowered work groups and teams, group incentives usually provide the best overall results, because the group effort makes improvements to the total system rather than optimizing individual outputs. Gainsharing (a group incentive system that uses teams for continuous improvement and provides a share, usually 50%, of all productivity gains above a benchmark standard) is one form of direct group incentive programme that is well suited to the continuous improvement organization.
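The arithmetic of a simple gainsharing plan of the kind described above can be sketched as follows; the output figures, the per-unit valuation and the 50% worker share are hypothetical.

```python
# A minimal sketch of gainsharing arithmetic: productivity gains above a
# benchmark are valued and a share (commonly 50%) is returned to the work
# group. The output figures and per-unit valuation are hypothetical.
def gainsharing_pool(units_produced: int, benchmark_units: int,
                     value_per_unit: float, worker_share: float = 0.5) -> float:
    """Bonus pool paid to the group for output above the benchmark standard."""
    gain_units = max(0, units_produced - benchmark_units)
    return gain_units * value_per_unit * worker_share

# Example: 1,150 units against a 1,000-unit benchmark, valued at 40 per unit.
print(f"group bonus pool: {gainsharing_pool(1150, 1000, 40.0):.2f}")
```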
Indirect financial programmes are usually less effective than direct financial programmes because direct financial incentives are stronger motivators. The principal advantage of indirect plans is that they require less detailed and accurate performance measures. Organizational policies that favourably affect morale, result in increased productivity and provide some financial benefit to employees are considered to be indirect incentive programmes. It is important to note that for indirect financial programmes no exact relationship exists between employee output and financial incentives. Examples of indirect incentive programmes include relatively high base rates, generous fringe benefits, awards programmes, year-end bonuses and profit-sharing.
Intangible incentive programmes include rewards that do not have any (or very little) financial impact on employees. These programmes, however, when viewed as desirable by the employees, can improve productivity. Examples of intangible incentive programmes include job enrichment (adding challenge and intrinsic satisfaction to the specific task assignments), job enlargement (adding tasks to complete a “whole” piece or unit of work output), nonfinancial suggestion plans, employee involvement groups and time off without any reduction in pay.
Summary and Conclusions
Incentives in some form are an integral part of many compensation plans. In general, incentive plans should be carefully evaluated to make sure that workers are not exceeding safe ergonomic or mental stress limits. This is particularly important for individual direct financial plans. It is usually a lesser problem in group direct, indirect or intangible plans.
Incentives are desirable because they enhance productivity and provide workers an opportunity to earn extra income or other benefits. Gainsharing is today one of the best forms of incentive compensation for any work group or team organization that wishes to offer bonus earnings and to achieve improvement in the workplace without risking the imposition of negative health-stressors by the incentive plan itself.
Contingent Workforce
The nations of the world vary dramatically in both their use and treatment of employees in their contingent workforce. Contingent workers include temporary workers hired through temporary help agencies, temporary workers hired directly, voluntary and “non-voluntary” part-timers (the non-voluntary would prefer full-time work) and the self-employed. International comparisons are difficult due to differences in the definitions of each of these categories of worker.
Overman (1993) stated that the temporary help industry in Western Europe is about 50% larger than it is in the United States, where about 1% of the workforce is made up of temporary workers. Temporary workers are almost non-existent in Italy and Spain.
While the subgroups of contingent workers vary considerably, the majority of part-time workers in all European countries are women at low salary levels. In the United States, contingent workers also tend to be young, female and members of minority groups. Countries vary considerably in the degree to which they protect contingent workers with laws and regulations covering their working conditions, health and other benefits. The United Kingdom, the United States, Korea, Hong Kong, Mexico and Chile are the least regulated, with France, Germany, Argentina and Japan having fairly rigid requirements (Overman 1993). A new emphasis on providing contingent workers with greater benefits through increased legal and regulatory requirements will help to alleviate occupational stress among those workers. However, those increased regulatory requirements may result in employers’ hiring fewer workers overall due to increased benefit costs.
Job Sharing
An alternative to contingent work is “job sharing,” which can take three forms: two employees share the responsibilities for one full-time job; two employees share one full-time position and divide the responsibilities, usually by project or client group; or two employees perform completely separate and unrelated tasks but are matched for purposes of headcount (Mattis 1990). Research has indicated that most job sharing, like contingent work, is done by women. However, unlike contingent work, job sharing positions are often subject to the protection of wage and hour laws and may involve professional and even managerial responsibilities. Within the European Community, job sharing is best known in Britain, where it was first introduced in the public sector (Lewis, Izraeli and Hootsmans 1992). The United States Federal Government, in the early 1990s, implemented a nationwide job sharing programme for its employees; in contrast, many state governments have been establishing job sharing networks since 1983 (Lee 1983). Job sharing is viewed as one way to balance work and family responsibilities.
Flexiplace and Home Work
Many alternative terms are used to denote flexiplace and home work: telecommuting, the alternative worksite, the electronic cottage, location-independent work, the remote workplace and work-at-home. For our purposes, this category of work includes “work performed at one or more ‘predetermined locations’ such as the home or a satellite work space away from the conventional office where at least some of the communications maintained with the employer occur through the use of telecommunications equipment such as computers, telephones and fax machines” (Pitt-Catsouphes and Marchetta 1991).
LINK Resources, Inc., a private-sector firm monitoring worldwide telecommuting activity, has estimated that there were 7.6 million telecommuters in 1993 in the United States, out of over 41.1 million work-at-home households. Of these telecommuters, 81% worked part-time for employers with fewer than 100 employees, in a wide array of industries and across many geographical locations. Fifty-three per cent were male, in contrast to figures showing a majority of females in contingent and job-sharing work. Research with fifty US companies also showed that the majority of telecommuters were male and that successful flexible work arrangements included supervisory positions (both line and staff), client-centred work and jobs that involved travel (Mattis 1990). In 1992, 1.5 million Canadian households had at least one person who operated a business from home.
Lewis, Izraeli and Hootsmans (1992) reported that, despite earlier predictions, telecommuting has not taken over Europe. They added that it is best established in the United Kingdom and Germany for professional jobs including computer specialists, accountants and insurance agents.
In contrast, some home-based work in both the United States and Europe pays by the piece and involves short deadlines. Typically, while telecommuters tend to be male, homeworkers in low-paid, piece-work jobs with no benefits tend to be female (Hall 1990).
Recent research has concentrated on identifying: (a) the type of person best suited for home work; (b) the type of work best accomplished at home; (c) procedures to ensure successful home work experiences; and (d) reasons for organizational support (Hall 1990; Christensen 1992).
Welfare Facilities
The general approach to social welfare issues and programmes varies throughout the world depending upon the culture and values of the nation studied. Some of the differences in welfare facilities in the United States, Canada and Western Europe are documented by Ferber, O’Farrell and Allen (1991).
Recent proposals for welfare reform in the United States suggest overhauling traditional public assistance in order to make recipients work for their benefits. Cost estimates for welfare reform range from US$15 billion to $20 billion over the next five years, with considerable cost savings projected for the long term. Welfare administration costs in the United States for such programmes as food stamps, Medicaid and Aid to Families with Dependent Children have risen 19% from 1987 to 1991, the same percentage as the increase in the number of beneficiaries.
Canada has instituted a “work sharing” programme as an alternative to layoffs and welfare. The Canada Employment and Immigration Commission (CEIC) programme enables employers to face cutbacks by shortening the work week by one to three days and paying reduced wages accordingly. For the days not worked, the CEIC arranges for the workers to draw normal unemployment insurance benefits, an arrangement that helps to compensate them for the lower wages received from their employer and to relieve the hardships of being laid off. The duration of the programme is 26 weeks, with a 12-week extension. Workers can use work-sharing days for training and the federal Canadian government may reimburse the employer for a major portion of the direct training costs through the “Canadian Jobs Strategy”.
Child Care
The degree of child-care support is dependent upon the sociological underpinnings of the nation’s culture (Scharlach, Lowe and Schneider 1991); cultures whose values favour such support will devote greater resources to supporting those programmes. International comparisons are thus complicated by these cultural factors, and “high quality care” may be dependent on the needs of children and families in specific cultures.
Within the European Community, France provides the most comprehensive child-care programme. The Netherlands and the United Kingdom were late in addressing this issue. Only 3% of British employers provided some form of child care in 1989. Lamb et al. (1992) present nonparental child-care case studies from Sweden, the Netherlands, Italy, the United Kingdom, the United States, Canada, Israel, Japan, the People’s Republic of China, Cameroon, East Africa and Brazil. In the United States, approximately 3,500 private companies of the 17 million firms nationwide offer some type of child-care assistance to their employees. Of those firms, approximately 1,100 offer flexible spending accounts, 1,000 offer information and referral services and fewer than 350 have onsite or near-site child-care centres (Bureau of National Affairs 1991).
In a research study in the United States, 44% of men and 76% of women with children under six missed work in the previous three months for a family-related reason. The researchers estimated that the organizations they studied paid over $4 million in salary and benefits to employees who were absent because of child-care problems (see study by Galinsky and Hughes in Fernandez 1990). A study by the United States General Accounting Office in 1981 showed that American companies lose over $700 million a year because of inadequate parental leave policies.
Elder Care
It will take only 30 years (from the time of this writing, 1994) for the proportion of elderly in Japan to climb from 7% to 14%, while in France it took over 115 years and in Sweden 90 years. Before the end of the century, one out of every four persons in many member States of the Commission of the European Communities will be over 60 years old. Yet, until recently in Japan, there were few institutions for the elderly, and the issue of eldercare has received scant attention in Britain and other European countries (Lewis, Izraeli and Hootsmans 1992). In the United States, approximately five million older Americans require assistance with day-to-day tasks in order to remain in the community, and 30 million are currently aged 65 or older. Family members provide more than 80% of the assistance that these elderly people need (Scharlach, Lowe and Schneider 1991).
Research has shown that those employees who have elder-care responsibilities report significantly greater overall job stress than do other employees (Scharlach, Lowe and Schneider 1991). These caretakers often experience emotional stress and physical and financial strain. Fortunately, global corporations have begun to recognize that difficult family situations can result in absenteeism, decreased productivity and lower morale, and they are beginning to provide an array of “cafeteria benefits” to assist their employees. (The name “cafeteria” is intended to suggest that employees may select the benefits that would be most helpful to them from an array of benefits.) Benefits might include flexible work hours, paid “family illness” hours, referral services for family assistance, or a dependent-care salary-reduction account that allows employees to pay for elder care or day care with pre-tax dollars.
The author wishes to acknowledge the assistance of Charles Anderson of the Personnel Resources and Development Center of the United States Office of Personnel Management, Tony Kiers of the C.A.L.L. Canadian Work and Family Service, and Ellen Bankert and Bradley Googins of the Center on Work and Family of Boston University in acquiring and researching many of the references cited in this article.
The process by which outsiders become organizational insiders is known as organizational socialization. While early research on socialization focused on indicators of adjustment such as job satisfaction and performance, recent research has emphasized the links between organizational socialization and work stress.
Socialization as a Moderator of Job Stress
Entering a new organization is an inherently stressful experience. Newcomers encounter a myriad of stressors, including role ambiguity, role conflict, work and home conflicts, politics, time pressure and work overload. These stressors can lead to distress symptoms. Studies in the 1980s, however, suggest that a properly managed socialization process has the potential for moderating the stressor-strain connection.
Two particular themes have emerged in the contemporary research on socialization: the acquisition of information by newcomers and the support provided by their supervisors.
Information acquired by newcomers during socialization helps alleviate the considerable uncertainty in their efforts to master their new tasks, roles and interpersonal relationships. Often, this information is provided via formal orientation-cum-socialization programmes. In the absence of formal programmes, or (where they exist) in addition to them, socialization occurs informally. Recent studies have indicated that newcomers who proactively seek out information adjust more effectively (Morrison 1993). In addition, newcomers who underestimate the stressors in their new job report higher distress symptoms (Nelson and Sutton 1991).
Supervisory support during the socialization process is of special value. Newcomers who receive support from their supervisors report less stress from unmet expectations (Fisher 1985) and fewer psychological symptoms of distress (Nelson and Quick 1991). Supervisory support can help newcomers cope with stressors in at least three ways. First, supervisors may provide instrumental support (such as flexible work hours) that helps alleviate a particular stressor. Secondly, they may provide emotional support that leads a newcomer to feel more efficacy in coping with a stressor. Thirdly, supervisors play an important role in helping newcomers make sense of their new environment (Louis 1980). For example, they can frame situations for newcomers in a way that helps them appraise situations as threatening or nonthreatening.
In summary, socialization efforts that provide necessary information to newcomers and support from supervisors can prevent the stressful experience from becoming distressful.
Evaluating Organizational Socialization
The organizational socialization process is dynamic, interactive and communicative, and it unfolds over time. In this complexity lies the challenge of evaluating socialization efforts. Two broad approaches to measuring socialization have been proposed. One approach consists of the stage models of socialization (Feldman 1976; Nelson 1987). These models portray socialization as a multistage transition process with key variables at each of the stages. Another approach highlights the various socialization tactics that organizations use to help newcomers become insiders (Van Maanen and Schein 1979).
With both approaches, it is contended that there are certain outcomes that mark successful socialization. These outcomes include performance, job satisfaction, organizational commitment, job involvement and intent to remain with the organization. If socialization is a stress moderator, then distress symptoms (specifically, low levels of distress symptoms) should be included as an indicator of successful socialization.
Health Outcomes of Socialization
Because the relationship between socialization and stress has only recently received attention, few studies have included health outcomes. The evidence indicates, however, that the socialization process is linked to distress symptoms. Newcomers who found interactions with their supervisors and other newcomers helpful reported lower levels of psychological distress symptoms such as depression and inability to concentrate (Nelson and Quick 1991). Further, newcomers with more accurate expectations of the stressors in their new jobs reported lower levels of both psychological symptoms (e.g., irritability) and physiological symptoms (e.g., nausea and headaches).
Because socialization is a stressful experience, health outcomes are appropriate variables to study. Studies are needed that focus on a broad range of health outcomes and that combine self-reports of distress symptoms with objective health measures.
Organizational Socialization as Stress Intervention
The contemporary research on organizational socialization suggests that it is a stressful process that, if not managed well, can lead to distress symptoms and other health problems. Organizations can intervene in at least three ways to ease the transition and to ensure positive outcomes from socialization.
First, organizations should encourage realistic expectations among newcomers of the stressors inherent in the new job. One way of accomplishing this is to provide a realistic job preview that details the most commonly experienced stressors and effective ways of coping (Wanous 1992). Newcomers who have an accurate view of what they will encounter can preplan coping strategies and will experience less reality shock from those stressors about which they have been forewarned.
Secondly, organizations should make numerous sources of accurate information available to newcomers in the form of booklets, interactive information systems or hotlines (or all of these). The uncertainty of the transition into a new organization can be overwhelming, and multiple sources of informational support can aid newcomers in coping with the uncertainty of their new jobs. In addition, newcomers should be encouraged to seek out information during their socialization experiences.
Thirdly, emotional support should be explicitly planned for in designing socialization programmes. The supervisor is a key player in the provision of such support and may be most helpful by being emotionally and psychologically available to newcomers (Hirschhorn 1990). Other avenues for emotional support include mentoring, activities with more senior and experienced co-workers, and contact with other newcomers.
Introduction
The career stage approach is one way to look at career development. The way in which a researcher approaches the issue of career stages is frequently based on Levinson’s life stage development model (Levinson 1986). According to this model, people grow through specific stages separated by transition periods. At each stage a new and crucial activity and psychological adjustment may be completed (Ornstein, Cron and Slocum 1989). In this way, defined career stages can be, and usually are, based on chronological age. The age ranges assigned for each stage have varied considerably between empirical studies, but usually the early career stage is considered to range from the ages of 20 to 34 years, the mid-career from 35 to 50 years and the late career from 50 to 65 years.
According to Super’s career development model (Super 1957; Ornstein, Cron and Slocum 1989) the four career stages are based on the qualitatively different psychological task of each stage. They can be based either on age or on organizational, positional or professional tenure. The same people can recycle several times through these stages in their work career. For example, according to the Career Concerns Inventory Adult Form, the actual career stage can be defined at an individual or group level. This instrument assesses an individual’s awareness of and concerns with various tasks of career development (Super, Zelkowitz and Thompson 1981). When tenure measures are used, the first two years are seen as a trial period. The establishment period from two to ten years means career advancement and growth. After ten years comes the maintenance period, which means holding on to the accomplishments achieved. The decline stage implies the development of one’s self-image independently of one’s career.
Because the theoretical bases of the definition of the career stages and the sorts of measure used in practice differ from one study to another, it is apparent that the results concerning the health- and job-relatedness of career development vary, too.
Career Stage as a Moderator of Work-Related Health and Well-Being
Most studies of career stage as a moderator between job characteristics and the health or well-being of employees deal with organizational commitment and its relation to job satisfaction or to behavioural outcomes such as performance, turnover and absenteeism (Cohen 1991). The relationship between job characteristics and strain has also been studied. The moderating effect of career stage means statistically that the average correlation between measures of job characteristics and well-being varies from one career stage to another.
Work commitment usually increases from early career stages to later stages, although among salaried male professionals, job involvement was found to be lowest in the middle stage. In the early career stage, employees had a stronger need to leave the organization and to be relocated (Morrow and McElroy 1987). Among hospital staff, nurses’ measures of well-being were most strongly associated with career commitment and affective organizational commitment (i.e., emotional attachment to the organization). Continuance commitment (a function of the perceived number of alternatives and the degree of sacrifice) and normative commitment (loyalty to the organization) increased with career stage (Reilly and Orsak 1991).
A meta-analysis of 41 samples examined the relationship between organizational commitment and outcomes indicative of well-being. The samples were divided into career stage groups according to two measures of career stage: age and tenure. Age as a career stage indicator significantly affected turnover and turnover intentions, while organizational tenure was related to job performance and absenteeism. Low organizational commitment was related to high turnover especially in the early career stage, whereas low organizational commitment was related to high absenteeism and low job performance in the late career stage (Cohen 1991).
The relationship between work attitudes (for instance, job satisfaction) and work behaviour has been found to be moderated to a considerable degree by career stage (e.g., Stumpf and Rabinowitz 1981). Among employees of public agencies, career stage measured in terms of organizational tenure was found to moderate the relationship between job satisfaction and job performance. The relationship was strongest in the first career stage, a finding also supported in a study of sales personnel. Among academic teachers, the relationship between satisfaction and performance was found to be negative during the first two years of tenure.
Most studies of career stage have dealt with men. Even in many early studies from the 1970s, in which the sex of the respondents was not reported, it is apparent that most of the subjects were men. Ornstein and Lynn (1990) tested how well the career stage models of Levinson and Super described differences in career attitudes and intentions among professional women. The results suggested that career stages based on age were related to organizational commitment, intention to leave the organization and a desire for promotion. These findings were, in general, similar to those found among men (Ornstein, Cron and Slocum 1989). However, no support was found for the predictive value of career stages defined on a psychological basis.
Studies of stress have generally either ignored age, and consequently career stage, in their designs or treated it as a confounding factor and controlled for its effects. Hurrell, McLaney and Murphy (1990) contrasted the effects of stress in mid-career with its effects in early and late career, using age as the basis for grouping US postal workers. Perceived ill health was not related to job stressors in mid-career, but work pressure and underutilization of skills predicted it in the early and late career stages. Work pressure was also related to somatic complaints in the early and late career groups. Underutilization of abilities was more strongly related to job satisfaction and somatic complaints among mid-career workers. Social support had more influence on mental health than on physical health, and this effect was more pronounced in mid-career than in the early or late career stages. Because the data came from a cross-sectional study, the authors note that a cohort explanation of the results is also possible (Hurrell, McLaney and Murphy 1990).
When adult male and female workers were grouped according to age, the older workers more frequently reported overload and responsibility as stressors at work, whereas the younger workers cited insufficiency (e.g., work that is not challenging), boundary-spanning roles and physical environment stressors (Osipow, Doty and Spokane 1985). The older workers reported fewer strain symptoms of all kinds. One reason may be that older people use more rational-cognitive, self-care and recreational coping skills, evidently learned during their careers. Alternatively, the differences may reflect symptom-based self-selection, whereby people leave jobs that stress them excessively over the course of their careers.
Among Finnish and US male managers, the relationship between job demands and control on the one hand, and psychosomatic symptoms on the other, was found to vary according to career stage defined on the basis of age (Hurrell and Lindström 1992; Lindström and Hurrell 1992). Among the US managers, job demands and control had a significant effect on symptom reporting in the middle career stage, but not in the early and late stages, while among the Finnish managers, long weekly working hours and low job control increased stress symptoms in the early career stage, but not in the later stages. The differences between the two groups might be due to differences in the samples studied: the Finnish managers, who worked in the construction trades, already had high workloads in their early career stage, whereas the US managers, who were public sector workers, had their highest workloads in the middle career stage.
To sum up the research on the moderating effects of career stage: in the early career stage, low organizational commitment is related to turnover, and job stressors are related to perceived ill health and somatic complaints. In mid-career the results are conflicting: job satisfaction and performance are sometimes positively and sometimes negatively related, and job demands combined with low control are related to frequent symptom reporting in some occupational groups. In the late career stage, organizational commitment is related to low absenteeism and good performance, while findings on the relationship between job stressors and strain are inconsistent. There are some indications that more effective coping decreases work-related strain symptoms in the late career stage.
Interventions
Practical interventions to help people cope better with the specific demands of each career stage would be beneficial. Vocational counselling at the entry stage of working life would be especially useful. Interventions to minimize the negative impact of career plateauing are also suggested, because plateauing can be either a time of frustration or an opportunity to face new challenges and reappraise one's life goals (Weiner, Remer and Remer 1992). Results of age-based health examinations in occupational health services have shown that job-related problems which lower working ability gradually increase and change qualitatively with age. In early and mid-career they are related to coping with work overload, but in later middle and late career they are increasingly accompanied by declining psychological condition and physical health, findings which indicate the importance of early institutional intervention at the individual level (Lindström, Kaihilahti and Torstila 1988). Both in research and in practical interventions, mobility and turnover patterns should be taken into account, as well as the role played by one's occupation, and one's situation within that occupation, in career development.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."