
Causality Assessment and Ethics in Epidemiological Research


The preceding articles of this chapter have shown the need for a careful evaluation of the study design in order to draw credible inferences from epidemiological observations. Although it has been claimed that inferences in observational epidemiology are weak because of the non-experimental nature of the discipline, there is no built-in superiority of randomized controlled trials or other types of experimental design over well-planned observation (Cornfield 1954). Drawing sound inferences, however, requires a thorough analysis of the study design in order to identify potential sources of bias and confounding; both false positive and false negative results can originate from different types of bias.

In this article, some of the guidelines that have been proposed to assess the causal nature of epidemiological observations are discussed. In addition, although good science is a premise for ethically correct epidemiological research, there are additional issues that are relevant to ethical concerns. Therefore, we have devoted some discussion to the analysis of ethical problems that may arise in doing epidemiological studies.

Causality Assessment

Several authors have discussed causality assessment in epidemiology (Hill 1965; Buck 1975; Ahlbom 1984; Maclure 1985; Miettinen 1985; Rothman 1986; Weed 1986; Schlesselman 1987; Maclure 1988; Weed 1988; Karhausen 1995). One of the main points of discussion is whether epidemiology uses or should use the same criteria for the ascertainment of cause-effect relationships as used in other sciences.

Causes should not be confused with mechanisms. For example, asbestos is a cause of mesothelioma, whereas oncogene mutation is a putative mechanism. On the basis of the existing evidence, it is likely that (a) different external exposures can act at the same mechanistic stages and (b) usually there is not a fixed and necessary sequence of mechanistic steps in the development of disease. For example, carcinogenesis is interpreted as a sequence of stochastic (probabilistic) transitions, from gene mutation to cell proliferation to gene mutation again, that eventually leads to cancer. In addition, carcinogenesis is a multifactorial process—that is, different external exposures are able to affect it and none of them is necessary in a susceptible person. This model is likely to apply to several diseases in addition to cancer.

The multifactorial and probabilistic nature of most exposure-disease relationships implies that disentangling the role played by one specific exposure is problematic. In addition, the observational nature of epidemiology prevents us from conducting experiments that could clarify aetiologic relationships through a deliberate alteration of the course of events. The observation of a statistical association between exposure and disease does not in itself mean that the association is causal. For example, most epidemiologists have interpreted the association between exposure to diesel exhaust and bladder cancer as a causal one, but others have claimed that workers exposed to diesel exhaust (mostly truck and taxi drivers) are more often cigarette smokers than non-exposed individuals. The observed association, according to this claim, would thus be “confounded” by a well-known risk factor such as smoking.

Given the probabilistic-multifactorial nature of most exposure-disease associations, epidemiologists have developed guidelines to recognize relationships that are likely to be causal. These are the guidelines originally proposed by Sir Bradford Hill for chronic diseases (1965):

  • strength of the association
  • dose-response effect
  • lack of temporal ambiguity
  • consistency of the findings
  • biological plausibility
  • coherence of the evidence
  • specificity of the association.

 

These criteria should be considered only as general guidelines or practical tools; in fact, scientific causal assessment is an iterative process centred around measurement of the exposure-disease relationship. However, Hill’s criteria often are used as a concise and practical description of causal inference procedures in epidemiology.

Let us consider the example of the relationship between exposure to vinyl chloride and liver angiosarcoma, applying Hill’s criteria.

The usual expression of the results of an epidemiological study is a measure of the degree of association between exposure and disease (Hill’s first criterion). A relative risk (RR) greater than unity indicates a positive statistical association between exposure and disease. For instance, if the incidence rate of liver angiosarcoma is usually 1 in 10 million, but is 1 in 100,000 among those exposed to vinyl chloride, then the RR is 100 (that is, people who work with vinyl chloride have 100 times the risk of developing angiosarcoma compared to people who do not work with vinyl chloride).
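As a worked illustration using the figures quoted above, the relative risk is simply the ratio of the incidence rate in the exposed group to that in the unexposed group (with I denoting the incidence rate in each group):

\[
\mathrm{RR} \;=\; \frac{I_{\text{exposed}}}{I_{\text{unexposed}}} \;=\; \frac{1/100\,000}{1/10\,000\,000} \;=\; 100
\]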

It is more likely that an association is causal when the risk increases with increasing levels of exposure (dose-response effect, Hill’s second criterion) and when the temporal relationship between exposure and disease makes sense on biological grounds (the exposure precedes the effect and the length of this “induction” period is compatible with a biological model of disease; Hill’s third criterion). In addition, an association is more likely to be causal when similar results are obtained by others who have been able to replicate the findings in different circumstances (“consistency”, Hill’s fourth criterion).

A scientific analysis of the results requires an evaluation of biological plausibility (Hill’s fifth criterion). This can be achieved in different ways. For example, a simple criterion is to evaluate whether the alleged “cause” is able to reach the target organ (e.g., inhaled substances that do not reach the lung cannot circulate in the body). Also, supporting evidence from animal studies is helpful: the observation of liver angiosarcomas in animals treated with vinyl chloride strongly reinforces the association observed in man.

Internal coherence of the observations (for example, the RR is similarly increased in both genders) is an important scientific criterion (Hill’s sixth criterion). Causality is more likely when the relationship is very specific—that is, involves rare causes and/or rare diseases, or a specific histologic type/subgroup of patients (Hill’s seventh criterion).

“Enumerative induction” (the simple enumeration of instances of association between exposure and disease) is insufficient to describe completely the inductive steps in causal reasoning. Usually, enumerative induction yields a complex and still confused picture, because different causal chains or, more frequently, a genuine causal relationship and other irrelevant exposures are entangled. Alternative explanations have to be eliminated through “eliminative induction”, showing that an association is likely to be causal because it is not “confounded” by others. A simple definition of such a confounding factor is “an extraneous factor whose effect is mixed with the effect of the exposure of interest, thus distorting the risk estimate for the exposure of interest” (Rothman 1986).
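To make this notion of confounding concrete, the short sketch below uses purely hypothetical numbers (they are invented for illustration and are not taken from any real study, including the diesel exhaust example above): within each smoking stratum the exposure has no effect on disease risk, yet the crude comparison of exposed and unexposed groups suggests an association simply because smokers are over-represented among the exposed.

```python
# Purely hypothetical illustration of confounding (numbers invented for clarity,
# not taken from any real study): within each smoking stratum the exposure has
# no effect on disease risk, but the crude risk ratio is distorted because
# smokers are over-represented in the exposed group.

# Disease risk within each smoking stratum, identical for exposed and unexposed
# (i.e., the exposure itself has no effect).
risk_exposed = {"smoker": 0.002, "non_smoker": 0.0005}
risk_unexposed = {"smoker": 0.002, "non_smoker": 0.0005}

# Hypothetical proportion of smokers in each exposure group.
smoking_prevalence = {"exposed": 0.8, "unexposed": 0.3}

def overall_risk(risk_by_stratum, p_smoker):
    """Average disease risk in a group, weighted by its smoking prevalence."""
    return (p_smoker * risk_by_stratum["smoker"]
            + (1 - p_smoker) * risk_by_stratum["non_smoker"])

crude_rr = (overall_risk(risk_exposed, smoking_prevalence["exposed"])
            / overall_risk(risk_unexposed, smoking_prevalence["unexposed"]))
print(f"Crude RR: {crude_rr:.2f}")  # about 1.8, despite no true effect of the exposure

for stratum in ("smoker", "non_smoker"):
    stratum_rr = risk_exposed[stratum] / risk_unexposed[stratum]
    print(f"RR among {stratum}s: {stratum_rr:.1f}")  # 1.0 in each stratum
```

Comparing the crude relative risk with the stratum-specific ones is precisely the kind of check that eliminative induction calls for: the apparent association disappears once the extraneous factor is taken into account.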

The role of induction is expanding knowledge, whereas deduction’s role is “transmitting truth” (Giere 1979). Deductive reasoning scrutinizes the study design and identifies associations which are not empirically true, but just logically true. Such associations are not a matter of fact, but logical necessities. For example, a selection bias occurs when the exposed group is selected among ill people (as when we start a cohort study recruiting as “exposed” to vinyl chloride a cluster of liver angiosarcoma cases) or when the unexposed group is selected among healthy people. In both instances the association which is found between exposure and disease is necessarily (logically) but not empirically true (Vineis 1991).

To conclude, even when one considers its observational (non-experimental) nature, epidemiology does not use inferential procedures that differ substantially from the tradition of other scientific disciplines (Hume 1978; Schaffner 1993).

Ethical Issues in Epidemiological Research

Because of the subtleties involved in inferring causation, special care has to be exercised by epidemiologists in interpreting their studies. Indeed, several concerns of an ethical nature flow from this.

Ethical issues in epidemiological research have become a subject of intense discussion (Schulte 1989; Soskolne 1993; Beauchamp et al. 1991). The reason is evident: epidemiologists, in particular occupational and environmental epidemiologists, often study issues having significant economic, social and health policy implications. Both negative and positive results concerning the association between specific chemical exposures and disease can affect the lives of thousands of people, influence economic decisions and therefore seriously condition political choices. Thus, the epidemiologist may be under pressure, and be tempted or even encouraged by others to alter—marginally or substantially—the interpretation of the results of his or her investigations.

Among the several relevant issues, transparency of data collection, coding, computerization and analysis is central as a defence against allegations of bias on the part of the researcher. Also crucial, and potentially in conflict with such transparency, is the right of the subjects enrolled in epidemiological research to be protected from the release of personal information (confidentiality issues).

With regard to misconduct that can arise, especially in the context of causal inference, questions that ethics guidelines should address include:

  • Who owns the data and for how long must the data be retained?
  • What constitutes a credible record of the work having been done?
  • Do public grants make budgetary allowance for the costs associated with adequate documentation, archiving and re-analysis of data?
  • Is there a role for the primary investigator in any third party’s re-analysis of his or her data?
  • Are there standards of practice for data storage?
  • Should occupational and environmental epidemiologists be establishing a normative climate in which ready data scrutiny or audit can be accomplished?
  • How do good data storage practices serve to prevent not only misconduct, but also allegations of misconduct?
  • What constitutes misconduct in occupational and environmental epidemiology in relation to data management, interpretation of results and advocacy?
  • What is the role of the epidemiologist and/or of professional bodies in developing standards of practice and indicators/outcomes for their assessment, and in contributing expertise in any advocacy role?
  • What role does the professional body/organization have in dealing with concerns about ethics and law? (Soskolne 1993)

 

Other crucial issues, in the case of occupational and environmental epidemiology, relate to the involvement of the workers in the preliminary phases of studies and to the release of the results of a study to the subjects who have been enrolled and are directly affected (Schulte 1989). Unfortunately, workers enrolled in epidemiological studies are seldom involved in collaborative discussions about the purposes of the study, its interpretation and the potential uses of the findings (which may be either advantageous or detrimental to the worker).

Partial answers to these questions have been provided by recent guidelines (Beauchamp et al. 1991; CIOMS 1991). However, in each country, professional associations of occupational epidemiologists should engage in a thorough discussion about ethical issues and, possibly, adopt a set of ethics guidelines appropriate to the local context while recognizing internationally accepted normative standards of practice.

 

