
CLINICAL RESEARCH METHODS: The Scientific Method and Causal Theory

1. THE SCIENTIFIC METHOD AND CAUSAL THEORY

1.1 Characteristics of Scientific Inquiry

The word science originates from the Latin word scientia, which means ‘knowledge’. The purpose of
scientific inquiry is to create knowledge that clarifies the world around us. Clinical research shares
with other branches of science four major characteristics of scientific inquiry: positivism, theory,
empiricism and objectivity.

Positivism
All sciences are based on the fundamental assumption that the world is not totally chaotic, but has
logical and persistent patterns of regularity. Positivism is a particularly strong feature of biomedical
types of clinical research. The assumption of positivism is sometimes challenged, however,
especially with respect to clinical research concerning social and behavioural phenomena, such as
doctor or patient behaviours, which are arguably the product of both social laws and human volitional
action. Although, in general, social and behavioural phenomena cannot be predicted with complete
accuracy, there are nevertheless sufficient patterns of regularity in social and behavioural phenomena
to justify useful scientific investigation.

Theory
A clinical theory is a statement that seeks to explain or predict a particular clinical phenomenon.
Scientific theories are used to derive research hypotheses, plan research, make observations and
explain the patterns of regularity that are observed. Alternative theories may exist and scientific
inquiry is directed towards testing and choosing from these.
One clinical theory is judged to be superior to alternative theories if it:
1. involves the fewest statements and assumptions (more efficient);
2. explains the broadest range of phenomena (more comprehensive); and
3. predicts most accurately (more accurate).
There is an intimate connection between theory and performing clinical research. Theories provide
guidance for clinical research (a deductive process). Clinical research, in turn, verifies, modifies and
reconstructs theory (an inductive process).
In searching for clinical theories, scientists do not start with a clean slate, but are influenced by a
range of general perspectives about what is important, legitimate and reasonable. These normative
models are known as paradigms and often they represent the general perspective adopted by a
particular clinical discipline such as oncology, gynaecology or cardiology. Sometimes a major new
set of theories is introduced that challenges existing perspectives and becomes generally accepted as
providing a superior model to explain the world. This is known as a paradigm shift. The discovery that
peptic ulcers are caused by Helicobacter pylori, previously thought to be a harmless commensal in the
stomach, is an example of a recent paradigm shift in clinical science.

Empiricism
Empiricism is the collection of evidence through observation and experience of the world, used to
corroborate, modify or construct theories. It is the dominant characteristic of modern scientific
inquiry in clinical medicine and this has three implications:
1. non-empirical ways of acquiring knowledge, such as appeals to authority, tradition, common
sense or intuition, cannot produce scientific evidence;
2. scientific researchers in clinical medicine focus on problems that can be observed; and
3. scientific inquiry cannot settle debates on ethical values, beliefs or what clinical policies ought to
be adopted. Scientific inquiry has to do with what is, not what ought to be.

Objectivity
Empirical evidence is assumed to exist independent of researchers, and therefore it is important that
clinical researchers maintain objectivity in their observations, uninfluenced by their personal feelings,
conjectures or preferences. Research methods, properly used, strengthen the objectivity of the
observational aspects of clinical research.
It is generally impossible, however, for even the most objective of clinical scientists to conduct value-
free research. This is because the paradigms of their disciplines affect the type and scope of problems
they research, the methods adopted and the ways that findings are interpreted. Objectivity may also be
distorted, not by the clinical researchers, but by the study subjects as with the Hawthorne effect (a
reactive effect of research on the phenomenon being studied). Mechanisms used to enhance
objectivity include the concept of control in clinical research, which is the use of methods of design
or analysis to exclude or make less likely errors in interpretation due to biased observations. Peer
review is also an important measure to improve objectivity. When a group of clinical researchers can
independently agree on how to interpret results of a study, objectivity is enhanced. However,
existing paradigms may still distort objectivity, if those paradigms are in error.
Example: The increasing dependence of research into the effectiveness and safety of therapeutic substances
on funding from the pharmaceutical industry has prompted a debate, involving a number of North American
journals, about conflict of interest, and whether such research should be published in peer-reviewed journals.
It is argued that no amount of peer review can guarantee the objectivity of scientists who have been
commissioned to do research by those who have a vested interest in the conclusions being supportive of their
products. Rothman, the editor of the journal Epidemiology, has argued that there is no reason to refuse to
publish such research, as all scientists are subject to influences that may detract from their objectivity.
However, it is argued that scientists with a significant conflict of interest, such as an industrial sponsor,
should declare this fact in their report so that readers may take this into account when considering the
implications of the research results and conclusions.

1.2 Steps in Conceptualising and Researching a Clinical Problem

The 10 steps in conceptualising and researching a clinical problem are as follows:

1. Groundwork: Specify the research topic and the existing body of theory based on literature review.
2. Hypotheses: Specify the testable propositions about relationships between variables.
3. Methods: Choose the types of research methods and data sources.
4. Design: Develop the overall plan or framework for investigation, including the study type.
5. Sampling: Select the populations to be studied.
6. Measurement: Define measures to link concepts to empirically observable phenomena.
7. Data Collection: Implement the chosen method of data collection on observable events or states.
8. Data Analysis: Employ statistical or other procedures to draw conclusions from the data.
9. Theory: Formulate propositions to explain the findings.
10. Application of Theory: Disseminate and apply new or updated theories in clinical practice.
Note that new clinical theories are not always deduced from existing theories in the manner set out
above. Sometimes a clinical researcher must commence an investigation without a hypothesis and
simply describe what is observed, explaining why it happened on the basis of the observations
alone. These theories are known as grounded theories.
1.3 Major Types of Relationships Between Variables

There are two types of measurable concepts used in clinical research. A measurable concept with just
a single never-changing value is known as a constant. A measurable concept that has more than one
measurable value is a variable. A variable capable of effecting change in other variables is an
independent variable. A variable whose value is dependent upon one or more other variables is a
dependent variable. We wish to explain the value of the dependent variable, whereas the
independent variable is the hypothesised explanation.

Causal vs Spurious Relationships


When the association between two variables has been produced by a third, extraneous variable, rather
than by any real influence of one variable upon the other, the relationship is called spurious. The variable that causes a spurious
relationship is called a confounder. Another form of confounder is a suppressor variable, which
conceals the relationship between two variables because it directly affects one and inversely affects
the other. A relationship between two variables may also be caused by an intervening variable,
which is caused by the independent variable and affects the dependent variable. An intervening
variable is not a confounder.
If a change in the independent variable does indeed cause an effect in the dependent variable, there is
a causal relationship. There are three basic requisites to a causal relationship: statistical association,
rational sequence of influence and non-spuriousness. Establishing that these conditions exist is
known as the process of causal inference.
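
Example: The distinction between a causal and a spurious relationship can be illustrated with a small simulation. The Python sketch below is not part of the course material; its variable names (confounder, exposure, outcome) and probabilities are illustrative assumptions. The confounder raises the probability of both the exposure and the outcome, while the exposure has no direct effect on the outcome, so the crude association between exposure and outcome is spurious and disappears once the data are stratified on the confounder.

import numpy as np

# Minimal sketch (assumed scenario): a confounder influences both the exposure
# and the outcome, but the exposure has no direct effect on the outcome.
rng = np.random.default_rng(1)
n = 100_000

confounder = rng.random(n) < 0.5                              # e.g. older vs younger patients
exposure = rng.random(n) < np.where(confounder, 0.60, 0.20)   # more likely if confounder present
outcome = rng.random(n) < np.where(confounder, 0.30, 0.05)    # independent of exposure

def risk_ratio(exposed, got_outcome):
    # Risk of the outcome among the exposed divided by the risk among the unexposed.
    return got_outcome[exposed].mean() / got_outcome[~exposed].mean()

print(f"Crude risk ratio: {risk_ratio(exposure, outcome):.2f}")        # well above 1
for level in (False, True):
    stratum = confounder == level
    print(f"Risk ratio, confounder={level}: "
          f"{risk_ratio(exposure[stratum], outcome[stratum]):.2f}")    # close to 1

Under these assumptions the crude risk ratio is well above 1, yet within each stratum of the confounder the risk ratio is close to 1; that pattern identifies the crude association as spurious and the confounder as its source.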

1.4 Schools of Thought on Causal Inference

It was the 18th Century British philosopher, David Hume, who identified the essential conundrum of
empirical science with his famous quotation, known as Hume’s Problem, from the Treatise of Human Nature (1739):
‘We are never able, in a single instance, to discover any power or necessary connection,
any quality which binds the effect to the cause, and renders the one an infallible
consequence of the other. We only find that one does actually, in fact, follow the other.’
Hume's Problem with inductive reasoning, that is, reasoning that draws general conclusions from the
observation of specific phenomena limited in time and place, has led to a divergence of opinion in
scientific philosophy between two schools of thought, the verificationists and the falsificationists.

Verification School
The verificationist accepts that induction is a fact of life and that repeated observations do often lead
to useful scientific statements about the nature of the universe. Their tentative solution to Hume's
problem is to propose criteria or conditions to strengthen the validity of causal inference. There are
more than 30 sets of such criteria published to date, but probably the best known is Sir Austin
Bradford Hill's nine causal considerations published in 1965 at a time when the causal link between
lung cancer and tobacco was still somewhat controversial. The US Surgeon General also published
causal criteria in the historic inaugural volume on Smoking and Health. Verificationists tend to hold
to the view that despite the lack of guarantee of infallibility, someone must be prepared to interpret
the empirical information that has been gathered for the purpose of making policies for the benefit of
society. The scientist's intimate understanding of the processes of research makes him or her the most
suitable person to interpret the implications for decision-makers in society, and in this way the
verificationist's version of science intersects with public policy.

Falsification School
The falsificationist has a contrary view. Induction is seen as entirely a psychological process that has
no basis in logic. For example, the falsificationist argues that although the rooster crows each
morning as the sun rises, it is impossible to induce that the crow of the rooster causes the sun to
appear. Their solution to Hume's Problem, developed as recently as 1934 by the philosopher of
science Karl Popper, is to turn inductive reasoning on its head and to use empirical observations
to refute conjectural theories about the nature of the world. It takes only one dissonant observation to
disprove a conjectural theory considered to have general application. Regarding the problem of the
crowing rooster, Popper's hypothetico-deductive reasoning leads the scientist to wring the rooster's
neck and when the sun rises nevertheless, the conjecture that it had something to do with the poor
dead cock is falsified and a new conjecture about the reason for the sunrise is entertained. However,
under this model nothing is ever proven and the purist adherent to this philosophy will argue that
causal inference has nothing to do with empirical science, but lies strictly in the domain of policy.

Causal Inference in Practice


Most clinical researchers rely on a combination of falsification and verification to make judgments
about whether observed associations are causal or spurious. Ruling out alternative explanations other
than cause, such as chance variation, confounding and other biases (selection or information bias), is an
important part of causal inference that relies on falsification of alternative hypotheses. Verification
of an association as causal is also more readily accepted if the evidence has the following attributes
(a small worked sketch follows the list):
• Statistical Significance and Power: If the criterion of statistical significance is satisfied then the
evidence is supportive. However, failure of a test of statistical significance does not detract from the
evidence unless accompanied by adequate statistical power.
• Time Order: The exposure must precede the outcome. This criterion is compatible with, but does not
necessarily support, causation. Reversal of the time order is the most decisive basis available for rejection
of causation.
• Strength of Association: The greater the strength of association, the more likely it is to be causal. A
strong association is highly supportive of causality, but a weak association does not necessarily detract
from the evidence of causality.
• Specificity of Cause: Limitations in the number of other known causes of the outcome are supportive.
However, the absence of specificity does not detract from a causal hypothesis. Contrary to popular belief
among scientists, specificity of effect is not a causal criterion.
• Consistency on Replication: Consistency, or lack of consistency, of the evidence in the face of study
diversity in time, place, circumstances and population, as well as research design, strongly supports or
detracts from a causal hypothesis.
• Predictive Performance: This criterion requires that a secondary hypothesis be drawn from the existence
of the causal relationship that predicts an otherwise unexplained fact or consequence. An example of
predictive performance is the observation of a reduced risk of outcome in a population which has ceased a
previous exposure. Corroborating evidence of this type is strongly supportive of causality; failure of
corroboration detracts from the evidence.
• Theoretical Coherence: Findings plausible in terms of pre-existing theory are affirming. For example,
variation in the active component of the exposure should be associated with variation in risk within the
same gross exposure level. Conversely, findings that are implausible in terms of pre-existing theory
detract from the evidence.
• Biological Coherence: Pre-existing knowledge which identifies a pathway or mechanism by which the
exposure may cause the outcome is affirming. Incoherence between biological knowledge and study
observations detracts from a causal hypothesis.
• Factual Coherence: Compatibility of a new result with pre-existing facts is affirming. For example, an
ecological correlation may have been previously described between levels of the drug exposure across
different populations and the corresponding rates of disease or injury. Incompatible pre-existing facts
strongly detract from evidence of causality.
• Statistical Coherence: A response proportional to exposure level (a dose-response relationship) is
strongly persuasive of a causal relation. However, its absence is not necessarily falsifying.
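
Example: Two of the criteria above, statistical association (with its significance) and strength of association, are routinely quantified from a two-by-two exposure-outcome table. The Python sketch below is purely illustrative; the counts are invented and are not drawn from any study cited in this course.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows are exposed / unexposed, columns are cases / non-cases.
# The counts are invented for illustration only.
table = np.array([[90, 910],    # exposed:   90 cases, 910 non-cases
                  [40, 960]])   # unexposed: 40 cases, 960 non-cases

# Statistical association and significance: chi-squared test of independence.
chi2, p_value, dof, expected = chi2_contingency(table)

# Strength of association: the odds ratio (cross-product ratio of the table).
(a, b), (c, d) = table
odds_ratio = (a * d) / (b * c)

print(f"chi-squared = {chi2:.1f}, p = {p_value:.2g}")
print(f"odds ratio  = {odds_ratio:.2f}")

A small p-value together with an odds ratio well above 1 would satisfy the statistical significance and strength-of-association criteria; the remaining criteria, such as time order and the various forms of coherence, cannot be read off a single table and still require the judgments described above.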

1.5 The Divide between Clinical Science and Clinical Policy

‘In every system of morality, which I have hitherto met with, I have always remark’d,
that the author proceeds for some time in the ordinary way of reasoning, …. when of a
sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is,
and is not, I meet with no proposition that is not connected with an ought, or an ought
not. …. For as this ought, or ought not, expresses some new relation or affirmation,
‘tis necessary that it shou’d be observed and explain’d; and at the same time that a
reason should be given, for what seems altogether inconceivable, how this new relation
can be a deduction from others, which are entirely different from it.’
This remark, known as Hume’s Guillotine, also taken from the writings of 18th Century British
philosopher, David Hume, draws the fundamental distinction between science and ethics, and the
hiatus in logic that exists between the indicative language of scientific discourse, which describes and
explains; and the prescriptive language of ethics, which proffers standards for acceptable behaviour
and public policy.
Merely because ‘early electroversion in people who suffer a sudden cardiac arrest due to ventricular
fibrillation is a cause of improved survival’ does not mean that ‘automated cardiac defibrillators
ought to be installed in the public interest in sports stadiums.’ The latter (ethical) statement cannot
be logically derived from the former (scientific) statement, unless one makes value judgments about
what constitutes the public interest in the context of people gathered in a sports stadium. This would
need to include considerations about the value of the survival benefit (beneficence); the desire not to
do harm if the equipment is used improperly (non-maleficence); the opportunity cost of alternative
uses of the public funds to benefit worthy social objectives (justice); and the degree to which the
stadium owners or the individuals have a duty and a right to take appropriate measures against the
risk of cardiac arrest (autonomy). The point is that ethical decisions cannot be deduced from mere
scientific facts.

References

Brandon WP. A large-scale social science experiment in health finance: findings, significance,
and value. J Health Politics, Policy and Law 1995; 20: 1051-61.
Hill AB. The environment and disease: association or causation? Proc Roy Soc Med 1965; 58:
295-300.
Hume D. Treatise of Human Nature. London: John Noon, 1739; revised and reprinted, Selby-
Bigge LA, ed, Oxford: Clarendon Press, 1985.
Popper KR. Conjectures and Refutations: The Growth of Scientific Knowledge. New York:
Harper Torchbooks, 1963.
Rothman KJ, ed. Causal Inference. Chestnut Hill: Epidemiology Resources, 1988.
Rothman KJ. Causes. Am J Epidemiol 1976; 104: 587-92.

A Short-Course with Professor D’Arcy Holman © Topic 1

