
Chemistry is the scientific study of the interactions of chemical substances, which are constituted of atoms or of the subatomic particles:[3] protons,
electrons and neutrons.[4] Atoms combine to produce molecules or crystals. Chemistry is sometimes called "the central science" because it connects the other natural sciences such as astronomy, physics, materials science, biology and geology.[5][6] The genesis of chemistry can be traced to certain practices, known as alchemy, which had been practiced for several millennia in various parts of the world, particularly the Middle East.[7] The structure of objects we commonly use and the properties of the matter we commonly interact with are a consequence of the properties of chemical substances and their interactions. For example, steel is harder than iron because its atoms are bound together in a more rigid crystalline lattice; wood burns or undergoes rapid oxidation because it can react spontaneously with oxygen in a chemical reaction above a certain temperature; sugar and salt dissolve in water because their molecular/ionic properties are such that dissolution is preferred under the ambient conditions. The transformations that are studied in chemistry are a result of interaction either between different chemical substances or between matter and energy. Traditional chemistry involves the study of interactions between substances in a chemistry laboratory using various forms of laboratory glassware.

Laboratory, Institute of Biochemistry, University of Cologne
A chemical reaction is a transformation of some substances into one or more other substances.[8] It can be symbolically depicted through a chemical equation. The number of atoms on the left and the right in the equation for a chemical transformation is most often equal. The nature of the chemical reactions a substance may undergo and the energy changes that may accompany them are constrained by certain basic rules, known as chemical laws. Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure and phase, as well as their chemical composition. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists.[9] Most chemists specialize in one or more sub-disciplines.
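The atom-conservation rule behind a chemical equation can be illustrated with a short sketch. This is an illustrative example, not a production formula parser; it handles only simple formulas such as H2, O2 and H2O, without parentheses, and the function names are assumptions:

```python
from collections import Counter
import re

def atom_counts(formula: str) -> Counter:
    """Count atoms in a simple formula like 'H2O' or 'CO2' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def is_balanced(reactants, products) -> bool:
    """True if both sides of the equation contain the same atoms.

    Each side is a list of (stoichiometric coefficient, formula) pairs.
    """
    left, right = Counter(), Counter()
    for coefficient, formula in reactants:
        for element, n in atom_counts(formula).items():
            left[element] += coefficient * n
    for coefficient, formula in products:
        for element, n in atom_counts(formula).items():
            right[element] += coefficient * n
    return left == right

# 2 H2 + O2 -> 2 H2O is balanced; H2 + O2 -> H2O is not.
print(is_balanced([(2, "H2"), (1, "O2")], [(2, "H2O")]))  # True
print(is_balanced([(1, "H2"), (1, "O2")], [(1, "H2O")]))  # False
```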

History

Main article: History of chemistry
See also: Alchemy, Timeline of chemistry, and Nobel Prize in Chemistry
Ancient Egyptians pioneered the art of synthetic "wet" chemistry up to 4,000 years ago.[10] By 1000 BC, ancient civilizations were using technologies that formed the basis of the various branches of chemistry, such as extracting metals from their ores, making pottery and glazes, fermenting beer and wine, making pigments for cosmetics and painting, extracting chemicals from plants for medicine and perfume, making cheese, dyeing cloth, tanning leather, rendering fat into soap, making glass, and making alloys like bronze.

Democritus' atomist philosophy was later adopted by Epicurus (341–270 BCE). The genesis of chemistry can be traced to the widely observed phenomenon of burning that led to metallurgy, the art and science of processing ores to get metals (e.g. metallurgy in ancient India). The greed for gold led to the discovery of the process for its purification, even though the underlying principles were not well understood; it was thought to be a transformation rather than a purification. Many scholars in those days thought it reasonable to believe that there exist means for transforming cheaper (base) metals into gold. This gave way to alchemy and the search for the Philosopher's Stone, which was believed to bring about such a transformation by mere touch.[11] Greek atomism dates back to 440 BC, as indicated by the book De Rerum Natura (The Nature of Things)[12] written by the Roman Lucretius in 50 BC.[13] Much of the early development of purification methods is described by Pliny the Elder in his Naturalis Historia. A tentative outline is as follows:
1. Egyptian alchemy [3000 BCE – 400 BCE], formulate early "element" theories such as the Ogdoad.
2. Greek alchemy [332 BCE – 642 CE], the Greek king Alexander the Great conquers Egypt and founds Alexandria, home to the world's largest library, where scholars and wise men gather to study.
3. Arab alchemy [642 CE – 1200], the Muslim conquest of Egypt; development of alchemy by Jābir ibn Hayyān, al-Razi and others; Jābir modifies Aristotle's theories; advances in processes and apparatus.[14]
4. European alchemy [1300 – present], Pseudo-Geber builds on Arabic chemistry.[citation needed] From the 12th century, major advances in the chemical arts shifted from Arab lands to western Europe.[14]
5. Chemistry [1661], Boyle writes his classic chemistry text The Sceptical Chymist.
6. Chemistry [1787], Lavoisier writes his classic Elements of Chemistry.
7. Chemistry [1803], Dalton publishes his Atomic Theory.
8. Chemistry [1869], Dmitri Mendeleev presents his periodic table, the framework of modern chemistry.

The earliest pioneers of chemistry, and inventors of the modern scientific method,[15] were medieval Arab and Persian scholars. They introduced precise observation and controlled experimentation into the field and discovered numerous chemical substances.[16][verification needed] "Chemistry as a science was almost created by the Muslims; for in this field, where the Greeks (so far as we know) were confined to industrial experience and vague hypothesis, the Saracens introduced precise observation, controlled experiment, and careful records. They invented and named the alembic (al-anbiq), chemically analyzed innumerable substances, composed lapidaries, distinguished alkalis and acids, investigated their affinities, studied and manufactured hundreds of drugs. Alchemy, which the Muslims inherited from Egypt, contributed to chemistry by a thousand incidental discoveries, and by its method, which was the most scientific of all medieval operations."
[16]

The most influential Muslim chemists were Jābir ibn Hayyān (Geber, d. 815), al-Kindi (d. 873), al-Razi (d. 925), al-Biruni (d. 1048) and Alhazen (d. 1039).[17] The works of Jābir became more widely known in Europe through Latin translations by a pseudo-Geber in 14th-century Spain, who also wrote some of his own books under the pen name "Geber". The contribution of Indian alchemists and metallurgists to the development of chemistry was also quite significant.[18] The emergence of chemistry in Europe was primarily due to the recurrent incidence of plague and blights there during the so-called Dark Ages.[citation needed] This gave rise to a need for medicines. It was thought that there exists a universal medicine called the Elixir of Life that can cure all diseases,[citation needed] but like the Philosopher's Stone, it was never found.

Antoine-Laurent de Lavoisier is considered the "Father of Modern Chemistry".[19]
For some practitioners, alchemy was an intellectual pursuit, and over time they got better at it. Paracelsus (1493–1541), for example, rejected the four-element theory and, with only a vague understanding of his chemicals and medicines, formed a hybrid of alchemy and science in what was to be called iatrochemistry. Similarly, the influences of philosophers such as Sir Francis Bacon (1561–1626) and René Descartes (1596–1650), who demanded more rigor in mathematics and in removing bias from scientific observations, led to a scientific revolution. In chemistry, this began with Robert Boyle (1627–1691), who came up with an equation known as Boyle's Law about the characteristics of the gaseous state.[20] Chemistry truly came of age when Antoine Lavoisier (1743–1794) developed the theory of conservation of mass in 1783, and when John Dalton developed the atomic theory around 1800. The Law of Conservation of Mass resulted in the reformulation of chemistry based on this law[citation needed] and the oxygen theory of combustion, which was largely based on the work of Lavoisier. Lavoisier's fundamental contributions to chemistry were a result of a conscious effort[citation needed] to fit all experiments into the framework of a single theory. He established the consistent use of the chemical balance, used oxygen to overthrow the phlogiston theory, developed a new system of chemical nomenclature, and made contributions to the modern metric system. Lavoisier also worked to translate the archaic and technical language of chemistry into something that could be easily understood by the largely uneducated masses, leading to an increased public interest in chemistry. All these advances in chemistry led to what is usually called the chemical revolution. The contributions of Lavoisier led to what is now called modern chemistry, the chemistry that is studied in educational institutions all over the world.
It is because of these and other contributions that Antoine Lavoisier is often celebrated as the "Father of Modern Chemistry".[21] The later discovery by Friedrich Wöhler that many natural substances, organic compounds, can indeed be synthesized in a chemistry laboratory also helped modern chemistry mature from its infancy.[22]

The discovery of the chemical elements has a long history, beginning in the days of alchemy and culminating in the periodic table of the chemical elements compiled by Dmitri Mendeleev (1834–1907)[23] and the later discoveries of some synthetic elements.

Etymology
Main article: Chemistry (etymology)
The word chemistry comes from the word alchemy, an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism and medicine; it is commonly thought of as the quest to turn lead or another common starting material into gold.[24] The word alchemy in turn is derived from the Arabic word al-kīmīā (الكيمياء), meaning alchemy. The Arabic term is borrowed from the Greek χημία or χημεία.[25][26] This may have Egyptian origins. Many believe that al-kīmīā is derived from χημία, which is in turn derived from the word Chemi or Kimi, the ancient Egyptian name of Egypt.[25] Alternately, al-kīmīā may be derived from χημεία, meaning "cast together".[27] An alchemist was called a 'chemist' in popular speech, and later the suffix "-ry" was added to this to describe the art of the chemist as "chemistry".

Definitions
The definition of chemistry has changed over time, as new discoveries and theories have added to the functionality of the science. Shown below are some of the standard definitions used by various noted chemists:

Alchemy (330) – the study of the composition of waters, movement, growth, embodying, disembodying, drawing the spirits from bodies and bonding the spirits within bodies (Zosimos).[28]
Chymistry (1661) – the subject of the material principles of mixed bodies (Boyle).[29]
Chymistry (1663) – a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection (Glaser).[30]
Chemistry (1730) – the art of resolving mixt, compound, or aggregate bodies into their principles; and of composing such bodies from those principles (Stahl).[31]
Chemistry (1837) – the science concerned with the laws and effects of molecular forces (Dumas).[32]
Chemistry (1947) – the science of substances: their structure, their properties, and the reactions that change them into other substances (Pauling).[33]
Chemistry (1998) – the study of matter and the changes it undergoes (Chang).[34]

Basic concepts
Several concepts are essential for the study of chemistry; some of them are:[35]

Atom
Main article: Atom
An atom is the basic unit of chemistry. It consists of a positively charged core (the atomic nucleus), which contains protons and neutrons, surrounded by a number of electrons that balance the positive charge of the nucleus. The atom is also the smallest entity that can be envisaged to retain some of the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent).

Element
Main article: Chemical element
The concept of chemical element is related to that of chemical substance. A chemical element is specifically a substance which is composed of a single type of atom. A chemical element is characterized by a particular number of protons in the nuclei of its atoms, known as the atomic number of the element. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, and all atoms with 92 protons in their nuclei are atoms of the element uranium. Ninety-four different chemical elements, or types of atoms based on the number of protons, exist naturally. A further 18 have been recognised by IUPAC as existing only artificially. Although all the nuclei of all atoms belonging to one element have the same number of protons, they may not have the same number of neutrons; such atoms are termed isotopes, and several isotopes of an element may exist. The most convenient presentation of the chemical elements is in the periodic table of the chemical elements, which groups elements by atomic number. Due to its ingenious arrangement, the groups (columns) and periods (rows) of elements in the table either share several chemical properties or follow a certain trend in characteristics such as atomic radius, electronegativity, etc. Lists of the elements by name, by symbol, and by atomic number are also available.
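The relationship between atomic number, neutron count and isotopes can be illustrated with a minimal sketch. The element table here is a tiny assumed excerpt, not a full periodic table:

```python
# Isotopes share an atomic number (protons, Z) but differ in neutrons (N).
ATOMIC_NUMBER = {"C": 6, "U": 92}  # small illustrative excerpt

def mass_number(element: str, neutrons: int) -> int:
    """Mass number A = protons (Z) + neutrons (N)."""
    return ATOMIC_NUMBER[element] + neutrons

print(mass_number("C", 6))    # carbon-12: 12
print(mass_number("C", 8))    # carbon-14: 14 (same element, different isotope)
print(mass_number("U", 146))  # uranium-238: 238
```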

Compound
Main article: Chemical compound
A compound is a substance with a particular ratio of atoms of particular chemical elements, which determines its composition, and a particular organization, which determines its chemical properties. For example, water is a compound containing hydrogen and oxygen in the ratio of two to one, with the oxygen atom between the two hydrogen atoms and an angle of 104.5° between them. Compounds are formed and interconverted by chemical reactions.

Substance
Main article: Chemical substance

A chemical substance is a kind of matter with a definite composition and set of properties.[36] Strictly speaking, a mixture of compounds, elements, or compounds and elements is not a chemical substance, but it may be called a chemical. Most of the substances we encounter in our daily life are some kind of mixture, for example air, alloys and biomass. Nomenclature of substances is a critical part of the language of chemistry. Generally it refers to a system for naming chemical compounds. Earlier in the history of chemistry, substances were given names by their discoverers, which often led to confusion and difficulty. Today, the IUPAC system of chemical nomenclature allows chemists to specify by name specific compounds amongst the vast variety of possible chemicals. The standard nomenclature of chemical substances is set by the International Union of Pure and Applied Chemistry (IUPAC). There are well-defined systems in place for naming chemical species. Organic compounds are named according to the organic nomenclature system.[37] Inorganic compounds are named according to the inorganic nomenclature system.[38] In addition, the Chemical Abstracts Service has devised a method to index chemical substances, in which each substance is identifiable by a number known as its CAS registry number.
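CAS registry numbers carry a built-in check digit: the last digit equals the weighted sum of the preceding digits (rightmost digit ×1, next ×2, and so on) modulo 10. A small validator sketch (the function name is an illustrative assumption):

```python
def cas_check_digit_valid(cas: str) -> bool:
    """Validate the check digit of a CAS registry number such as '7732-18-5'.

    The check digit equals the weighted sum of the other digits
    (rightmost x1, next x2, ...) modulo 10.
    """
    digits = cas.replace("-", "")
    body, check = digits[:-1], int(digits[-1])
    total = sum(int(d) * i for i, d in enumerate(reversed(body), start=1))
    return total % 10 == check

print(cas_check_digit_valid("7732-18-5"))  # water: True
print(cas_check_digit_valid("64-17-5"))    # ethanol: True
print(cas_check_digit_valid("7732-18-4"))  # wrong check digit: False
```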

Molecule
Main article: Molecule
A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. Unlike ions, molecules can exist as electrically neutral units. Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs.

A molecular structure depicts the bonds and relative positions of atoms in a molecule such as that in Paclitaxel shown here

Not all substances consist of discrete molecules. Most chemical elements are composed of lone atoms as their smallest discrete unit. Other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal, etc.), the structure of polyatomic molecules, constituted of more than six atoms (of several elements), can be crucial for their chemical nature.

Mole and amount of substance


Main article: Mole (unit)
The mole is a unit to measure amount of substance (also called chemical amount). A mole is the amount of a substance that contains as many elementary entities (atoms, molecules or ions) as there are atoms in 0.012 kilogram (or 12 grams) of carbon-12, where the carbon-12 atoms are unbound, at rest and in their ground state.[39] The number of entities per mole is known as the Avogadro constant, and is determined empirically. The currently accepted value is 6.02214179(30)×10²³ mol⁻¹ (2007 CODATA). One way to understand the meaning of the term "mole" is to compare and contrast it to terms such as dozen. Just as one dozen eggs contains 12 individual eggs, one mole contains 6.02214179(30)×10²³ atoms, molecules or other particles. The term is used because it is much easier to say, for example, 1 mole of carbon, than it is to say 6.02214179(30)×10²³ carbon atoms, and because moles of chemicals represent a scale that is easy to experience. The amount of substance of a solute per volume of solution is known as amount of substance concentration, or molarity for short. Molarity is the quantity most commonly used to express the concentration of a solution in the chemical laboratory. The most commonly used unit for molarity is mol/L (the official SI unit is mol/m³).
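The arithmetic of moles and molarity described above can be sketched as follows (illustrative helper functions, using the Avogadro value quoted in the text):

```python
AVOGADRO = 6.02214179e23  # mol^-1 (2007 CODATA value quoted above)

def particles(moles: float) -> float:
    """Number of elementary entities in a given amount of substance."""
    return moles * AVOGADRO

def molarity(moles_solute: float, litres_solution: float) -> float:
    """Amount-of-substance concentration in mol/L."""
    return moles_solute / litres_solution

print(particles(1.0))      # ~6.022e23 entities in one mole
print(molarity(0.5, 2.0))  # 0.5 mol of solute in 2 L: 0.25 mol/L
```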

Ions and salts


Main article: Ion
An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. Positively charged cations (e.g. the sodium cation Na⁺) and negatively charged anions (e.g. chloride Cl⁻) can form a crystalline lattice of a neutral salt (e.g. sodium chloride NaCl). Examples of polyatomic ions that do not split up during acid-base reactions are hydroxide (OH⁻) and phosphate (PO₄³⁻). Ions in the gaseous phase are often known as plasma.
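The way ions combine into neutral salts can be illustrated by checking that the charges in a formula unit sum to zero. A minimal sketch; the ion labels and the small charge table are assumptions chosen for illustration:

```python
# Charges of a few common ions (illustrative excerpt).
ION_CHARGE = {"Na+": +1, "Cl-": -1, "Ca2+": +2, "PO4_3-": -3, "OH-": -1}

def is_neutral(ions: dict) -> bool:
    """True if the ion counts add up to a neutral formula unit."""
    return sum(ION_CHARGE[ion] * n for ion, n in ions.items()) == 0

print(is_neutral({"Na+": 1, "Cl-": 1}))      # NaCl: True
print(is_neutral({"Ca2+": 3, "PO4_3-": 2}))  # Ca3(PO4)2: True
print(is_neutral({"Na+": 1, "PO4_3-": 1}))   # charges do not cancel: False
```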

Acidity and basicity

Main article: Acid–base reaction
A substance can often be classified as an acid or a base. There are several different theories which explain acid-base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid-base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion. A third common theory is Lewis acid-base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. According to the Lewis concept, the crucial things being exchanged are charges.[40][unreliable source?] There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept.[41] Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values.
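The pH scale described above is simply the negative base-10 logarithm of the hydronium ion concentration, which can be sketched as:

```python
import math

def pH(hydronium_molarity: float) -> float:
    """pH as the negative base-10 logarithm of [H3O+] in mol/L."""
    return -math.log10(hydronium_molarity)

print(pH(1e-7))  # neutral water: 7.0
print(pH(1e-3))  # acidic solution: 3.0 (lower pH means more acidic)
```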

Phase
Main article: Phase (matter)
In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties over a range of conditions, such as pressure or temperature. Physical properties, such as density and refractive index, tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point, and since this is invariant, it is a convenient way to define a set of conditions. The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta)
that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water). Less familiar phases include plasmas, Bose-Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology.

Redox
Main article: Redox
Redox (reduction–oxidation) is a concept related to the ability of atoms of various substances to lose or gain electrons. Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. A reductant transfers electrons to another substance and is thus oxidized itself; because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number; the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number.
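The oxidation-number definition of redox lends itself to a trivial sketch: a change is an oxidation if the oxidation number increases and a reduction if it decreases (the function name is an illustrative assumption):

```python
def classify_half_reaction(ox_before: int, ox_after: int) -> str:
    """Oxidation = increase in oxidation number; reduction = decrease."""
    if ox_after > ox_before:
        return "oxidation"
    if ox_after < ox_before:
        return "reduction"
    return "no redox change"

print(classify_half_reaction(+2, +3))  # Fe2+ -> Fe3+: oxidation
print(classify_half_reaction(0, -2))   # O (in O2) -> oxide O2-: reduction
```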

Bonding
Main article: Chemical bond

Electron atomic and molecular orbitals
Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them.[42] More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. A chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or simply a van der Waals force. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the
concept of oxidation number can be used to explain molecular structure and composition. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. See diagram on electronic orbitals.

Reaction
Main article: Chemical reaction
When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the 'reaction' of a substance when it comes in close contact with another, whether as a mixture or a solution; exposure to some form of energy; or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be a designed vessel, often laboratory glassware. Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more smaller molecules, or the rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid-base neutralization and molecular rearrangement are some of the common kinds of chemical reactions. A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles, viz. protons and neutrons.[43] The sequence of steps in which the reorganization of chemical bonds may take place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules, often come in handy while proposing a mechanism for a chemical reaction.
According to the IUPAC gold book, a chemical reaction is "a process that results in the interconversion of chemical species".[44] Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events').

Equilibrium
Main article: Chemical equilibrium

Although the concept of equilibrium is widely used across the sciences, in the context of chemistry it arises whenever a number of different states of the chemical composition are possible, for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase. A system of chemical substances at equilibrium, even though it has an unchanging composition, is most often not static; molecules of the substances continue to react with one another, thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which parameters such as chemical composition remain unchanged over time. Chemicals present in biological systems are invariably not at equilibrium; rather, they are far from equilibrium.
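Dynamic equilibrium can be illustrated with a toy simulation of a reversible reaction A ⇌ B: both the forward and backward reactions keep running, but the concentrations settle where the two rates are equal. The rate constants below are assumed, illustrative values:

```python
# A minimal sketch of dynamic equilibrium for A <-> B with assumed
# first-order rate constants kf (forward) and kb (backward).
kf, kb = 0.3, 0.1  # illustrative rate constants
A, B = 1.0, 0.0    # initial concentrations

for _ in range(200):  # simple Euler steps toward equilibrium (dt = 0.1)
    forward = kf * A
    backward = kb * B
    A += (backward - forward) * 0.1
    B += (forward - backward) * 0.1

# At equilibrium both reactions still run, but at equal rates,
# so the ratio B/A approaches the equilibrium constant kf/kb = 3.
print(round(B / A, 2))  # ~3.0
```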

Energy
Main article: Energy
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of the energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. Chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT), that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of heat, light, electricity or mechanical force in the form of ultrasound.[45] A related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is negative, ΔG < 0; if it is equal to zero, the chemical reaction is said to be at equilibrium.
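The Boltzmann population factor e^(−E/kT) mentioned above can be evaluated directly, showing how strongly the fraction of sufficiently energetic molecules grows with temperature (the activation energy used is an assumed, illustrative value):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(activation_energy_j: float, temperature_k: float) -> float:
    """Fraction of molecules with energy >= E at temperature T: e^(-E/kT)."""
    return math.exp(-activation_energy_j / (K_B * temperature_k))

E = 8e-20  # J per molecule (assumed, illustrative activation energy)
print(boltzmann_factor(E, 300.0))  # small at room temperature
print(boltzmann_factor(E, 600.0))  # orders of magnitude larger when heated
```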
There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited. The molecules/atoms of substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions. The phase of a substance is invariably determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the
surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid, as is the case with water (H2O), a liquid at room temperature because its molecules are bound by hydrogen bonds.[46] Hydrogen sulfide (H2S), by contrast, is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole-dipole interactions. The transfer of energy from one chemical substance to another depends on the size of the energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than the photons invoked for electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances than light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects, like stars and distant galaxies, by analyzing their radiation spectra.

Emission spectrum of iron

The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances.

Chemical laws
Main article: Chemical law

Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are:

Avogadro's law
Beer-Lambert law
Boyle's law (1662, relating pressure and volume)
Charles's law (1787, relating volume and temperature)
Fick's law of diffusion
Gay-Lussac's law (1809, relating pressure and temperature)
Le Chatelier's principle
Henry's law
Hess's law
Law of conservation of energy, which leads to the important concepts of equilibrium, thermodynamics, and kinetics.
Law of conservation of mass; according to modern physics it is actually energy that is conserved, and energy and mass are related, a concept which becomes important in nuclear chemistry.
Law of definite composition, although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction.
Law of multiple proportions
Raoult's law

Subdisciplines
Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry.[47]

Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry.

Biochemistry is the study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, as in medicinal chemistry or neurochemistry. Biochemistry is also associated with molecular biology and genetics.

Inorganic chemistry is the study of the properties and reactions of inorganic compounds. The distinction between the organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry.

Materials chemistry is the preparation, characterization, and understanding of substances with a useful function. The field is a new breadth of study in graduate programs, and it integrates elements from all classical areas of chemistry with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases.

Neurochemistry is the study of neurochemicals, including transmitters, peptides, proteins, lipids, sugars, and nucleic acids, their interactions, and the roles they play in forming, maintaining, and modifying the nervous system.

Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Transmutation is a large component of modern nuclear chemistry, and the table of nuclides is an important result and tool for this field.

Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton.

Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry.[48] Physical chemistry has a large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap.

Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular, the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has a large overlap with (theoretical and experimental) condensed matter physics and molecular physics.

Other fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, electrochemistry, environmental chemistry, femtochemistry, flavor chemistry, flow chemistry, geochemistry, green chemistry, histochemistry, history of chemistry, hydrogenation chemistry, immunochemistry, marine chemistry, materials science, mathematical chemistry, mechanochemistry, medicinal chemistry, molecular biology, molecular mechanics, nanotechnology, natural product chemistry, oenology, organometallic chemistry, petrochemistry, pharmacology, photochemistry, physical organic chemistry, phytochemistry, polymer chemistry, radiochemistry, solid-state chemistry, sonochemistry, supramolecular chemistry, surface chemistry, synthetic chemistry, thermochemistry, and many others.

Chemical industry
Main article: Chemical industry

The chemical industry represents an important economic activity. The global top 50 chemical producers in 2004 had sales of 587 billion US dollars with a profit margin of 8.1% and research and development spending of 2.1% of total chemical sales.[49]

Professional societies

American Chemical Society
American Society for Neurochemistry
Chemical Institute of Canada
Chemical Society of Peru
International Union of Pure and Applied Chemistry
Royal Australian Chemical Institute
Royal Netherlands Chemical Society
Royal Society of Chemistry
Society of Chemical Industry
World Association of Theoretical and Computational Chemists
List of chemistry societies

See also

Main article: Outline of chemistry


Common chemicals
International Year of Chemistry
List of chemists
List of compounds
List of important publications in chemistry

Standards

Denmark's national copy of the International Prototype Kilogram, delivered to Denmark in 1949 with an official mass of 1 kg + 81 μg.

With the exception of a few seemingly fundamental quantum constants, units of measurement are essentially arbitrary; in other words, people make them up and then agree to use them.

Nothing inherent in nature dictates that an inch has to be a certain length, or that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce. Today, units of measurement are generally defined on a scientific basis, overseen by governmental or supra-governmental agencies, and established in international treaties, preeminent of which is the General Conference on Weights and Measures (CGPM), established in 1875 by the Metre Convention, which oversees the International System of Units (SI) and has custody of the International Prototype Kilogram. The metre, for example, was redefined in 1983 by the CGPM as the distance traveled by light in free space in 1/299,792,458 of a second, while in 1960 the international yard was defined by the governments of the United States, United Kingdom, Australia and South Africa as being exactly 0.9144 metres. In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory (NPL), in Australia by the Commonwealth Scientific and Industrial Research Organisation, in South Africa by the Council for Scientific and Industrial Research, and in India by the National Physical Laboratory of India.

Units and systems


Main articles: Units of measurement and Systems of measurement

A baby bottle that measures in all three measurement systems, Imperial (U.K.), U.S. customary, and metric.

Imperial system


Main article: Imperial units

Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called foot-pound-second systems after the Imperial units for distance, weight and time, even though the tons, hundredweights, gallons, and nautical miles, for example, are different for the U.S. units. Many Imperial units remain in use in Britain despite the fact that it has officially switched to the SI system. Road signs are still in miles, yards, and miles per hour, people tend to measure their own height in feet and inches, and milk is sold in pints, to give just a few examples. Imperial units are used in many other places: for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated.

Metric system

Four measuring devices having metric calibrations

Main articles: Metric system and History of the metric system

The metric system is a decimal system of measurement based on its units for length, the metre, and for mass, the kilogram. It exists in several variations, with different choices of base units, though these do not affect its day-to-day use. Since the 1960s, the International System of Units (SI) has been the internationally recognized metric system. Metric units of mass, length, and electricity are widely used around the world for both everyday and scientific purposes. The metric system features a single base unit for many physical quantities. Other quantities are derived from the standard SI units. Multiples and fractions of the units are expressed as powers of ten of each unit. Unit conversions are always simple because they are in the ratio of ten, one hundred, one thousand, etc., so that convenient magnitudes for measurements are achieved by simply moving the decimal place: 1.234 metres is 1234 millimetres or 0.001234 kilometres. The use of fractions, such as 2/5 of a metre, is not prohibited, but uncommon. All lengths and distances, for example, are measured in metres, or thousandths of a metre (millimetres), or thousands of metres (kilometres). There is no profusion of different units with different conversion factors as in the Imperial system, which uses, for example, inches, feet, yards, fathoms, and rods.

International System of Units


Main article: International System of Units

The International System of Units (abbreviated as SI from the French name Système international d'unités) is the modern revision of the metric system. It is the world's most widely used system of units, both in everyday commerce and in science. The SI was developed in 1960 from the metre-kilogram-second (MKS) system, rather than the centimetre-gram-second (CGS) system, which, in turn, had many variants. At its development the SI also introduced several newly named units that were previously not a part of the metric system. The SI units for four basic physical quantities, length, time, mass, and temperature, are:

metre (m): SI unit of length
second (s): SI unit of time
kilogram (kg): SI unit of mass
kelvin (K): SI unit of temperature

There are two types of SI units, base units and derived units. Base units are the simple measurements for time, length, mass, temperature, amount of substance, electric current and light intensity. Derived units are constructed from the base units; for example, the watt, the unit for power, is defined from the base units as m²·kg·s⁻³. Other physical properties may be measured in compound units, such as material density, measured in kg/m³.
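The way derived units are composed from base-unit exponents can be illustrated with a short sketch. The exponent-dictionary representation below is a toy of our own, not part of SI itself:

```python
from collections import Counter

# Represent a unit as a mapping from base-unit symbol to its exponent,
# e.g. the watt (power) is m^2 * kg * s^-3 in SI base units.
def compose(*factors):
    """Multiply units together by adding their base-unit exponents."""
    total = Counter()
    for factor in factors:
        total.update(factor)  # Counter.update adds counts, including negatives
    return {sym: exp for sym, exp in total.items() if exp != 0}

kilogram, metre = {"kg": 1}, {"m": 1}

# joule = kg * m^2 / s^2 (energy); watt = joule / s (power)
joule = compose(kilogram, metre, metre, {"s": -2})
watt = compose(joule, {"s": -1})

print(watt)  # {'kg': 1, 'm': 2, 's': -3}
```

Dividing by a unit is the same as multiplying by it with a negated exponent, which is why the watt falls out of the joule by adding {"s": -1}.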

Converting prefixes


The SI allows easy multiplication when switching among units having the same base but different prefixes. To convert from metres to centimetres it is only necessary to multiply the number of metres by 100, since there are 100 centimetres in a metre. Inversely, to switch from centimetres to metres one multiplies the number of centimetres by 0.01, or divides it by 100.
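This rule generalizes to any pair of prefixes by subtracting their powers of ten. A minimal sketch, with a deliberately abbreviated prefix table:

```python
# Powers of ten for a few SI prefixes; "" stands for the unprefixed unit.
PREFIX_EXP = {"k": 3, "": 0, "c": -2, "m": -3}

def convert(value, from_prefix, to_prefix):
    """Re-express a value given with one prefix using another prefix."""
    return value * 10 ** (PREFIX_EXP[from_prefix] - PREFIX_EXP[to_prefix])

print(convert(2, "", "c"))    # 2 m -> 200 cm
print(convert(150, "c", ""))  # 150 cm -> 1.5 m
```

Because every step is a power of ten, the conversion is a single multiplication, exactly the "moving the decimal place" described above.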

Distance

A 2-metre carpenter's ruler

A ruler or rule is a tool used in, for example, geometry, technical drawing, engineering, and carpentry, to measure distances or to draw straight lines. Strictly speaking, the ruler is the instrument used to rule straight lines and the calibrated instrument used for determining length is called a measure, however common usage calls both instruments rulers and the special name straightedge is used for an unmarked rule. The use of the word measure, in the sense of a measuring instrument, only survives in the phrase tape measure, an instrument that can be used to measure but cannot be used to draw straight lines. As can be seen in the photographs on this page, a two-metre carpenter's rule can be folded down to a length of only 20 centimetres, to easily fit in a pocket, and a five-metre long tape measure easily retracts to fit within a small housing.

Some special names


Some multiples of units have special names.

100 kilograms = 1 quintal; 1000 kilograms = 1 metric tonne; 10 years = 1 decade; 100 years = 1 century; 1000 years = 1 millennium

Building trades


The Australian building trades adopted the metric system in 1966 and the units used for measurement of length are metres (m) and millimetres (mm). Centimetres (cm) are avoided as they cause confusion when reading plans. For example, the length two and a half metres is usually recorded as 2500 mm or 2.5 m; it would be considered non-standard to record this length as 250 cm.

Time
Main article: Time

Mass
Main article: Weighing scale

Mass refers to the intrinsic property of all material objects to resist changes in their momentum. Weight, on the other hand, refers to the downward force produced when a mass is in a gravitational field. In free fall (no net gravitational forces), objects lack weight but retain their mass. The Imperial units of mass include the ounce, pound, and ton. The metric units gram and kilogram are units of mass. One device for measuring weight or mass is called a weighing scale or, often, simply a scale. A spring scale measures force but not mass; a balance compares masses, but requires a gravitational field to operate. Some of the most accurate instruments for measuring weight or mass are based on load cells with a digital read-out, but they require a gravitational field to function and would not work in free fall.

Economics
Main article: Measurement in economics

The measures used in economics are physical measures, nominal price value measures and fixed price value measures. These measures differ from one another by the variables they measure and by the variables excluded from measurements. The measurable variables in economics are quantity, quality and distribution. Excluding variables from measurement makes it possible to focus the measurement better on a given variable, yet this also means a narrower approach.

Difficulties
Since accurate measurement is essential in many fields, and since all measurements are necessarily approximations, a great deal of effort must be taken to make measurements as accurate as possible. For example, consider the problem of measuring the time it takes an object to fall a distance of one metre (39 in). Using physics, it can be shown that, in the gravitational field of the Earth, it should take any object about 0.45 seconds to fall one metre. However, the following are just some of the sources of error that arise.

First, this computation used 9.8 metres per second per second (32.2 ft/s²) for the acceleration of gravity. But this measurement is not exact, only precise to two significant digits. Also, the Earth's gravitational field varies slightly depending on height above sea level and other factors. Next, the computation of 0.45 seconds involved extracting a square root, a mathematical operation that required rounding off to some number of significant digits, in this case two significant digits.

So far, we have only considered scientific sources of error. In actual practice, dropping an object from the height of a metre stick and using a stopwatch to time its fall, we have other sources of error. First, and most common, is simple carelessness. Then there is the problem of determining the exact time at which the object is released and the exact time it hits the ground. There is also the problem that the measurement of the height and the measurement of the time both involve some error. Finally, there is the problem of air resistance. Scientific experiments must be carried out with great care to eliminate as much error as possible, and to keep error estimates realistic.
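The 0.45-second figure quoted above follows from the constant-acceleration relation h = (1/2)·g·t². This sketch assumes the same two-significant-digit value of g and ignores air resistance:

```python
import math

g = 9.8  # acceleration due to gravity, m/s^2 (only two significant digits)
h = 1.0  # drop height in metres

# From h = (1/2) * g * t**2, solve for the fall time: t = sqrt(2h / g).
t = math.sqrt(2 * h / g)

print(round(t, 2))  # 0.45
```

Note that the computed value, about 0.4518 s, is itself rounded to two significant digits, which is one of the rounding steps the text mentions.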

Definitions and theories


Classical definition
In the classical definition, which is standard throughout the physical sciences, measurement is the determination or estimation of ratios of quantities.[2] Quantity and measurement are mutually defined: quantitative attributes are those possible to measure, at least in principle. The classical concept of quantity can be traced back to John Wallis and Isaac Newton, and was foreshadowed in Euclid's Elements.[3]

Representational theory


In the representational theory, measurement is defined as "the correlation of numbers with entities that are not numbers".[4] The most technically elaborate form of representational theory is also known as additive conjoint measurement. In this form of representational theory, numbers are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is quantitative if such structural similarities can be established. In weaker forms of representational theory, such as that implicit within the work of Stanley Smith Stevens,[5] numbers need only be assigned according to a rule. The concept of measurement is often misunderstood as merely the assignment of a value, but it is possible to assign a value in a way that is not a measurement in terms of the requirements of additive conjoint measurement. One may assign a value to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement according to additive conjoint measurement theory. Likewise, computing and assigning arbitrary values, like the "book value" of an asset in accounting, is not a measurement because it does not satisfy the necessary criteria.

Information theory


Information theory recognizes that all data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity."[6] This definition is implied in what scientists actually do when they measure something and report both the mean and statistics of the measurements. In practical terms, one begins with an initial guess as to the value of a quantity, and then, using various methods and instruments, reduces the uncertainty in the value. Note that in this view, unlike the positivist representational theory, all measurements are uncertain, so instead of assigning one value, a range of values is assigned to a measurement. This also implies that there is not a clear or neat distinction between estimation and measurement. Ascertaining the degree of measurement error is also a basic facet of metrology, and sources of error are divided into systematic and nonsystematic.
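On this view a result is reported as a mean together with an uncertainty rather than as a single number. A minimal sketch, with invented readings:

```python
import statistics

# Five repeated readings of the same quantity (invented example data).
readings = [9.78, 9.82, 9.80, 9.79, 9.81]

mean = statistics.mean(readings)
# Standard error of the mean: sample standard deviation / sqrt(n).
sem = statistics.stdev(readings) / len(readings) ** 0.5

print(f"{mean:.2f} +/- {sem:.3f}")  # 9.80 +/- 0.007
```

The reported range (mean plus or minus the standard error) narrows as more observations are added, which is exactly the "set of observations that reduce uncertainty" in the definition above.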

Quantum mechanics

In quantum mechanics, a measurement is the collapse of the wavefunction.[citation needed] The unambiguous meaning of the measurement problem is an unresolved fundamental problem in quantum mechanics.[citation needed]

Accuracy and precision



In the fields of science, engineering, industry and statistics, the accuracy[1] of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value. The precision[1] of a measurement system, also called reproducibility or repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.[2] Although the two words can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.

Accuracy indicates proximity of measurement results to the true value, precision to the repeatability or reproducibility of the measurement

A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The end result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is designated valid if it is both accurate and precise. Related terms include bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability). The terminology is also applied to indirect measurements, that is, values obtained by a computational procedure from observed data.

In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement. In the case of full reproducibility, such as when rounding a number to a representable floating point number, the word precision has a meaning not related to reproducibility. For example, in the IEEE 754-2008 standard it means the number of bits in the significand, so it is used as a measure for the relative accuracy with which an arbitrary number can be represented.


Accuracy versus precision: the target analogy

High accuracy, but low precision

High precision, but low accuracy

Accuracy is the degree of veracity, while in some contexts precision may mean the degree of reproducibility.[citation needed] The analogy used here to explain the difference between accuracy and precision is the target comparison. In this analogy, repeated measurements are compared to arrows that are shot at a target. Accuracy describes the closeness of arrows to the bullseye at the target center. Arrows that strike closer to the bullseye are considered more accurate. The closer a system's measurements are to the accepted value, the more accurate the system is considered to be. To continue the analogy, if a large number of arrows are shot, precision would be the size of the arrow cluster. (When only one arrow is shot, precision is the size of the cluster one would expect if this were repeated many times under the same conditions.) When all arrows are grouped tightly together, the cluster is considered precise since they all struck close to the same spot, even if not necessarily near the bullseye. The measurements are precise, though not necessarily accurate. However, it is not possible to reliably achieve accuracy in individual measurements without precision: if the arrows are not grouped close to one another, they cannot all be close to the bullseye. (Their average position might be an accurate estimation of the bullseye, but the individual arrows are inaccurate.) See also circular error probable for application of precision to the science of ballistics.

Quantifying accuracy and precision


Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the known value. The accuracy and precision of a measurement process are usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units and maintained by national standards organizations such as the National Institute of Standards and Technology. This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy we can distinguish:

the difference between the mean of the measurements and the reference value, the bias; establishing and correcting for bias is necessary for calibration.
the combined effect of that and precision.
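Bias and precision can be estimated from repeated measurements of a traceable standard. A sketch with invented readings and an invented certified value:

```python
import statistics

reference = 100.0  # certified value of the reference standard (invented)
readings = [100.3, 100.1, 100.4, 100.2, 100.3]  # repeated measurements (invented)

bias = statistics.mean(readings) - reference  # systematic offset from the true value
spread = statistics.stdev(readings)           # precision: scatter of repeated readings
sem = spread / len(readings) ** 0.5           # standard error of the averaged result

print(f"bias={bias:.2f}  precision={spread:.2f}  standard_error={sem:.3f}")
```

Here the process reads about 0.26 units high (a calibration correction could remove this), while the scatter of roughly 0.11 units, divided by the square root of the number of readings, gives the standard error of the average as the text describes.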

A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Here, when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of ±0.05 m (the last significant place is the tenths place), while a recording of 8,436 m would imply a margin of error of ±0.5 m (the last significant digits are the units). A reading of 8,000 m, with trailing zeroes and no decimal point, is ambiguous; the trailing zeroes may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 10³ m indicates that the first zero is significant (hence a margin of ±50 m) while 8.000 × 10³ m indicates that all three zeroes are significant, giving a margin of ±0.5 m. Similarly, it is possible to use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10³ m and indicates a margin of ±0.05 km (±50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it.

Looking at this in another way, a value of 8 would mean that the measurement has been made with a precision of 1 (the measuring instrument was able to measure only down to the ones place) whereas a value of 8.0 (though mathematically equal to 8) would mean that the value at the first decimal place was measured and was found to be zero. (The measuring instrument was able to measure the first decimal place.) The second value is more precise. Neither of the measured values may be accurate (the actual value could be 9.5 but measured inaccurately as 8 in both instances). Thus, accuracy can be said to be the 'correctness' of a measurement, while precision could be identified as the ability to resolve smaller differences. Precision is sometimes stratified into:
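The implied margin can be read off mechanically from the last written digit of a recorded value. This toy helper, our own illustration of the convention, handles only plain decimal strings (no scientific notation):

```python
def implied_margin(recorded: str) -> float:
    """Half the value of the last written decimal place."""
    if "." in recorded:
        decimals = len(recorded.split(".")[1])
        return 0.5 * 10 ** -decimals
    # No decimal point: treat the ones place as the last significant place.
    # (Trailing zeroes are ambiguous under this convention.)
    return 0.5

print(implied_margin("843.6"))  # 0.05
print(implied_margin("8436"))   # 0.5
```

The ambiguity of a value like "8000" is exactly why the helper cannot do better without scientific notation telling it which zeroes are significant.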

Repeatability: the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and
Reproducibility: the variation arising when using the same measurement process among different instruments and operators, and over longer time periods.

Accuracy and precision in binary classification


Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition.
                        Condition (as determined by gold standard)
                        True                False
Test      Positive      True positive       False positive      Positive predictive value
outcome   Negative      False negative      True negative       Negative predictive value
                        Sensitivity         Specificity         Accuracy

That is, the accuracy is the proportion of true results (both true positives and true negatives) in the population. It is a parameter of the test.

On the other hand, precision is defined as the proportion of the true positives against all the positive results (both true positives and false positives).

An accuracy of 100% means that the measured values are exactly the same as the given values. Also see Sensitivity and specificity. Accuracy may be determined from Sensitivity and Specificity, provided Prevalence is known, using the equation:
accuracy = (sensitivity)(prevalence) + (specificity)(1 − prevalence)
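This identity can be checked numerically against raw counts. The confusion-matrix numbers below are invented for illustration:

```python
# Invented counts for a binary test against a gold standard.
tp, fp, fn, tn = 40, 10, 5, 45
total = tp + fp + fn + tn

sensitivity = tp / (tp + fn)    # true positive rate
specificity = tn / (tn + fp)    # true negative rate
prevalence = (tp + fn) / total  # fraction of the population with the condition

accuracy = (tp + tn) / total
# The identity from the text, computed from sensitivity and specificity:
check = sensitivity * prevalence + specificity * (1 - prevalence)

precision = tp / (tp + fp)      # positive predictive value

print(accuracy, round(check, 10), precision)  # 0.85 0.85 0.8
```

The two routes to accuracy agree because sensitivity weights the diseased fraction of the population and specificity weights the healthy fraction.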

The accuracy paradox for predictive analytics states that predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. It may be better to avoid the accuracy metric in favor of other metrics such as precision and recall.[citation needed] In situations where the minority class is more important, F-measure may be more appropriate, especially in situations with very skewed class imbalance. An alternate performance measure that treats both classes with equal importance is "balanced accuracy", the mean of sensitivity and specificity:

balanced accuracy = (sensitivity + specificity) / 2

Accuracy and precision in psychometrics and psychophysics


In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test like Cronbach's alpha to ensure sets of related questions have related responses, and then comparison of those related questions between reference and target populations.[citation needed]

Accuracy and precision in logic simulation


In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.[3][4]

Accuracy and precision in information systems


The concepts of accuracy and precision have also been studied in the context of databases, information systems and their sociotechnical context. The necessary extension of these two concepts on the basis of the theory of science suggests that they (as well as data quality and information quality) should be centered on accuracy, defined as the closeness to the true value, seen as the degree of agreement of readings or of calculated values of one and the same conceived entity, measured or calculated by different methods, in the context of maximum possible disagreement.[5]

Scientific method


Scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.[1] To be termed scientific, a method of inquiry must be based on gathering empirical and measurable evidence subject to specific principles of reasoning.[2] The Oxford English Dictionary says that scientific method is: "a method of procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."[3] Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses via predictions which can be derived from them. These steps must be repeatable, to guard against mistake or confusion in any particular experimenter. Theories that encompass wider domains of inquiry may bind many independently derived hypotheses together in a coherent, supportive structure. Theories, in turn, may help form new hypotheses or place groups of hypotheses into context. Scientific inquiry is generally intended to be as objective as possible, to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.

Contents

1 Introduction to scientific method
  1.1 DNA example
2 Truth and belief
  2.1 Beliefs and biases
  2.2 Certainty and myth
3 Elements of scientific method
  3.1 Characterizations
  3.2 Hypothesis development
  3.3 Predictions from the hypothesis
  3.4 Experiments
  3.5 Evaluation and improvement
  3.6 Confirmation
4 Models of scientific inquiry
  4.1 Classical model
  4.2 Pragmatic model
  4.3 Computational approaches
5 Communication and community
  5.1 Peer review evaluation
  5.2 Documentation and replication
  5.3 Dimensions of practice
6 Philosophy and sociology of science
  6.1 Luck and science
7 History
8 Relationship with mathematics
9 See also
  9.1 Problems and issues
  9.2 History, philosophy, sociology
10 Notes
11 References
12 Further reading
13 External links

Introduction to scientific method


See also: History of scientific method and Timeline of the history of scientific method

Ibn al-Haytham (Alhazen), 965–1039, Basra.

"Modern science owes its origins and present flourishing state to a new scientific method which was fashioned almost entirely by Galileo Galilei (1564–1642)", wrote Morris Kline.[4]

Johannes Kepler (1571–1630). "Kepler shows his keen logical sense in detailing the whole process by which he finally arrived at the true orbit. This is the greatest piece of Retroductive reasoning ever performed." C. S. Peirce, circa 1896, on Kepler's reasoning through explanatory hypotheses[5]

Since Ibn al-Haytham (Alhazen, 965–1039), one of the key figures in the development of scientific method, the emphasis has been on seeking truth:

Truth is sought for its own sake. And those who are engaged upon the quest for anything for its own sake are not interested in other things. Finding the truth is difficult, and the road to it is rough.[6]

"Light travels through transparent bodies in straight lines only" (Alhazen, Book of Optics, 1021; Arabic: Kitāb al-Manāẓir), as shown in a Basle 1572 Latin translation, Friedrich Risner, ed., Opticae Thesaurus Alhazeni Arabis,[7] whose frontispiece shows optical phenomena: transmission of light through the atmosphere, reflection of light rays from parabolic mirrors during the defense of Syracuse by Archimedes against ships of the Roman Republic, refraction of light rays by water, and the production of colors in a rainbow. How does light travel through transparent bodies? Light travels through transparent bodies in straight lines only.... We have explained this exhaustively in our Book of Optics. But let us now mention something to prove this convincingly: the fact that light travels in straight lines is clearly observed in the lights which enter into dark rooms through holes.... [T]he entering light will be clearly observable in the dust which fills the air.[8]

The conjecture that "light travels through transparent bodies in straight lines only" was corroborated by Alhazen only after years of effort. His demonstration of the conjecture was to place a straight stick or a taut thread next to the light beam,[9] to prove that light travels in a straight line. Scientific methodology has been practiced in some form for at least one thousand years.[10] There are difficulties in a formulaic statement of method, however. As William Whewell (1794–1866) noted in his History of the Inductive Sciences (1837) and in The Philosophy of the Inductive Sciences (1840), "invention, sagacity, genius" are required at every step in scientific method. It is not enough to base scientific method on experience alone;[11] multiple steps are needed in scientific method, going back and forth between our experience and our imagination.

In the 20th century, a hypothetico-deductive model[12] for scientific method was formulated (for a more formal discussion, see below):

1. Use your experience: Consider the problem and try to make sense of it. Look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture: When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce a prediction from that explanation: If you assume 2 is true, what consequences follow?
4. Test: Look for the opposite of each consequence in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This error is called affirming the consequent.[13]

This model underlies the scientific revolution. One thousand years ago, Alhazen demonstrated the importance of steps 1 and 4.[14] Galileo (1638) also showed the importance of step 4 (also called Experiment) in Two New Sciences.[15] One possible sequence in this model would be 1, 2, 3, 4. If the outcome of 4 holds, and 3 is not yet disproven, you may continue with 3, 4, 1, and so forth; but if the outcome of 4 shows 3 to be false, you will have to go back to 2 and try to invent a new 2, deduce a new 3, look for 4, and so forth. Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2.[16] (This is what Einstein meant when he said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."[17]) However, as pointed out by Carl Hempel (1905–1997), this simple view of scientific method is incomplete; the formulation of the conjecture might itself be the result of inductive reasoning. Thus the likelihood of the prior observation being true is statistical in nature[18] and would strictly require a Bayesian analysis. To overcome this uncertainty, experimental scientists must formulate a crucial experiment,[19] in order for it to corroborate a more likely hypothesis. In the 20th century, Ludwik Fleck (1896–1961) and others argued that scientists need to consider their experiences more carefully, because their experience may be biased, and that they need to be more exact when describing their experiences.[20]

DNA example
Four basic elements of scientific method are illustrated below, by example from the discovery of the structure of DNA:

DNA-characterizations: in this case, although the identity of DNA as at least one and possibly the only genetic substance (in which genes were to be found) had been established by Avery at Rockefeller University in the 1940s, the mechanism was unclear to anyone in 1950.

DNA-hypotheses: Crick and Watson hypothesized that the gene had a physical basis, that it was helical.[21]

DNA-predictions: from earlier work on tobacco mosaic virus,[22] Watson was aware of the significance of Crick's formulation of the transform of a helix.[23] Thus he was primed for the significance of the X-shape in photo 51.

DNA-experiments: Watson saw photo 51.[24]

The examples are continued in "Evaluations and iterations" with DNA-iterations.[25]

Truth and belief


Main article: Truth

In the same way that the Muslim scholar Alhazen sought truth during his pioneering studies in optics 1000 years ago, arriving at the truth is the goal of a scientific inquiry.[26]

Beliefs and biases

Flying gallop falsified; see image below.

Belief can alter observation; human confirmation bias is a heuristic that leads a person with a particular belief to see things as reinforcing that belief, even if another observer might disagree. Researchers have often noted that first observations tend to be somewhat imprecise, whereas the second and third were "adjusted to the facts". Eventually, factors such as openness to experience, self-esteem, time, and comfort can produce a readiness for new perception.[27]

Eadweard Muybridge's studies of a horse galloping

Needham's Science and Civilisation in China uses the 'flying gallop' image as an example of observation bias:[28] in these images, the legs of a galloping horse are shown splayed, while the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false. In a horse's gallop, at the moment that no hoof touches the ground, a horse's legs are gathered together, not splayed. Earlier paintings show an incorrect flying-gallop observation. This illustrates Ludwik Fleck's suggestion that people be cautious lest they observe what is not so; people often observe what they expect to observe. Until shown otherwise, their beliefs affect their observations (and, therefore, any subsequent actions which depend on those observations, in a self-fulfilling prophecy). This is one of the reasons (mistake, confusion, and inadequate instruments are others) why scientific methodology directs that hypotheses be tested in controlled conditions which can be reproduced by others. The scientific community's pursuit of experimental control and reproducibility diminishes the effects of cognitive biases.

Certainty and myth


Any scientific theory is closely tied to empirical findings, and always remains subject to falsification if new experimental observations incompatible with it are found. That is, no theory can ever be considered certain, since new evidence falsifying it may be discovered. Most scientific theories do not result in large changes in human understanding. Improvements in theoretical scientific understanding are usually the result of a gradual synthesis of the results of different experiments, by various researchers, across different domains of science.[29] Theories vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In contrast to the always-provisional status of scientific theory, a myth can be believed and acted upon, or depended upon, irrespective of its truth.[30] Imre Lakatos has noted that once a narrative is constructed its elements become easier to believe (this is called the narrative fallacy).[31][32] That is, theories become accepted by a scientific community as evidence for the theory is presented, and as presumptions that are inconsistent with the evidence are falsified. The difference between a theory and a myth reflects a preference for a posteriori versus a priori knowledge.[citation needed]

Thomas Brody notes that confirmed theories are subject to subsumption by other theories, as special cases of a more general theory. For example, thousands of years of scientific observations of the planets were explained by Newton's laws. Thus the body of independent, unconnected, scientific observation can diminish.[33] Yet there is a preference in the scientific community for new, surprising statements, and the search for evidence that the new is true.[34] Goldhaber & Nieto (2010, p. 941) additionally state that "If many closely neighboring subjects are described by connecting theoretical concepts, then a theoretical structure acquires a robustness which makes it increasingly hard, though certainly never impossible, to overturn."

Elements of scientific method


There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
Four essential elements[35][36][37] of a scientific method[38] are iterations,[39][40] recursions,[41] interleavings, or orderings of the following:

Characterizations (observations,[42] definitions, and measurements of the subject of inquiry)
Hypotheses[43][44] (theoretical, hypothetical explanations of observations and measurements of the subject)[45]
Predictions (reasoning including logical deduction[46] from the hypothesis or theory)
Experiments[47] (tests of all of the above)

Each element of a scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). The elements above are often taught in the educational system as "the scientific method".[48] The scientific method is not a single recipe: it requires intelligence, imagination, and creativity.[49] In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast (all phenomena Newton could not have observed) are removed from Einstein's theories, Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase our confidence in Newton's work. A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:[50]

1. Define a question
2. Gather information and resources (observe)
3. Form an explanatory hypothesis
4. Perform an experiment and collect data, testing the hypothesis
5. Analyze the data
6. Interpret the data and draw conclusions that serve as a starting point for new hypotheses
7. Publish results
8. Retest (frequently done by other scientists)

The iterative cycle inherent in this step-by-step methodology goes from point 3 to point 6 and back to 3 again. While this schema outlines a typical hypothesis/testing method,[51] a number of philosophers, historians, and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways science is actually practiced. The "operational" paradigm combines the concepts of operational definition, instrumentalism, and utility: the essential elements of a scientific method are operations, observations, models, and a utility function for evaluating models.[52][not in citation given]

Operation - Some action done to the system being investigated
Observation - What happens when the operation is done to the system
Model - A fact, hypothesis, theory, or the phenomenon itself at a certain moment
Utility function - A measure of the usefulness of the model to explain, predict, and control, and of the cost of using it. One element of any scientific utility function is the refutability of the model. Another is its simplicity, based on the Principle of Parsimony, more commonly known as Occam's Razor.
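Such a utility function can be made concrete as a score that rewards fit and penalizes complexity, so that of two equally accurate models the simpler one is preferred, in the spirit of Occam's Razor. The sketch below is purely illustrative: the function name, the weighting, and the residuals are invented for the example, not drawn from any particular model-selection criterion.

```python
# Illustrative-only utility function: a lower score marks a more useful model.
# It trades off misfit (sum of squared residuals) against complexity
# (number of free parameters), echoing the Principle of Parsimony.

def utility_score(residuals, n_params, complexity_weight=2.0):
    """Score a model: misfit plus a penalty per free parameter (invented weighting)."""
    misfit = sum(r * r for r in residuals)
    return misfit + complexity_weight * n_params

# Two hypothetical models that fit the same data equally well:
simple_score  = utility_score(residuals=[0.1, -0.2, 0.1], n_params=2)
complex_score = utility_score(residuals=[0.1, -0.2, 0.1], n_params=7)

assert simple_score < complex_score   # equal fit: parsimony breaks the tie
```

Established model-selection criteria such as AIC take the same shape (a goodness-of-fit term plus a complexity penalty), though with principled rather than arbitrary weights.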

Characterizations
Scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting. The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and sciences, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, are performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
"I am not accustomed to saying anything with certainty after only one or two observations." Andreas Vesalius (1546)[53]

Uncertainty
Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
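The first approach mentioned above, estimating uncertainty from repeated measurements, can be sketched in a few lines. The readings below are invented for illustration; the standard error of the mean, s / sqrt(n), is one common way of reporting the uncertainty of a repeatedly measured quantity.

```python
# Minimal sketch: estimating a quantity and its uncertainty from repeated
# measurements. Readings are invented (e.g. measurements of g, in m/s^2).

import math
import statistics

readings = [9.79, 9.82, 9.81, 9.80, 9.83]

mean = statistics.mean(readings)           # best estimate of the quantity
s = statistics.stdev(readings)             # sample standard deviation
std_error = s / math.sqrt(len(readings))   # uncertainty of the mean

# The result would be reported as mean +/- standard error,
# here roughly 9.810 +/- 0.007 m/s^2.
```

Taking more readings shrinks the standard error as 1/sqrt(n), which is why repeated measurement is the workhorse of uncertainty estimation.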

Definition
Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France. The scientific definition of a term sometimes differs substantially from its natural-language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure, which can later be described in terms of conventional physical units when communicating the work. New theories are sometimes developed after realizing certain terms had not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood.[54] In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene before them.

DNA-characterizations

The history of the discovery of the structure of DNA is a classic example of the elements of scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle).[55] But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.[56]

Another example: precession of Mercury

Precession of the perihelion (exaggerated)

The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to fully record the motion of planet Earth. Newton was able to include those measurements into consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion (see diagram to the right), though it took quite some time to realize this. The observed difference for Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of General Relativity. His relativistic calculations matched observation much more closely than did Newtonian theory (the difference is approximately 43 arc-seconds per century).

Hypothesis development

Main article: Hypothesis formation

A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic. Scientists are free to use whatever resources they have (their own creativity, ideas from other fields, induction, Bayesian inference, and so on) to imagine possible explanations for a phenomenon under study. Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25), described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology. William Glen observes that
the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate bald suppositions and areas of vagueness.[57]

In general scientists tend to look for theories that are "elegant" or "beautiful". In contrast to the usual English use of these terms, they here refer to a theory in accordance with the known facts, which is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.

DNA-hypotheses

Linus Pauling proposed that DNA might be a triple helix.[58] This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong,[59] and that Pauling would soon admit his difficulties with that structure. So the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race; see the section on "DNA-predictions" below).

Predictions from the hypothesis


Main article: Prediction in science

Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities. It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis. If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. Thus, much scientifically based speculation might convince one (or many) that the hypothesis that other intelligent species exist is true, but since there is no experiment now known which can test this hypothesis, science itself can have little to say about the possibility. In the future, some new technique might lead to an experimental test, and the speculation would then become part of accepted science.

DNA-predictions

James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'.[60][61] This prediction followed from the work of Cochran, Crick and Vand[23] (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns. In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".[62]

Another example: general relativity

Einstein's prediction (1907): Light bends in a gravitational field

Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.[63]

Experiments
Main article: Experiment

Once predictions are made, they can be sought by experiments. If test results contradict the predictions, the hypotheses which made them are called into question and become less tenable. Sometimes experiments are conducted incorrectly and are not very useful. If the results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and so continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is.[64] Factor analysis is one technique for discovering the important factor in an effect. Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study, or an archaeological excavation. Even taking a plane from New York to Paris is an experiment which tests the aerodynamical hypotheses used for constructing the plane. Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. It will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Muslim scientists such as Jābir ibn Hayyān (721–815 CE), al-Battani (853–929), and Alhacen (965–1039).
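The logic of an experimental control, varying one condition while holding the rest fixed and contrasting the measurements, can be sketched briefly. The scenario and all numbers below are invented for illustration:

```python
# Hedged sketch of an experimental control: two groups identical in every
# respect except the one varied factor (watering). Data are invented.

from statistics import mean

# Same plants, same soil, same light; only watering differs.
control   = [12.1, 11.8, 12.4, 12.0]   # growth (cm), normal watering
treatment = [14.9, 15.3, 14.6, 15.0]   # growth (cm), extra watering

# The contrast between the groups estimates the effect of the varied factor.
effect = mean(treatment) - mean(control)

# Because every other condition was held constant, a nonzero contrast
# points at watering rather than at the factors shared by both groups.
```

In practice the contrast would be accompanied by an uncertainty estimate and a significance test, but the core idea is the controlled comparison itself.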

DNA-experiments

Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College: Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images, which showed an X-shape, and was able to confirm the structure was helical.[24][65] This rekindled Watson and Crick's model building and led to the correct structure.

Evaluation and improvement


The scientific process is iterative. At any stage it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject. Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.

DNA-iterations

After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts,[66][67][68] Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it.[25][69] They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images. ..DNA Example

Confirmation
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.[70] To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals including Nature and Science, have a policy that researchers must archive their data and methods so other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.

Models of scientific inquiry


Main article: Models of scientific inquiry

Classical model
The classical model of scientific inquiry derives from Aristotle,[71] who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.

Pragmatic model
See also: Pragmatic theory of truth

In 1877,[72] Charles Sanders Peirce (pronounced like "purse"; 1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, toward a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless.[73] He outlined four methods of settling opinion, ordered from least to most successful:
1. The method of tenacity (policy of sticking to initial belief), which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters, since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory.
2. The method of authority, which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past.
3. The method of congruity, or the a priori, or the dilettante, or "what is agreeable to reason", which promotes conformity less brutally but depends on taste and fashion in paradigms and can go in circles over time, along with barren disputation. It is more intellectual and respectable but, like the first two methods, sustains capricious and accidental beliefs, destining some minds to doubts.
4. The scientific method, the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself.

Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research,[74] which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry.[75] The scientific method excels the others by being deliberately designed to arrive eventually at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method.[72][76]

For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as the actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points.[77] In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus his theory of inquiry boils down to "Do the science."
Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of a definite increase of knowledge.[78] As inference, "logic is rooted in the social principle", since it depends on a standpoint that is, in a sense, unlimited.[79] Paying special attention to the generation of explanations, Peirce outlined the scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in items III–IV in "A Neglected Argument",[80] except as otherwise noted):

1. Abduction (or retroduction). Guessing: inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and, for example, at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference.
Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multigenerational) toward new truths.[81] Coordinative method leads from abducing a plausible hypothesis to judging it for its testability[82] and for how its trial would economize inquiry itself.[83] Peirce calls his pragmatism "the logic of abduction".[84] His pragmatic maxim is: "Consider what effects, that might conceivably have practical bearings, you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object".[77] His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects: a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, and incomplexity.[85] One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the "leap" of abduction and governs its art.[83]

2. Deduction. Two stages:
i. Explication. Unclearly premissed, but deductive, analysis of the hypothesis in order to render its parts as clear as possible.
ii. Demonstration: deductive argumentation, Euclidean in procedure. Explicit deduction of the hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, theorematic.

3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general[77]) that the real is only the object of the final opinion to which adequate investigation would lead;[86] anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages:
i. Classification. Unclearly premissed, but inductive, classing of objects of experience under general ideas.
ii. Probation: direct (and explicit) inductive argumentation. Crude (the enumeration of instances) or Gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual induction is Qualitative or Quantitative; if Qualitative, then dependent on weightings of qualities or characters;[87] if Quantitative, then dependent on measurements, or on statistics, or on countings.
iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".
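Purely as an illustration, the cycle Peirce describes can be sketched as a loop in code: abduce candidate hypotheses, deduce their predictions, and inductively estimate each hypothesis's "proportion of truth" against trials. Everything here, from the function names to the toy phenomenon, is invented for the sketch; it is not a formalization Peirce himself gave.

```python
# A sketch of the abduction-deduction-induction cycle (illustrative only).

def abduce(observations):
    """Abduction: guess candidate explanatory hypotheses (here, toy rules).
    A real inquiry would let the surprising observations guide the guesses."""
    return [lambda x: x * 2, lambda x: x + 3, lambda x: x ** 2]

def deduce(hypothesis, inputs):
    """Deduction: derive testable predictions from a hypothesis."""
    return [hypothesis(x) for x in inputs]

def induce(predictions, outcomes):
    """Induction: estimate the proportion of truth in the hypothesis from trials."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

# A "surprising phenomenon": inputs and observed outcomes (toy data for y = 2x).
inputs = [1, 2, 3, 4, 5]
outcomes = [2, 4, 6, 8, 10]

# Select the guess whose predictions best survive the trials.
best = max(abduce(outcomes), key=lambda h: induce(deduce(h, inputs), outcomes))
print(induce(deduce(best, inputs), outcomes))  # proportion of truth for the best guess
```

The loop mirrors the text only loosely: real abduction weighs plausibility and the economy of research, not just fit to past trials.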

Computational approaches
Many subspecialties of applied logic and computer science, such as artificial intelligence, machine learning, computational learning theory, inferential statistics, and knowledge representation, are concerned with setting out computational, logical, and statistical frameworks for the various types of inference involved in scientific inquiry. In particular, they contribute to hypothesis formation, logical deduction, and empirical testing. Some of these applications draw on measures of complexity from algorithmic information theory to guide the making of predictions from prior distributions of experience; for example, see the complexity measure called the speed prior, from which a computable strategy for optimal inductive reasoning can be derived.
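As a toy illustration of how a complexity measure can shape predictions from a prior, the sketch below weights hypotheses by a simplicity prior of 2^(-description length) and updates the weights on observed data. This is not the speed prior itself; the hypotheses and their bit counts are invented for the example.

```python
# Simplicity-weighted Bayesian updating over two toy hypotheses about a coin
# (illustrative only; description lengths are made up for the example).
from fractions import Fraction

# (name, P(heads), description length in bits)
hypotheses = [("fair", Fraction(1, 2), 1), ("biased", Fraction(3, 4), 3)]

def posterior(data):
    """Posterior over hypotheses after observing data, a string of 'H'/'T'."""
    weights = {}
    for name, p_heads, bits in hypotheses:
        prior = Fraction(1, 2 ** bits)  # shorter description -> larger prior
        likelihood = Fraction(1)
        for flip in data:
            likelihood *= p_heads if flip == "H" else 1 - p_heads
        weights[name] = prior * likelihood
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior("HHHH"))  # mostly-heads data shifts weight toward "biased"
```

The simplicity prior initially favors "fair", but a run of heads overcomes that handicap, which is the qualitative behavior such complexity-based priors are meant to capture.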

Communication and community


Frequently a scientific method is employed not only by a single person, but also by several people cooperating directly or indirectly. Such cooperation can be regarded as one of the defining elements of a scientific community. Various techniques have been developed to ensure the integrity of that scientific method within such an environment.

Peer review evaluation


Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may recommend publication, publication with suggested modifications, or rejection; sometimes they suggest publication in another journal. Peer review serves to keep the scientific literature free of unscientific or pseudoscientific work, to help cut down on obvious errors, and generally to improve the quality of the material. The process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.[88]

Documentation and replication


Main article: Reproducibility

Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from a scientific method (Pathological science) for various reasons, or, in rare cases, deliberately report false results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis.

Archiving
As a result, researchers are expected to practice scientific data archiving in compliance with the policies of government funding agencies and scientific journals. Detailed records of their experimental procedures, raw data, statistical analyses and source code are preserved in order to provide evidence of the effectiveness and integrity of the procedure and assist in reproduction. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery.

Data sharing
When additional information is needed before a study can be reproduced, the author of the study is expected to provide it promptly. If the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research.

Limitations
Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being 'theory-laden'.

Dimensions of practice
Further information: Rhetoric of science

The primary constraints on contemporary western science are:


Publication, i.e. peer review
Resources (mostly funding)

It has not always been like this: in the old days of the "gentleman scientist", funding (and to a lesser extent publication) were far weaker constraints. Both of these constraints indirectly bring in a scientific method: work that too obviously violates the constraints will be difficult to publish and difficult to get funded. Journals do not require submitted papers to conform to anything more specific than "good scientific practice", and this is mostly enforced by peer review. Originality, importance and interest are more important - see for example the author guidelines for Nature.

Philosophy and sociology of science


Main articles: Philosophy of science and Sociology of science

Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and at the ethic that is implicit in science. There are basic assumptions derived from philosophy that form the base of the scientific method - namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form the basis on which science is grounded.

Logical positivist, empiricist, falsificationist, and other theories have claimed to give a definitive account of the logic of science, but each has in turn been criticized. Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.

Norwood Russell Hanson, Imre Lakatos and Thomas Kuhn have done extensive work on the "theory-laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description.[89] He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn[90] and Feyerabend[91] acknowledge the pioneering significance of his work.

Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ...
no theory is recognized to be testable by any quantitative tests that it has not already passed".[92]

Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in a scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be 'anything goes'.[93] Criticisms such as his led to the strong programme, a radical approach to the sociology of science.

In his 1958 book, Personal Knowledge, chemist and philosopher Michael Polanyi (1891–1976) criticized the common view that the scientific method is purely objective and generates objective knowledge. Polanyi cast this view as a misunderstanding of the scientific method and of the nature of scientific inquiry generally. He argued that scientists do and must follow personal passions in appraising facts and in determining which scientific questions to investigate. He concluded that a structure of liberty is essential for the advancement of science - that the freedom to pursue science for its own sake is a prerequisite for the production of knowledge through peer review and the scientific method.

The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.[94]

Luck and science

Highly controlled experimentation allows researchers to catch their mistakes, but it also makes anomalies (which no one knew to look for) easier to see

Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky.[95] Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected.[95][96]

This is what professor of economics Nassim Nicholas Taleb calls "anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough - it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.[97]

Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their methodology. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.[95][96]

History

Main article: History of scientific method See also: Timeline of the history of scientific method

Aristotle, 384–322 BC. "As regards his method, Aristotle is recognized as the inventor of scientific method because of his refined analysis of logical implications contained in demonstrative discourse, which goes well beyond natural logic and does not owe anything to the ones who philosophized before him." (Riccardo Pozzo[98])

The development of the scientific method is inseparable from the history of science itself. Ancient Egyptian documents describe empirical methods in astronomy,[99] mathematics,[100] and medicine.[101] The ancient Greek philosopher Thales in the 6th century BC refused to accept supernatural, religious or mythological explanations for natural phenomena, proclaiming that every event had a natural cause. The development of deductive reasoning by Plato was an important step towards the scientific method. Empiricism seems to have been formalized by Aristotle, who believed that universal truths could be reached via induction.

There are hints of experimental methods from the Classical world (e.g., those reported by Archimedes in a report recovered early in the 20th century CE from an overwritten manuscript), but the first clear instances of an experimental scientific method seem to have been developed in the Arabic world, by Muslim scientists (see Alhazen),[102] who introduced the use of experimentation and quantification to distinguish between competing scientific theories set within a generally empirical orientation, perhaps by Alhazen in his optical experiments reported in his Book of Optics (1021).[103][unreliable source?]

The modern scientific method crystallized no later than the 17th and 18th centuries. In his work Novum Organum (1620), a reference to Aristotle's Organon, Francis Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism.[104] Then, in 1637, René Descartes established the framework for a scientific method's guiding principles in his treatise, Discourse on Method. The writings of Alhazen, Bacon and Descartes are considered critical in the historical development of the modern scientific method, as are those of John Stuart Mill.[105]

In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the development of current scientific method generally. Peirce accelerated the progress on several fronts. Firstly, speaking in broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both deduction and induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume, who wrote in the mid-to-late 18th century). Secondly, and of more direct importance to modern method, Peirce put forth the basic schema for hypothesis testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that, as discussed above in this article, play a role in inquiry today: the processes currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself; indeed, this was his primary specialty.

Beginning in the 1930s, Karl Popper argued that there is no such thing as inductive reasoning.[106] All inferences ever made, including in science, are purely[107] deductive according to this view. Accordingly, he claimed that the empirical character of science has nothing to do with induction but with the deductive property of falsifiability that scientific hypotheses have.
Contrasting his views with inductivism and positivism, he even denied the existence of the scientific method: "(1) There is no method of discovering a scientific theory; (2) There is no method for ascertaining the truth of a scientific hypothesis, i.e., no method of verification; (3) There is no method for ascertaining whether a hypothesis is 'probable', or probably true".[108] Instead, he held that there is only one universal method, a method not particular to science: the negative method of criticism, colloquially termed trial and error. It covers not only all products of the human mind, including science, mathematics, philosophy, art and so on, but also the evolution of life. Following Peirce and others, Popper argued that science is fallible and has no authority.[108] In contrast to empiricist-inductivist views, he welcomed metaphysics and philosophical discussion, and even gave qualified support to myths[109] and pseudosciences.[110] Popper's view has become known as critical rationalism.

Relationship with mathematics


Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines can clearly distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proven; at such a stage, that statement would be called a conjecture. But when a statement has attained mathematical proof, that statement gains a kind of immortality which is highly prized by mathematicians, and for which some mathematicians devote their lives.[111]

Mathematical work and scientific work can inspire each other.[112] For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proven using time as a mathematical concept in which objects can flow (see Ricci flow). Nevertheless, the connection between mathematics and reality (and so science, to the extent it describes reality) remains obscure. Eugene Wigner's paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the postmodernist view of science.

George Pólya's work on problem solving,[113] the construction of mathematical proofs, and heuristic[114][115] show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.
Mathematical method    Scientific method
1. Understanding       Characterization from experience and observation
2. Analysis            Hypothesis: a proposed explanation
3. Synthesis           Deduction: prediction from the hypothesis
4. Review/Extend       Test and experiment

In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus,[116] involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details[117] of the proof; review involves reconsidering and re-examining the result and the path taken to it. Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).[118] Imre Lakatos argued that mathematicians actually use contradiction, criticism and revision as principles for improving their work.[119]

Uses
In its ancient usage, hypothesis also refers to a summary of the plot of a classical drama.

In Plato's Meno (86e–87b), Socrates dissects virtue with a method used by mathematicians,[1] that of "investigating from a hypothesis."[2] In this sense, 'hypothesis' refers to a clever idea or to a convenient mathematical approach that simplifies cumbersome calculations.[3] Cardinal Bellarmine gave a famous example of this usage in the warning issued to Galileo in the early 17th century: that he must not treat the motion of the Earth as a reality, but merely as a hypothesis.[4]

In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. Sometimes, but not always, one can also formulate them as existential statements, stating that some particular instance of the phenomenon under examination has some characteristic and causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.

Any useful hypothesis will enable predictions by reasoning (including deductive reasoning). It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction may also invoke statistics and only talk about probabilities. Karl Popper, following others, has argued that a hypothesis must be falsifiable, and that one cannot regard a proposition or theory as scientific if it does not admit the possibility of being shown false.
Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability (e.g., verificationism) or coherence (e.g., confirmation holism). The scientific method involves experimentation on the basis of hypotheses, in order to answer questions and explore observations. In framing a hypothesis, the investigator must not currently know the outcome of a test, or the outcome must remain reasonably open under continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis. If the researcher already knows the outcome, it counts as a "consequence", and the researcher should already have considered this while formulating the hypothesis. If one cannot assess the predictions by observation or by experience, the hypothesis classes as not yet useful, and must wait for others who might come afterward to make possible the needed observations. For example, a new technology or theory might make the necessary experiments feasible.
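As a minimal sketch of a prediction that "only talks about probabilities": the hypothesis below, invented for illustration, asserts that a die shows a six with probability 1/6, and the test asks how surprising the observed counts would be if that were true. The hypothesis is falsifiable because sufficiently deviant counts would lead us to reject it.

```python
# Testing a probabilistic prediction with an exact binomial tail probability.
# The "die" hypothesis and the observed counts are invented for illustration.
from math import comb

def binomial_p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothesis: P(six) = 1/6. Observed (hypothetically): 30 sixes in 60 rolls.
p_value = binomial_p_at_least(30, 60, 1 / 6)
print(p_value < 0.05)  # True: such counts would count against the hypothesis
```

The prediction never says what any single roll will show; it constrains only the distribution of outcomes, which is exactly what makes a statistical test the appropriate check.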

Scientific hypothesis
People refer to a trial solution to a problem as a hypothesis, often called an "educated guess",[5] because it provides a suggested solution based on the evidence. Experimenters may test and reject several hypotheses before solving the problem. According to Schick and Vaughn,[6] researchers weighing up alternative hypotheses may take into consideration:

Testability (compare falsifiability as discussed above)

Simplicity (as in the application of "Occam's razor", discouraging the postulation of excessive numbers of entities)
Scope: the apparent application of the hypothesis to multiple cases of phenomena
Fruitfulness: the prospect that a hypothesis may explain further phenomena in the future
Conservatism: the degree of "fit" with existing recognized knowledge-systems.

Evaluating hypotheses


Karl Popper's formulation of the hypothetico-deductive method, which he called the method of "conjectures and refutations", demands falsifiable hypotheses, framed in such a manner that the scientific community can prove them false (usually by observation). According to this view, a hypothesis cannot be "confirmed", because there is always the possibility that a future experiment will show that it is false. Hence, failing to falsify a hypothesis does not prove that hypothesis: it remains provisional. However, a hypothesis that has been rigorously tested and not falsified can form a reasonable basis for action, i.e., we can act as if it were true, until such time as it is falsified. Just because we've never observed rain falling upward doesn't mean that we never will; however improbable, our theory of gravity may be falsified some day.

Popper's view is not the only view on evaluating hypotheses. For example, some forms of empiricism hold that under a well-crafted, well-controlled experiment, a lack of falsification does count as verification, since such an experiment ranges over the full scope of possibilities in the problem domain. Should we ever discover some place where gravity did not function, and rain fell upward, this would not falsify our current theory of gravity (which, on this view, has been verified by innumerable well-formed experiments in the past); it would rather suggest an expansion of our theory to encompass some new force or previously undiscovered interaction of forces. In other words, our initial theory as it stands is verified but incomplete. This situation illustrates the importance of having well-crafted, well-controlled experiments that range over the full scope of possibilities for applying the theory.

In recent years, philosophers of science have tried to integrate the various approaches to evaluating hypotheses, and the scientific method in general, to form a more complete system that integrates the individual concerns of each approach.
Notably, Imre Lakatos and Paul Feyerabend, colleague and student, respectively, of Popper, have produced novel attempts at such a synthesis.

Hypotheses, Concepts and Measurement


Concepts, as abstract units of meaning, play a key role in the development and testing of hypotheses. Concepts are the basic components of hypotheses; most formal hypotheses connect concepts by specifying the expected relationships between them. For example, a simple relational hypothesis such as "education increases income" specifies a positive relationship between the concepts education and income. This abstract or conceptual hypothesis cannot be tested directly. First, it must be operationalized, or situated in the real world by rules of interpretation.

Consider again the simple hypothesis "education increases income". To test the hypothesis, the abstract concepts education and income must be operationalized, i.e., made measurable. Education could be measured by years of school completed or highest degree completed; income could be measured by hourly rate of pay or yearly salary.

When a set of hypotheses are grouped together, they become a type of conceptual framework. When a conceptual framework is complex and incorporates causality or explanation, it is generally referred to as a theory. According to the noted philosopher of science Carl Gustav Hempel, "An adequate empirical interpretation turns a theoretical system into a testable theory: The hypotheses whose constituent terms have been interpreted become capable of test by reference to observable phenomena. Frequently the interpreted hypothesis will be derivative hypotheses of the theory; but their confirmation or disconfirmation by empirical data will then immediately strengthen or weaken also the primitive hypotheses from which they were derived."[7]

Hempel provides a useful metaphor that describes the relationship between a conceptual framework and the framework as it is observed and perhaps tested (the interpreted framework): "The whole system floats, as it were, above the plane of observation and is anchored to it by rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of those interpretative connections, the network can function as a scientific theory."[8] Hypotheses with concepts anchored in the plane of observation are ready to be tested. "In actual scientific practice the process of framing a theoretical structure and of interpreting it are not always sharply separated, since the intended interpretation usually guides the construction of the theoretician."[9] It is, however, possible and indeed desirable, for the purposes of logical clarification, to separate the two steps conceptually.[9]

Statistical hypothesis testing


Main article: Statistical hypothesis testing

When a possible correlation or similar relation between phenomena is investigated, such as whether a proposed remedy is effective in treating a disease (that is, at least to some extent and for some patients), the hypothesis that a relation exists cannot be examined the same way one might examine a proposed new law of nature: in such an investigation, a few cases in which the tested remedy shows no effect do not falsify the hypothesis. Instead, statistical tests are used to determine how likely it is that the overall effect would be observed if no real relation as hypothesized exists. If that likelihood is sufficiently small (e.g., less than 1%), the existence of a relation may be assumed. Otherwise, any observed effect may as well be due to pure chance.

In statistical hypothesis testing, two hypotheses are compared: the null hypothesis and the alternative hypothesis. The null hypothesis states that there is no relation between the phenomena whose relation is under investigation, or at least no relation of the form given by the alternative hypothesis. The alternative hypothesis, as the name suggests, is the alternative to the null hypothesis: it states that there is some kind of relation. The alternative hypothesis may take several forms, depending on the nature of the hypothesized relation; in particular, it can be two-sided (for example: there is some effect, in a yet unknown direction) or one-sided (the direction of the hypothesized relation, positive or negative, is fixed in advance).
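The remedy example above can be sketched concretely. The following Python snippet (an illustrative sketch, not taken from the article; the counts 36 of 50 are invented for illustration) computes an exact two-sided p-value for the null hypothesis that a remedy is no better than chance, using only the standard library:

```python
import math

def binomial_two_sided_p(successes, n, p0=0.5):
    """Exact two-sided p-value for H0: success probability equals p0.

    Sums the probabilities of all outcomes that are at least as
    improbable under H0 as the observed count (the exact binomial test).
    """
    def pmf(k):
        return math.comb(n, k) * p0**k * (1 - p0)**(n - k)

    observed = pmf(successes)
    # A tiny tolerance guards against floating-point ties.
    return sum(pmf(k) for k in range(n + 1) if pmf(k) <= observed + 1e-12)

# Hypothetical data: 36 of 50 patients improve under the remedy. Under
# the null hypothesis (no effect), each patient improves with p0 = 0.5.
p = binomial_two_sided_p(36, 50)
print(p)  # well below 0.01, so the null hypothesis would be rejected
```

If instead 25 of 50 patients had improved, the p-value would be close to 1, and the observed effect "may as well be due to pure chance", exactly as the text describes.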

Conventional significance levels for testing hypotheses are .10, .05, and .01. The criteria for rejecting the null hypothesis and accepting the alternative hypothesis must all be determined in advance, before the observations are collected or inspected. If these criteria are determined later, when the data to be tested are already known, the test is invalid.[10] Note that the above procedure also depends on the number of participants (units, or sample size) included in the study. For instance, the sample size may be too small to reject a null hypothesis; it is therefore recommended to specify the sample size from the beginning. It is also advisable to define a small, medium and large effect size for each of the important statistical tests that are used to test the hypotheses.
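The link between effect size and the required sample size can be sketched as follows. This is a minimal illustration, assuming the common normal-approximation formula for a two-sided two-sample z-test and Cohen's conventional effect sizes (small ≈ 0.2, medium ≈ 0.5, large ≈ 0.8); it is not a procedure prescribed by the article:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_quantile(p):
    """Invert the normal CDF by bisection (the stdlib has no inverse)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided two-sample z-test.

    Uses n = 2 * ((z_{alpha/2} + z_{power}) / d)^2, where d is the
    standardized effect size (Cohen's d).
    """
    z_alpha = z_quantile(1 - alpha / 2)
    z_beta = z_quantile(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(label, sample_size_per_group(d))
```

The output makes the text's point concrete: detecting a small effect at the .05 level demands a far larger sample than detecting a large one, which is why the sample size should be fixed at the outset rather than after the data are seen.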
