
Operations Research, Graph, Statistics (two pages per webpage)

http://en.wikipedia.org
OPERATIONS RESEARCH
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions.[1] It is often considered to be a sub-field of mathematics.[2] The terms management science and decision science are sometimes used as synonyms.[3] Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and because of its focus on practical applications, operations research has overlap with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science. Operations research is often concerned with determining the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost) of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.

Overview

Operational research (OR) encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, neural networks, expert systems, decision analysis, and the analytic hierarchy process.[5] Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power.
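The queueing-theory models mentioned above have simple closed forms in the textbook single-server case. The following is a minimal sketch, not drawn from any of the sources quoted here, of the standard M/M/1 steady-state formulas; the arrival and service rates are invented for illustration.

```python
# Minimal sketch: closed-form performance metrics for an M/M/1 queue
# (single server, Poisson arrivals, exponential service times).
# The rates used below are illustrative assumptions, not data from the text.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Return standard M/M/1 steady-state metrics; requires utilization < 1."""
    rho = arrival_rate / service_rate          # server utilization
    if rho >= 1:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    L = rho / (1 - rho)                        # mean number in the system
    W = 1 / (service_rate - arrival_rate)      # mean time in the system
    Lq = rho**2 / (1 - rho)                    # mean number waiting in queue
    Wq = rho / (service_rate - arrival_rate)   # mean waiting time in queue
    return {"utilization": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# Example: 8 customers/hour arrive, the server handles 10/hour.
print(mm1_metrics(8.0, 10.0))  # utilization 0.8, L = 4, W = 0.5 hours, ...
```

For these rates the server is busy 80% of the time yet an average of four customers are in the system, which is the kind of non-obvious conclusion such models exist to surface.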

The major sub-disciplines in modern operational research, as identified by the journal Operations Research, are:

Computing and information technologies
Environment, energy, and natural resources
Financial engineering
Manufacturing, service sciences, and supply chain management
Marketing Engineering[7]
Policy modeling and public sector work
Revenue management
Simulation
Stochastic models
Transportation

History

As a formal discipline, operational research originated in the efforts of military planners during World War II. In the decades after the war, the techniques began to be applied more widely to problems in business, industry and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize complex systems, and has become an area of active academic and industrial research.

Historical origins

In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control."[8] Other names for it included operational analysis (UK Ministry of Defence from 1962)[9] and quantitative management.[10] Prior to the formal start of the field, early work in operational research was carried out by individuals such as Charles Babbage. His research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge.[11] Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these methods to the social sciences.[12] The modern field of operational research arose during World War II. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 and was the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system, Chain Home (CH). Initially, he analysed the operation of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken.

http://www.scienceofbetter.org
OPERATIONS RESEARCH
What Operations Research Is
In a nutshell, operations research (O.R.) is the discipline of applying advanced analytical methods to help make better decisions. By using techniques such as mathematical modeling to analyze complex situations, operations research gives executives the power to make more effective decisions and build more productive systems based on:

More complete data
Consideration of all available options
Careful predictions of outcomes and estimates of risk
The latest decision tools and techniques

A uniquely powerful approach to decision making

You've probably seen dozens of articles and ads about solutions that claim to enhance your decision-making capabilities. O.R. is unique. It's best of breed, employing highly developed methods practiced by specially trained professionals. It's powerful, using advanced tools and technologies to provide analytical power that no ordinary software or spreadsheet can deliver out of the box. And it's tailored to you, because an O.R. professional offers you the ability to define your specific challenge in ways that make the most of your data and uncover your most beneficial options. To achieve these results, O.R. professionals draw upon the latest analytical technologies, including:

Simulation: giving you the ability to try out approaches and test ideas for improvement
Optimization: narrowing your choices to the very best when there are virtually innumerable feasible options and comparing them is difficult
Probability and statistics: helping you measure risk, mine data to find valuable connections and insights, test conclusions, and make reliable forecasts
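As a concrete illustration of the simulation item above, here is a minimal Monte Carlo sketch using only the Python standard library; the demand distribution, prices, and profit target are all invented assumptions, not figures from the source.

```python
# Minimal Monte Carlo sketch of the "simulation" idea above: estimate the
# risk that profit falls below a threshold under uncertain demand.
# All numbers (demand distribution, price, cost, target) are invented
# for illustration only.
import random

random.seed(42)

def shortfall_risk(n_trials: int = 100_000) -> float:
    """Fraction of simulated scenarios in which profit misses the target."""
    price, unit_cost, fixed_cost, target = 12.0, 7.0, 2_000.0, 500.0
    shortfalls = 0
    for _ in range(n_trials):
        demand = random.gauss(mu=600, sigma=150)   # uncertain demand draw
        units = max(0.0, demand)                   # cannot sell negative units
        profit = units * (price - unit_cost) - fixed_cost
        if profit < target:
            shortfalls += 1
    return shortfalls / n_trials

print(f"Estimated risk of missing the profit target: {shortfall_risk():.1%}")
```

Replacing the single demand draw with a full model of a production or service system is the same idea at larger scale.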

Problems addressed with operational research

Critical path analysis or project planning: identifying those processes in a complex project which affect the overall duration of the project
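To make this concrete, here is a minimal sketch of a critical-path computation on a small hypothetical task graph; the task names and durations are invented for illustration.

```python
# Minimal critical-path sketch: the project duration is the longest path
# through the task dependency graph. Tasks and durations are hypothetical.

duration = {"A": 3, "B": 2, "C": 4, "D": 2}                  # days per task
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

finish = {}  # memoized earliest finish time per task

def earliest_finish(task: str) -> int:
    """Earliest finish = own duration + latest finish among prerequisites."""
    if task not in finish:
        prereq = max((earliest_finish(p) for p in depends_on[task]), default=0)
        finish[task] = prereq + duration[task]
    return finish[task]

project_duration = max(earliest_finish(t) for t in duration)

# Walk back from the last-finishing task, always following the prerequisite
# whose finish time determined the schedule; that chain is the critical path.
path, task = [], max(duration, key=earliest_finish)
while True:
    path.append(task)
    preds = depends_on[task]
    if not preds:
        break
    task = max(preds, key=earliest_finish)

print("Project duration:", project_duration, "days")        # 9 days
print("Critical path:", " -> ".join(reversed(path)))        # A -> C -> D
```

Shortening any task on the critical path shortens the project; shortening task B here would not.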

Floorplanning: designing the layout of equipment in a factory or components on a computer chip to reduce manufacturing time (therefore reducing cost)
Network optimization: for instance, setup of telecommunications networks to maintain quality of service during outages
Allocation problems
Facility location
Assignment problems (a small solver sketch follows this sub-list):

Assignment problem
Generalized assignment problem
Quadratic assignment problem
Weapon target assignment problem
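As a concrete illustration of the basic assignment problem listed above, here is a minimal sketch using SciPy's linear_sum_assignment solver; it assumes SciPy and NumPy are available, and the cost matrix is invented.

```python
# Minimal sketch of the (linear) assignment problem: assign each worker
# to exactly one task at minimum total cost. Assumes SciPy is installed;
# the cost matrix is invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [4, 1, 3],   # worker 0's cost for tasks 0..2
    [2, 0, 5],   # worker 1
    [3, 2, 2],   # worker 2
])

rows, cols = linear_sum_assignment(cost)        # Hungarian-style solver
print(list(zip(rows.tolist(), cols.tolist())))  # optimal worker -> task pairs
print("Total cost:", cost[rows, cols].sum())    # 5 for this matrix
```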

Bayesian search theory: looking for a target
Optimal search
Routing, such as determining the routes of buses so that as few buses are needed as possible
Supply chain management: managing the flow of raw materials and products based on uncertain demand for the finished products
Efficient messaging and customer response tactics
Automation: automating or integrating robotic systems in human-driven operations processes
Globalization: globalizing operations processes in order to take advantage of cheaper materials, labor, land or other productivity inputs
Transportation: managing freight transportation and delivery systems (examples: LTL shipping, intermodal freight transport)
Scheduling:

Personnel staffing
Manufacturing steps
Project tasks
Network data traffic: these are known as queueing models or queueing systems
Sports events and their television coverage

Blending of raw materials in oil refineries (a small blending sketch follows this list)
Determining optimal prices, in many retail and B2B settings, within the disciplines of pricing science
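The blending item above is a classic linear program. Here is a minimal sketch, assuming SciPy is available, that blends two feedstocks at minimum cost subject to an octane specification; all prices and octane numbers are invented.

```python
# Minimal sketch of a refinery-style blending problem: mix two feedstocks
# at minimum cost while meeting an octane spec. Assumes SciPy is
# installed; all prices and octane numbers are invented.
from scipy.optimize import linprog

cost = [2.5, 3.1]                     # $/unit of feedstock 1 and 2
octane = [87, 94]                     # octane rating of each feedstock

res = linprog(
    c=cost,                           # minimize total blend cost
    A_ub=[[-octane[0], -octane[1]]],  # blend octane >= 90, flipped to <=
    b_ub=[-90],
    A_eq=[[1, 1]],                    # fractions must sum to one unit
    b_eq=[1],
    bounds=[(0, 1), (0, 1)],
)
print("Blend fractions:", res.x)      # about [0.571, 0.429]
print("Cost per unit:", res.fun)      # about 2.757
```

The solver uses just enough of the expensive high-octane stock to hit the specification, which is exactly the economic intuition.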

Operational research is also used extensively in government where evidence-based policy is used.

https://www.informs.org
OPERATIONS RESEARCH


Operations Research (O.R.), or operational research in the U.K., is a discipline that deals with the application of advanced analytical methods to help make better decisions. The terms management science and analytics are sometimes used as synonyms for operations research. Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Operations research overlaps with other disciplines, notably industrial engineering and operations management. It is often concerned with determining a maximum (such as profit, performance, or yield) or minimum (such as loss, risk, or cost). Operations research encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queuing theory, Markov decision processes, econometric methods, data analysis, statistics, neural networks, expert systems, and decision analysis. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, O.R. also has strong ties to computer science. Operations researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power. The major sub-disciplines in modern operations research, as identified by the INFORMS journal Operations Research, are:

Computing and information technologies
Environment, energy, and natural resources
Financial Engineering
Manufacturing, service science, and supply chain management
Marketing Science
Policy modeling and public sector work
Revenue management
Simulation
Stochastic models
Transportation

Related fields

Some of the fields that have considerable overlap with Operations Research and Management Science include:

Business Analytics
Data mining
Decision analysis
Engineering
Financial engineering
Forecasting
Game theory
Graph theory
Industrial engineering

Logistics
Mathematical modeling
Mathematical optimization
Probability and statistics
Project management
Policy analysis
Simulation
Social network/transportation forecasting models
Stochastic processes
Supply chain management

Applications of management science

Applications of management science are abundant in industries such as airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which management science has contributed insights and solutions is vast. It includes:[27]

scheduling airlines, including both planes and crew
deciding the appropriate place to site new facilities such as a warehouse, factory or fire station
managing the flow of water from reservoirs
identifying possible future development paths for parts of the telecommunications industry
establishing the information needs and appropriate systems to supply them within the health service
identifying and understanding the strategies adopted by companies for their information systems

Management science is also concerned with so-called soft operational analysis, which concerns methods for strategic planning, strategic decision support, and problem structuring methods (PSM). In dealing with these sorts of challenges, mathematical modeling and simulation alone are not appropriate or will not suffice. Therefore, during the past 30 years, a number of non-quantified modeling methods have been developed.

http://en.wikipedia.org
Graph

(Figure: a drawing of a labeled graph on 6 vertices and 7 edges.)

In mathematics, and more specifically in graph theory, a graph is a representation of a set of objects where some pairs of objects are connected by links. The interconnected objects are represented by mathematical abstractions called vertices, and the links that connect some pairs of vertices are called edges.[1] Typically, a graph is depicted in diagrammatic form as a set of dots for the vertices, joined by lines or curves for the edges. Graphs are one of the objects of study in discrete mathematics. The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this is an undirected graph, because if person A shook hands with person B, then person B also shook hands with person A. In contrast, if there is an edge from person A to person B when person A knows of person B, then this graph is directed, because knowledge of someone is not necessarily a symmetric relation (that is, one person knowing another person does not necessarily imply the reverse; for example, many fans may know of a celebrity, but the celebrity is unlikely to know of all their fans). This latter type of graph is called a directed graph and the edges are called directed edges or arcs. Vertices are also called nodes or points, and edges are also called arcs or lines. Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878.[2][3]

Definitions

Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.

Graph

In the most common sense of the term,[4] a graph is an ordered pair G = (V, E) comprising a set V of vertices or nodes together with a set E of edges or lines, which are 2-element subsets of V (i.e., an edge is related with two vertices, and the relation is represented as an unordered pair of the vertices with respect to the particular edge). To avoid ambiguity, this type of graph may be described precisely as undirected and simple. Other senses of graph stem from different conceptions of the edge set. In one more generalized notion,[5] E is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of (not necessarily distinct) vertices. Many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends, endpoints, or end vertices of the edge. A vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, the number of vertices. A graph's size is |E|, the number of edges. The degree of a vertex is the number of edges that connect to it, where an edge that connects to the vertex at both ends (a loop) is counted twice. For an edge {u, v}, graph theorists usually use the somewhat shorter notation uv.

Adjacency relation

The edges E of an undirected graph G induce a symmetric binary relation ~ on V that is called the adjacency relation of G. Specifically, for each edge {u, v} the vertices u and v are said to be adjacent to one another, which is denoted u ~ v.

Types of graphs

Distinction in terms of the main definition

As stated above, in different contexts it may be useful to define the term graph with different degrees of generality. Whenever it is necessary to draw a strict distinction, the following terms are used. Most commonly, in modern texts in graph theory, unless stated otherwise, graph means "undirected simple finite graph" (see the definitions below).

(Figure: a simple undirected graph with three vertices and three edges; each vertex has degree two, so this is also a regular graph.)

Undirected graph

An undirected graph is one in which edges have no orientation. The edge (a, b) is identical to the edge (b, a), i.e., they are not ordered pairs, but sets {a, b} (or 2-multisets) of vertices. The maximum number of edges in an undirected graph without a self-loop is n(n - 1)/2.
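The definitions above (order, size, degree, and the adjacency relation) translate directly into code. Here is a minimal Python sketch of an undirected simple graph, storing edges as unordered frozenset pairs so that {u, v} and {v, u} coincide; the class name and API are our own, for illustration only.

```python
# Minimal sketch of the undirected simple graph defined above, stored as
# a set of vertices plus a set of frozenset edges {u, v}.

class SimpleGraph:
    def __init__(self):
        self.V = set()   # vertices
        self.E = set()   # edges as frozenset({u, v}), so {u,v} == {v,u}

    def add_edge(self, u, v):
        if u == v:
            raise ValueError("simple graphs have no loops")
        self.V |= {u, v}
        self.E.add(frozenset((u, v)))

    def order(self):                 # |V|, the number of vertices
        return len(self.V)

    def size(self):                  # |E|, the number of edges
        return len(self.E)

    def degree(self, v):             # edges incident to v (no loops here)
        return sum(1 for e in self.E if v in e)

    def adjacent(self, u, v):        # the adjacency relation u ~ v
        return frozenset((u, v)) in self.E

g = SimpleGraph()
for u, v in [("a", "b"), ("b", "c"), ("a", "c")]:
    g.add_edge(u, v)
print(g.order(), g.size(), g.degree("a"))   # 3 3 2
n = g.order()
print(g.size() <= n * (n - 1) // 2)         # max-edges bound holds: True
```

Storing each edge as a frozenset is one simple way to make the "unordered pair" part of the definition hold by construction.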

http://nces.ed.gov
Graph
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory.[6] This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy[7] and L'Huillier,[8] and is at the origin of topology. More than one century after Euler's paper on the bridges of Königsberg, and while Listing introduced topology, Cayley was led by the study of particular analytical forms arising from differential calculus to study a particular class of graphs, the trees.[9] This study had many implications in theoretical chemistry. The involved techniques mainly concerned the enumeration of graphs having particular properties. Enumerative graph theory then rose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937 and the generalization of these by De Bruijn in 1959. Cayley linked his results on trees with the contemporary studies of chemical composition.[10] The fusion of the ideas coming from mathematics with those coming from chemistry is at the origin of a part of the standard terminology of graph theory. In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:[11] "[...] Every invariant and co-variant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. [...] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. [...]" (italics as in the original). The first textbook on graph theory was written by Dénes Kőnig and published in 1936.[12] Another book by Frank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject",[13] and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize.[14]

One of the most famous and stimulating problems in graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of the graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations, and more specifically the results obtained by Turán in 1941, were at the origin of another branch of graph theory, extremal graph theory. The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers.[15] A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch.[16][17] The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.[18] The autonomous development of topology from 1860 to 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits. The introduction of probabilistic methods in graph theory, especially in the study by Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results. Graphs are represented graphically by drawing a dot or circle for every vertex, and drawing an arc between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow.

http://www.padowan.dk/
Graph
Two edges of a graph are called adjacent if they share a common vertex. Two arrows of a directed graph are called consecutive if the head of the first one is at the nock (notch end) of the second one. Similarly, two vertices are called adjacent if they share a common edge (consecutive if they are at the notch and at the head of an arrow), in which case the common edge is said to join the two vertices. An edge and a vertex on that edge are called incident. The graph with only one vertex and no edges is called the trivial graph. A graph with only vertices and no edges is known as an edgeless graph. The graph with no vertices and no edges is sometimes called the null graph or empty graph, but the terminology is not consistent and not all mathematicians allow this object. In a weighted graph or digraph, each edge is associated with some value, variously called its cost, weight, length or other term depending on the application; such graphs arise in many contexts, for example in optimal routing problems such as the traveling salesman problem. Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled. However, for many questions it is better to treat vertices as indistinguishable; then the graph may be called unlabeled. (Of course, the vertices may still be distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are called edge-labeled graphs. Graphs with labels attached to edges or vertices are more generally designated as labeled. Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled. (Note that in the literature the term labeled may apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.) Basic examples are:

In a complete graph, each pair of vertices is joined by an edge; that is, the graph contains all possible edges.
In a bipartite graph, the vertex set can be partitioned into two sets, W and X, so that no two vertices in W are adjacent and no two vertices in X are adjacent. Alternatively, it is a graph with a chromatic number of 2.
In a complete bipartite graph, the vertex set is the union of two disjoint sets, W and X, so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X.
In a linear graph or path graph of length n, the vertices can be listed in order, v0, v1, ..., vn, so that the edges are vi-1vi for each i = 1, 2, ..., n. If a linear graph occurs as a subgraph of another graph, it is a path in that graph.

In a cycle graph of length n >= 3, vertices can be named v1, ..., vn so that the edges are vi-1vi for each i = 2, ..., n, in addition to vnv1. Cycle graphs can be characterized as connected 2-regular graphs. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph.
A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect (i.e., embedded in a plane).
A tree is a connected graph with no cycles.
A forest is a graph with no cycles (i.e., the disjoint union of one or more trees).
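The path and cycle definitions above can be checked mechanically. Here is a minimal self-contained sketch that builds the edge sets of a path and a cycle and verifies the 2-regular characterization of cycles; the helper names are our own.

```python
# Minimal constructors for the path and cycle graphs described above,
# with edges stored as frozenset pairs over integer vertices.

def path_edges(n):
    """Edges of the path v0 - v1 - ... - vn: v(i-1)v(i) for i = 1..n."""
    return {frozenset((i - 1, i)) for i in range(1, n + 1)}

def cycle_edges(n):
    """Edges of the cycle on vertices 0..n-1 (n >= 3), closing with v(n-1)v0."""
    if n < 3:
        raise ValueError("cycle graphs need n >= 3")
    return {frozenset(((i - 1) % n, i % n)) for i in range(n)}

def degree(v, edges):
    return sum(1 for e in edges if v in e)

c5 = cycle_edges(5)
print(len(c5))                                     # 5 edges in a 5-cycle
print(all(degree(v, c5) == 2 for v in range(5)))   # cycles are 2-regular: True
```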

More advanced kinds of graphs are:


The Petersen graph and its generalizations
Perfect graphs
Cographs
Chordal graphs
Other graphs with large automorphism groups: vertex-transitive, arc-transitive, and distance-transitive graphs
Strongly regular graphs and their generalization, distance-regular graphs

There are several operations that produce new graphs from old ones, which might be classified into the following categories:

Elementary operations, sometimes called "editing operations" on graphs, which create a new graph from the original one by a simple, local change, such as addition or deletion of a vertex or an edge, merging and splitting of vertices, etc.
Graph rewrite operations, replacing the occurrence of some pattern graph within the host graph by an instance of the corresponding replacement graph.
Unary operations, which create a significantly new graph from the old one (the complement operation is sketched after the list below). Examples:

Line graph
Dual graph
Complement graph
Disjoint union of graphs
Cartesian product of graphs
Tensor product of graphs
Strong product of graphs
Lexicographic product of graphs
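As one worked example from the list above, here is a minimal sketch of the complement operation: the complement's edges are exactly the vertex pairs that are not edges of the original graph. The function name and test graph are invented for illustration.

```python
# Minimal sketch of one unary operation listed above: the complement
# graph, whose edge set is all vertex pairs NOT present in the original.
from itertools import combinations

def complement(vertices, edges):
    """vertices: iterable of vertices; edges: set of frozenset({u, v})."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - set(edges)

V = {1, 2, 3, 4}
E = {frozenset((1, 2)), frozenset((2, 3))}
print(complement(V, E))   # the four remaining pairs: {1,3}, {1,4}, {2,4}, {3,4}
```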

Binary operations, which create a new graph from two initial graphs.

http://en.wikipedia.org/wiki/Statistics

Statistics
Statistics is the study of the collection, organization, analysis, interpretation and presentation of data.[1] It deals with all aspects of data including the planning of data collection in terms of the design of surveys and experiments.[1]

(Figure: more probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown; the scales include standard deviations, cumulative percentages, percentile equivalents, z-scores, T-scores, standard nines, and percentages in standard nines.)
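The caption above mentions z-scores and cumulative percentages. Here is a minimal sketch that converts a raw score to a z-score and then to a cumulative percentage using the exact normal CDF via math.erf; the test mean and standard deviation are invented.

```python
# Minimal sketch of the z-scores and cumulative percentages mentioned in
# the caption above. The test scale (mean 500, sd 100) is hypothetical.
import math

def z_score(x, mu, sigma):
    """Standardize a raw score: how many standard deviations above the mean."""
    return (x - mu) / sigma

def normal_cdf(z):
    """P(Z <= z) for a standard normal variable, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 500.0, 100.0
score = 650.0
z = z_score(score, mu, sigma)
print(f"z = {z:.2f}, cumulative percentage = {normal_cdf(z):.1%}")
# z = 1.50, cumulative percentage = 93.3%
```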

Scope
Statistics is described as a mathematical body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data,[2] or as a branch of mathematics[3] concerned with collecting and interpreting data. Because of its empirical roots and its focus on applications, statistics is typically considered a distinct mathematical science rather than a branch of mathematics.[4][5] Some tasks a statistician may undertake are less mathematical; for example, ensuring that data collection is undertaken in a way that produces valid conclusions, coding data, or reporting results in ways comprehensible to those who must use them. Statisticians improve data quality by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through the use of data and statistical models. Statistics is applicable to a wide variety of academic disciplines, including natural and social sciences, government, and business. Statistical consultants can help organizations and companies that don't have in-house expertise relevant to their particular questions. Statistical methods can summarize or describe a collection of data; this is called descriptive statistics, and it is particularly useful in communicating the results of experiments and research. In addition, data patterns may be modeled in a way that accounts for randomness and uncertainty in the observations. These models can be used to draw inferences about the process or population under study, a practice called inferential statistics. Inference is a vital element of scientific advance, since it provides a way to draw conclusions from data that are subject to random variation. To prove the propositions being investigated further, the conclusions are tested as well, as part of the scientific method. Descriptive statistics and analysis of the new data tend to provide more information as to the truth of the proposition.

"Applied statistics" comprises descriptive statistics and the application of inferential statistics.[6][verification needed] Theoretical statistics concerns both the logical arguments underlying justification of approaches to statistical inference, as well encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments. Statistics is closely related to probability theory, with which it is often grouped. The difference is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite directioninductively inferring from samples to the parameters of a larger or total population. Statistics has many ties to machine learning and data mining.

History
Statistical methods date back at least to the 5th century BC. Some scholars pinpoint the origin of statistics to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt.[7] Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences. Its mathematical foundations were laid in the 17th century with the development of probability theory by Blaise Pascal and Pierre de Fermat. Mathematical probability theory arose from the study of games of chance, although the concept of probability was already examined in medieval law and by philosophers such as Juan Caramuel.[8] The method of least squares was first described by Adrien-Marie Legendre in 1805.
(Figure: Karl Pearson, the founder of mathematical statistics.)

The modern field of statistics emerged in the late 19th and early 20th century in three stages.[9] The first wave, at the turn of the century, was led by the work of Sir Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions to the field included introducing the concepts of standard deviation, correlation, and regression, and the application of these methods to the study of the variety of human characteristics (height, weight, and eyelash length, among others).[10] Pearson developed the correlation coefficient, defined as a product-moment,[11] the method of moments for the fitting of distributions to samples, and the Pearson system of continuous curves, among many other things.[12] Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biometry, and the latter founded the world's first university statistics department at University College London.
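Two of the methods named in this history, Pearson's product-moment correlation and Legendre's least squares, are short enough to compute from scratch. Here is a minimal sketch on invented paired data:

```python
# Minimal from-scratch sketch of Pearson's product-moment correlation
# coefficient and a least-squares line fit. The data are invented.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-moment
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

r = sxy / math.sqrt(sxx * syy)     # product-moment correlation coefficient
slope = sxy / sxx                  # least-squares slope
intercept = my - slope * mx        # least-squares intercept
print(f"r = {r:.4f}, fit: y = {slope:.3f}x + {intercept:+.3f}")
```

Since the y values are close to twice the x values, r comes out near 1 and the fitted slope near 2.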

http://mospi.nic.in

Statistics
Experimental and observational studies

A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables or response. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated.

Experiments

The basic steps of a statistical experiment are:

1. Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary. Statisticians recommend that experiments compare (at least) one new treatment with a standard treatment or control, to allow an unbiased estimate of the difference in treatment effects.
2. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that shall guide the performance of the experiment and that specifies the primary analysis of the experimental data.
3. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol.
4. Further examining the data set in secondary analyses, to suggest new hypotheses for future study.
5. Documenting and presenting the results of the study.

Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to the finding that an outcome (in this case, worker productivity) changed due to observation itself: those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed.

Observational study

An example of an observational study is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a case-control study, and then look for the number of cases of lung cancer in each group.

Levels of measurement

There are four main levels of measurement used in statistics: nominal, ordinal, interval, and ratio.[18] Each of these has different degrees of usefulness in statistical research. Ratio measurements have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit). Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values. Nominal measurements have no meaningful rank order among values. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature.

Key terms used in statistics

Statistics, estimators and pivotal quantities: consider n independent identically distributed (iid) random variables with a given probability distribution. Standard statistical inference and estimation theory defines a random sample as the random vector given by the column vector of these iid variables.[19] The population being examined is described by a probability distribution which may have unknown parameters.
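To illustrate that closing definition, here is a minimal sketch that draws an iid random sample from a population distribution whose parameters are treated as unknown, then estimates those parameters from the sample; the parameter values are invented.

```python
# Minimal sketch of the random-sample idea above: draw n iid observations
# from a population distribution with "unknown" parameters, then estimate
# those parameters from the sample. Parameter values are invented.
import random
import statistics

random.seed(7)
true_mu, true_sigma = 50.0, 8.0                    # pretend these are unknown
sample = [random.gauss(true_mu, true_sigma) for _ in range(1_000)]

print("estimate of mu:   ", round(statistics.mean(sample), 2))
print("estimate of sigma:", round(statistics.stdev(sample), 2))
# Both estimates should land close to 50 and 8 for a sample this large.
```

The sample mean and sample standard deviation here play the role of estimators: statistics computed from the sample that stand in for the unknown population parameters.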

http://cuteessay.com
Statistics
Statistics is a fast-growing subject. As a discipline it is as old as human civilization. From its origin, statistics has been used as a tool of analysis. In ancient times, the state used statistics to keep different administrative and economic records. These records related to population, age- and sex-wise distribution of population, birth rate, death rate, stock of wealth and assets, etc. This information (statistics) was of immense help to the state for its administration and policy formulations. So in those days, statistics was considered the science of statecraft. In ancient India we find the use of statistics in Kautilya's 'Arthashastra' and Abul Fazal's 'Ain-e-Akbari'. The word statistics seems to have come from the Latin word 'status', the Italian word 'statista', the French word 'statistique' or the German word 'Statistik'. Each of these words means a political state. In modern times, statistics has become indispensable in almost all spheres of human activity and knowledge. In our day-to-day activities we use statistics in one form or another. It has become part and parcel of our civilization. H. G. Wells rightly observed, "Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write". Sir Ronald A. Fisher was the pioneer in applying statistics to different branches of study, so he is regarded as the father of statistics. Now there is hardly any subject or branch of study which does not use statistics. Scientific, social and economic studies without statistics are inconceivable. Hence elementary ideas on the subject are imperative for students of economics. Generally, statistics means numerical data or quantitative information in an enquiry. In ancient times, statistics was used as political arithmetic. The government collected data on vital socioeconomic and political matters for the smooth administration of the state. From a purely academic point of view we can discuss the meaning of statistics in two senses: the singular sense and the plural sense. In the plural sense, statistics refers to numerical statements of facts related to each other; non-related figures are not statistics. For example, when we say 47% of the people of Orissa live below the poverty line, or that there are 30 districts in our state, we are using statistics in the plural sense. Here the data are related, because we compare the percentage of poor people or the number of districts of Orissa with other states. So these statements are statistics. But statements like "10 students" or "15 books" are not statistics, as they are not related.

In the singular sense, statistics refers to statistical methods or the theory of statistics. These methods are used in the collection, presentation, analysis and interpretation of data. A study of these techniques is the science of statistics. In the above examples, we use certain methods to prepare the numerical statements; these methods are statistics. Thus, in the singular sense, statistics refers to a body of methods for taking decisions from the available information.
Statistics (from the Italian stato, "state") is the science of data. Statistics is concerned with the collection of data and its classification, organization, processing and analysis. The data used in statistics is a sort of periodical information characterizing quantitative and (or) qualitative patterns of processes and phenomena. In other words, statistics deals with samples of data, collected on a periodical basis about a certain event or process, which represent some characteristics of the whole set of studied units (the population). The primary objective of inferential statistics (the kind applied in decision making) is to draw a conclusion or an inference relying on the analysis of the collected data sets. Statistics includes many methods and tools for data analysis which make it possible to forecast the future state of a process with a certain probability. Moreover, statistics closely interacts with other economic sciences, and many other sciences apply statistical methods. Forecasting is an indispensable component of making well-grounded business decisions, which often affect a company's further successes or failures in a considerable way. For example, a good business decision can result in increased sales volume and profit; a wrong decision can cause the opposite consequences and bring a company to bankruptcy. In this paper we will summarize how statistics, with its tools and methods, can be used in the decision-making process. The easiest way to answer the above question is to cite an instance. Let's imagine a company which produces three types of beverage (A, B and C). This is a quite young company which has operated for a little more than three years. Within this period the department of statistics, along with the accounts department, collected and stored information about total production volume, sales volume, invested capital, promotion expenses, age of customers that prefer each type of beverage, gross income of the customer groups and so on. The whole set of data is a population, which can be used for many calculations, estimations and forecasts. The company's analysts may build regression and correlation models, which can describe the dependencies between various financials or variables of the company, its products and its customers. There may be a certain relationship or even dependency (correlation) between two or more variables, like the amount of sugar contained in the beverage and the age of the customer that likes this beverage. The inference from such analysis can be the following: beverage A is preferred by children of 6-9 years old, beverage B is preferred by youth of 13-19 years old, and C by people of older ages. At the same time, A contains more sugar than B and C. That does not mean adults do not drink beverage A or B; however, it describes age characteristics of the customers, and the company may improve the efficiency of its marketing campaign and save money. Similar analysis can be used to distinguish the tastes of people of different income levels: which beverage is preferred by people with gross income of $1,000-2,500 per month, $5,000-7,000 per month, etc.
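The beverage analysis described above amounts to grouping customer records and comparing summary statistics. Here is a minimal sketch on invented records; the customer data are illustrative, not figures from the source.

```python
# Minimal sketch of the beverage analysis described above: group invented
# customer records by preferred beverage and compare mean ages, the kind
# of summary that would feed the marketing decision.
import statistics
from collections import defaultdict

customers = [            # (preferred beverage, customer age) - invented
    ("A", 7), ("A", 8), ("A", 6), ("A", 9),
    ("B", 15), ("B", 17), ("B", 14), ("B", 19),
    ("C", 34), ("C", 41), ("C", 28),
]

ages = defaultdict(list)
for beverage, age in customers:
    ages[beverage].append(age)

for beverage in sorted(ages):
    print(beverage, "mean age:", round(statistics.mean(ages[beverage]), 1))
# A mean age: 7.5 / B mean age: 16.2 / C mean age: 34.3
```

A real analysis would go on to test whether the group differences are larger than chance would explain, which is exactly the inferential step the passage describes.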
