
What is a research problem?

Defining a research problem is the fuel that drives the scientific process, and is the foundation of any research method and experimental design, from true experiment to case study. A research problem is the situation that causes the researcher to feel apprehensive, confused and ill at ease. It is the demarcation of a problem area within a certain context involving the WHO or WHAT, the WHERE, the WHEN and the WHY of the problem situation. The research problem should be stated in such a way that it leads to analytical thinking on the part of the researcher, with the aim of arriving at possible solutions to the stated problem. Research problems can be stated in the form of either questions or statements.

The research problem should always be formulated grammatically correctly and as completely as possible. Bear in mind the wording (expressions) you use and avoid meaningless words: there should be no doubt in the reader's mind about what your intentions are. Demarcating the research field into manageable parts by dividing the main problem into subproblems is of the utmost importance.

Formulating the research problem begins during the first steps of the scientific process. As an example, a literature review and a study of previous experiments and research might throw up some vague areas of interest. Many scientific researchers look at an area where a previous researcher generated some interesting results but never followed up; it could be an interesting area of research which nobody else has fully explored. A scientist may even review a successful experiment, disagree with the results, the tests used, or the methodology, and decide to refine the research process, retesting the hypothesis. This is called the conceptual definition, and is an overall view of the problem. A science report will generally begin with an overview of the previous research and real-world observations; the researcher will then state how this led to defining a research problem.

Importance of the research problem
It is important in a proposal that the problem stand out, so that the reader can easily recognize it. Sometimes, obscure and poorly formulated problems are masked in an extended discussion; in such cases, reviewers and/or committee members will have difficulty recognizing the problem. A well-formulated problem statement shows:

1. clarity and precision (a well-written statement does not make sweeping generalizations and irresponsible statements);
2. identification of what would be studied, while avoiding the use of value-laden words and terms;
3. identification of an overarching question and key factors or variables;
4. identification of key concepts and terms;
5. articulation of the study's boundaries or parameters;
6. some generalizability;
7. conveyance of the study's importance, benefits, and justification (regardless of the type of research, it is important to address the "so what?" question and to demonstrate that the research is not trivial);
8. no use of unnecessary jargon; and
9. conveyance of more than the mere gathering of descriptive data providing a snapshot.

A problem statement should be presented within a context, and that context should be provided and briefly explained, including a discussion of the conceptual or theoretical framework in which it is embedded. Clearly and succinctly identify and explain the problem within the framework of the theory or line of inquiry that undergirds the study. This is of major importance in nearly all proposals and requires careful attention. It is a key element that associations such as AERA and APA look for in proposals. It is essential in all quantitative research and much qualitative research.

The research question
The research question is one of the first methodological steps the investigator has to take when undertaking research, and it must be accurately and clearly defined. Choosing a research question is the central element of both quantitative and qualitative research, and in some cases it may precede construction of the conceptual framework of the study. In all cases, it makes the theoretical assumptions in the framework more explicit; most of all, it indicates what the researcher wants to know most and first.

What is a hypothesis?
"A hypothesis is a logical supposition, a reasonable guess, an educated conjecture. It provides a tentative explanation for a phenomenon under investigation" (Leedy and Ormrod, 2001).

A hypothesis states your expectations concerning the relation between two or more variables in the research problem or your area of interest. Usually, a hypothesis represents an extension of a purpose statement or research question by adding a prediction or explanation component. A research hypothesis is the statement created by researchers when they speculate upon the outcome of a research study or experiment.

Importance
A hypothesis is important because it guides the research. An investigator may refer to the hypothesis to direct his or her thought process toward the solution of the research problem or subproblems. The hypothesis helps an investigator to collect the right kinds of data needed for the investigation. Hypotheses are also important because they help an investigator to locate information needed to resolve the research problem or subproblems (Leedy and Ormrod, 2001).

Theoretical framework
A theoretical framework is a collection of interrelated concepts, like a theory but not necessarily so well worked out. A theoretical framework guides your research, determining what things you will measure and what statistical relationships you will look for. Theoretical frameworks are obviously critical in deductive, theory-testing sorts of studies (see Kinds of Research for more information); in those kinds of studies, the theoretical framework must be very specific and well thought out.

Surprisingly, theoretical frameworks are also important in exploratory studies, where you really don't know much about what is going on and are trying to learn more. There are two reasons why theoretical frameworks are important here. First, no matter how little you think you know about a topic, and however unbiased you think you are, it is impossible for a human being not to have preconceived notions, even if they are of a very general nature. For example, some people fundamentally believe that people are basically lazy and untrustworthy, and that you have to keep your wits about you to avoid being conned. These fundamental beliefs about human nature affect how you look at things when doing personnel research. In this sense, you are always being guided by a theoretical framework, but you don't know it. Not knowing what your real framework is can be a problem: the framework tends to guide what you notice in an organization and what you don't notice. In other words, you don't even notice things that don't fit your framework! We can never completely get around this problem, but we can reduce it considerably by simply making our implicit framework explicit. Once it is explicit, we can deliberately consider other frameworks and try to see the organizational situation through different lenses.

Conceptual framework
A conceptual framework is a tool researchers use to guide their inquiry; it is a set of ideas used to structure the research, a sort of map that may include the research question, the literature review, methods and data analysis. Researchers use a conceptual framework to guide their data collection and analysis. If, for example, the researcher wanted to know whether boys did better than girls in a certain subject, then she might look at literature on the development of both sexes and on the methods of socialization of boys and girls, as this could influence what subjects were of interest. The researcher would then look at existing literature on male and female development and socialization, as this would help to clarify what questions she should ask, e.g. are girls more interested in history when it is concerned with actual people, or do boys prefer the history of battles, etc.? The ways in which boys and girls viewed a subject could influence their progress in that area.

Types of scale
A topic which can create a great deal of confusion in social and educational research is that of the types of scales used in measuring behaviour. It is critical because it relates to the types of statistics you can use to analyse your data. An easy way to have a paper rejected is to have used either an incorrect scale/statistic combination or a low-powered statistic on a high-powered set of data.

The four levels of measurement, from lowest to highest, are: nominal, ordinal, interval and ratio.

Nominal
The lowest measurement level you can use, from a statistical point of view, is a nominal scale. A nominal scale, as the name implies, is simply some placing of data into categories, without any order or structure. A physical example of a nominal scale is the terms we use for colours: the underlying spectrum is ordered, but the names are nominal. In research activities a YES/NO scale is nominal: it has no order and there is no distance between YES and NO.

Nominal scales and statistics
The statistics which can be used with nominal scales are in the non-parametric group. The most likely ones would be:

- mode
- crosstabulation, with chi-square

There are also highly sophisticated modelling techniques available for nominal data.
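To make the scale/statistic pairing concrete, here is a minimal sketch in Python showing the mode of a nominal variable and a crosstabulation tested with chi-square via scipy; all responses and counts are invented for illustration.

```python
# Minimal sketch: the mode and a chi-square crosstabulation on nominal data.
# All responses and counts below are invented for illustration.
from statistics import mode
from scipy.stats import chi2_contingency

# Nominal YES/NO answers from a (hypothetical) survey.
answers = ["YES", "NO", "YES", "YES", "NO", "YES", "NO", "YES"]
print("Mode:", mode(answers))  # the only meaningful 'average' for nominal data

# 2x2 crosstabulation: rows = two respondent groups, columns = YES/NO counts.
table = [[30, 10],
         [18, 22]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```

Note that a significant chi-square here only tells you that the two groups' answer frequencies differ; it says nothing about order or distance between the categories, because nominal data have neither.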

Ordinal
An ordinal scale is next up the list in terms of power of measurement. The simplest ordinal scale is a ranking. When a market researcher asks you to rank 5 types of beer from most flavourful to least flavourful, he/she is asking you to create an ordinal scale of preference. There is no objective distance between any two points on your subjective scale: for you the top beer may be far superior to the second preferred beer but, to another respondent with the same top and second beer, the distance may be subjectively small. An ordinal scale only lets you interpret gross order, not the relative positional distances.

Ordinal scales and statistics
Ordinal data would use non-parametric statistics. These would include:

- median and rank order
- non-parametric analysis of variance

Modelling techniques can also be used with ordinal data.

Interval
The standard survey rating scale is an interval scale. When you are asked to rate your satisfaction with a piece of software on a 7-point scale, from Dissatisfied to Satisfied, you are using an interval scale. It is an interval scale because it is assumed to have equidistant points between each of the scale elements. This means that we can interpret differences in the distance along the scale. We contrast this to an ordinal scale, where we can only talk about differences in order, not differences in the degree of order. Interval scales also include scales which are defined by metrics such as logarithms; in these cases the distances are not equal, but they are strictly definable based on the metric used.

Interval scales and statistics
Interval scale data would use parametric statistical techniques:

- mean and standard deviation
- correlation (r)
- regression
- analysis of variance
- factor analysis

plus a whole range of advanced multivariate and modelling techniques.
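As a rough illustration of these pairings, the following sketch (all ranks and ratings invented) computes rank-based statistics for ordinal data and parametric statistics for interval data using numpy and scipy.

```python
# Minimal sketch: rank-based statistics for ordinal data, parametric
# statistics for interval data. All values are invented for illustration.
import numpy as np
from scipy import stats

# Ordinal: preference ranks for 5 beers from two tasters.
ranks_a = [1, 2, 3, 4, 5]
ranks_b = [2, 1, 3, 5, 4]
print("Median rank (taster A):", np.median(ranks_a))
rho, p = stats.spearmanr(ranks_a, ranks_b)    # rank-order correlation
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Interval: 7-point satisfaction ratings, assumed equidistant.
ratings = np.array([4, 5, 6, 5, 7, 3, 6, 5])
hours_of_use = np.array([2, 4, 6, 5, 8, 1, 7, 4])
print("Mean:", ratings.mean(), "SD:", ratings.std(ddof=1))
r, p = stats.pearsonr(ratings, hours_of_use)  # Pearson correlation r
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```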

Remember that you can use non-parametric techniques with interval and ratio data, but non-parametric techniques are less powerful than the parametric ones.

Ratio
A ratio scale is the top level of measurement and is not often available in social research. The factor which clearly defines a ratio scale is that it has a true zero point. The simplest example of a ratio scale is the measurement of length (disregarding any philosophical points about defining how we can identify zero length). The best way to contrast interval and ratio scales is to look at temperature. The Centigrade scale has a zero point, but it is an arbitrary one; the Fahrenheit scale has its equivalent point at 32°. (Physicists would probably argue that absolute zero is the zero point for temperature, but this is a theoretical concept.) So, even though temperature looks as if it would be a ratio scale, it is an interval scale: we cannot talk about 'no temperature', and this would be needed if it were a ratio scale.

Ratio scales and statistics
The statistics are the same as for interval data.

Reliability
Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. For example, if a test is designed to measure a trait (such as introversion), then each time the test is administered to a subject, the results should be approximately the same. Unfortunately, it is impossible to calculate reliability exactly, but there are several different ways to estimate it.

Internal consistency
Internal consistency is typically a measure based on the correlations between different items on the same test (or the same subscale on a larger test). It measures whether several items that purport to measure the same general construct produce similar scores. Internal consistency is usually measured with Cronbach's alpha, a statistic calculated from the pairwise correlations between items. Internal consistency ranges between zero and one. A commonly accepted rule of thumb for describing internal consistency is as follows:[1]

alpha >= 0.9: excellent
0.8 <= alpha < 0.9: good
0.7 <= alpha < 0.8: acceptable
0.6 <= alpha < 0.7: questionable
0.5 <= alpha < 0.6: poor
alpha < 0.5: unacceptable
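Cronbach's alpha is straightforward to compute from a respondents-by-items score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch, with an invented 5-respondent, 4-item data set:

```python
# Minimal sketch: Cronbach's alpha from a respondents x items score matrix.
# The 5-respondent, 4-item data set is invented for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = test items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([[3, 4, 3, 4],
                   [5, 5, 4, 5],
                   [2, 3, 2, 2],
                   [4, 4, 5, 4],
                   [3, 2, 3, 3]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```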

Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality. It is a measure of the precision between the observers or of the measuring instruments used in a study. This type of reliability often helps researchers interpret data and predict the value of scores and the limits of the relationship among variables. Internal consistency reliability defines the consistency of the results delivered in a test, ensuring that the various items measuring the different constructs deliver consistent scores.

Validity
Validity is the extent to which a test measures what it claims to measure. It is vital for a test to be valid in order for the results to be accurately applied and interpreted. Validity isn't determined by a single statistic, but by a body of research that demonstrates the relationship between the test and the behavior it is intended to measure. Three types of measurement validity are discussed later in this section: content, criterion and construct validity. Validity has no single agreed definition, but generally refers to the extent to which a concept, conclusion or measurement is well-founded and corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (i.e. a test in education) is considered to be the degree to which the tool measures what it claims to measure. Validity is the strength of our conclusions, inferences or propositions. More formally, Cook and Campbell (1979) define it as the "best available approximation to the truth or falsity of a given inference, proposition or conclusion." In short: were we right? Let's look at a simple example. Say we are studying the effect of strict attendance policies on class participation. In our case, we saw that class participation did increase after the policy was established. Each type of validity would highlight a different aspect of the relationship between our treatment (strict attendance policy) and our observed outcome (increased class participation).

Types of validity
There are four types of validity commonly examined in social research:

1. Conclusion validity asks: is there a relationship between the program and the observed outcome? Or, in our example, is there a connection between the attendance policy and the increased participation we saw?
2. Internal validity asks: if there is a relationship between the program and the outcome we saw, is it a causal relationship? For example, did the attendance policy cause class participation to increase?

3. Construct validity is the hardest to understand, in my opinion. It asks: is there a relationship between how I operationalized my concepts in this study and the actual causal relationship I'm trying to study? Or, in our example, did our treatment (attendance policy) reflect the construct of attendance, and did our measured outcome (increased class participation) reflect the construct of participation? Overall, we are trying to generalize our conceptualized treatment and outcomes to broader constructs of the same concepts.
4. External validity refers to our ability to generalize the results of our study to other settings. In our example, could we generalize our results to other classrooms?

Content validity
Content validity (also known as logical validity) refers to the extent to which a measure represents all facets of a given social construct. For example, a depression scale may lack content validity if it only assesses the affective dimension of depression but fails to take into account the behavioral dimension. An element of subjectivity exists in relation to determining content validity, which requires a degree of agreement about what a particular personality trait such as extraversion represents; disagreement about a personality trait will prevent a high content validity from being achieved.[1]


Content validity is a non-statistical type of validity that involves "the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured" (Anastasi & Urbina, 1997, p. 114). For example, does an IQ questionnaire have items covering all areas of intelligence discussed in the scientific literature? Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits; a test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves subject matter experts (SMEs) evaluating test items against the test specifications. A test has content validity built into it by careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification, which is drawn up through a thorough examination of the subject domain. Foxcraft et al. (2004, p. 49) note that by using a panel of experts to review the test specifications and the selection of items, the content validity of a test can be improved: the experts will be able to review the items and comment on whether the items cover a representative sample of the behaviour domain.

Criterion validity
Criterion validity evidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. In other words, it compares the test with other measures or outcomes (the criteria) already held to be valid. For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion). If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data is collected first in order to predict criterion data collected at a later point in time, then this is referred to as predictive validity evidence.
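In practice, criterion validity evidence often boils down to a single validity coefficient: the correlation between test scores and criterion scores. A minimal sketch, with invented selection-test and job-performance data:

```python
# Minimal sketch: criterion validity as a test-criterion correlation.
# Selection-test scores and job-performance ratings are invented.
from scipy.stats import pearsonr

selection_test = [62, 70, 75, 58, 80, 66, 72, 77]            # predictor
job_performance = [3.1, 3.8, 4.0, 2.9, 4.5, 3.3, 3.9, 4.2]   # criterion

r, p = pearsonr(selection_test, job_performance)
print(f"Validity coefficient r = {r:.2f} (p = {p:.3f})")
# Criterion measured at the same time -> concurrent validity evidence;
# criterion measured later in time    -> predictive validity evidence.
```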

Construct validity
In science (e.g. the social sciences and psychometrics), construct validity refers to whether a scale measures or correlates with the theorized psychological construct (e.g. "fluid intelligence") that it purports to measure. In other words, it is the extent to which what was to be measured was actually measured. It is related to the theoretical ideas behind the trait under consideration, i.e. the concepts that organize how aspects of personality, intelligence, etc. are viewed.[1] The scale seeks to operationalize the concept, typically measuring several observable phenomena that supposedly reflect the underlying psychological concept; construct validity is a means of assessing how well this has been accomplished. In lay terms, construct validity answers the question: "Are we actually measuring (are these means a valid way of measuring) what (the construct) we think we are measuring?" A construct is not restricted to one set of observable indicators or attributes; it is common to a number of sets of indicators. Thus, construct validity can be evaluated by statistical methods that show whether or not a common factor can be shown to exist underlying several measurements using different observable indicators. This view of a construct rejects the operationist view that a construct is neither more nor less than the operations used to measure it.

Evaluation of construct validity requires that the correlations of the measure be examined with regard to variables that are known to be related to the construct (purportedly measured by the instrument being evaluated, or for which there are theoretical grounds for expecting a relationship). This is consistent with the multitrait-multimethod matrix for examining construct validity described in Campbell and Fiske's landmark paper (1959).[2] Correlations that fit the expected pattern contribute evidence of construct validity. Construct validity is a judgment based on the accumulation of correlations from numerous studies using the instrument being evaluated. There are variants of construct validity:

content validity, convergent validity, discriminant validity, and nomological validity.
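Convergent and discriminant validity, in particular, lend themselves to a simple correlational check: a new scale should correlate strongly with measures of the same construct and weakly with measures of unrelated constructs. A minimal sketch in the spirit of the multitrait-multimethod logic, on simulated (invented) data:

```python
# Minimal sketch: convergent vs. discriminant evidence via correlations,
# in the spirit of the multitrait-multimethod logic. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
trait = rng.normal(size=100)                          # latent construct
new_scale = trait + rng.normal(scale=0.5, size=100)   # our instrument
related = trait + rng.normal(scale=0.6, size=100)     # same construct
unrelated = rng.normal(size=100)                      # different construct

print(f"Convergent r   = {pearsonr(new_scale, related)[0]:.2f}")    # expect high
print(f"Discriminant r = {pearsonr(new_scale, unrelated)[0]:.2f}")  # expect near 0
```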

Population definition
Successful statistical practice is based on focused problem definition. In sampling, this includes defining the population from which our sample is drawn. A population can be defined as including all people or items with the characteristic one wishes to understand. Because there is very rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample (or subset) of that population. Sometimes what defines a population is obvious. For example, a manufacturer needs to decide whether a batch of material from production is of high enough quality to be released to the customer, or should be sentenced to scrap or rework due to poor quality; in this case, the batch is the population. Although the population of interest often consists of physical objects, sometimes we need to sample over time, space, or some combination of these dimensions. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study of endangered penguins might aim to understand their usage of various hunting grounds over time. For the time dimension, the focus may be on periods or discrete occasions. In other cases, our 'population' may be even less tangible. For example, Joseph Jagger studied the behavior of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behavior of the wheel

(i.e. the probability distribution of its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of some physical characteristic such as the electrical conductivity of copper. This situation often arises when we seek knowledge about the cause system of which the observed population is an outcome. In such cases, sampling theory may treat the observed population as a sample from a larger 'super population'. For example, a researcher might study the success rate of a new 'quit smoking' program on a test group of 100 patients, in order to predict the effects of the program if it were made available nationwide. Here the super population is "everybody in the country, given access to this treatment" - a group which does not yet exist, since the program isn't yet available to all. Note also that the population from which the sample is drawn may not be the same as the population about which we actually want information. Often there is large but not complete overlap between these two groups due to frame issues etc. (see below). Sometimes they may be entirely separate - for instance, we might study rats in order to get a better understanding of human health, or we might study records from people born in 2008 in order to make predictions about people born in 2009. Time spent in making the sampled population and population of concern precise is often well spent, because it raises many issues, ambiguities and questions that would otherwise have been overlooked at this stage.

Sampling Frame:
A complete list of all the units of the population is called the sampling frame. A unit of the population is a relative term: if all the workers in a factory make up a population, a single worker is a unit of the population; if all the factories in a country are being studied for some purpose, a single factory is a unit of the population of factories. The sampling frame contains all the units of the population, and it must be clearly defined which units are to be included in the frame. The frame provides a base for the selection of the sample. A sampling frame is a list that includes every member of the population from which a sample is to be taken; without some form of sampling frame, a random sample of a population, other than an extremely small population, is impossible. When a list of the population of interest is not available, an alternative method for capturing the population must be found. Most surveys carried out by governmental statistical agencies rely on a sampling frame that is composed of maps that partition the entire country into enumeration areas. In that case, a multi-stage sample design is required: enumeration areas are first randomly sampled, then individual housing units are sampled from within the enumeration areas, and finally individuals are sampled from within the housing units.
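A minimal sketch of such a multi-stage design, using Python's random module and an invented frame of enumeration areas and housing units:

```python
# Minimal sketch of the multi-stage design described above: sample
# enumeration areas, then housing units, then one individual per unit.
# The frame below is invented for illustration.
import random

random.seed(42)

# Stage 0: the frame partitions the country into enumeration areas (EAs),
# each containing a list of housing units.
frame = {f"EA-{i}": [f"EA-{i}/house-{h}" for h in range(20)]
         for i in range(50)}

# Stage 1: randomly sample enumeration areas.
sampled_eas = random.sample(list(frame), k=5)

# Stage 2: sample housing units within each selected EA.
sampled_houses = [house for ea in sampled_eas
                  for house in random.sample(frame[ea], k=4)]

# Stage 3: sample one individual (by roster position) per housing unit.
sample = [(house, random.randrange(1, 5)) for house in sampled_houses]
print(f"{len(sample)} individuals selected, e.g. {sample[0]}")
```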

Even though the set of maps of enumeration areas is not a list of individuals in the population, it is still considered a sampling frame. In that case, however, it is a sampling frame of individuals who reside in housing units, not of the total population; any individual who does not live in a housing unit, for example a homeless person, is not covered by the sampling frame.

Sample size determination
Sample size determination is the act of choosing the number of observations to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is determined based on the expense of data collection and the need to have sufficient statistical power. In complicated studies there may be several different sample sizes involved: for example, in survey sampling involving stratified sampling there would be a different sample size for each stratum. In a census, data are collected on the entire population, hence the sample size is equal to the population size. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.
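One common textbook rule (offered here as an illustration, not the only method) chooses n for estimating a population proportion from the desired confidence level and margin of error: n = z^2 * p(1-p) / e^2, using p = 0.5 as the worst case. A minimal sketch:

```python
# Minimal sketch: one common textbook rule for choosing n when estimating
# a population proportion with a given margin of error. This is only one
# of several approaches to sample size determination.
import math

def sample_size_for_proportion(p: float = 0.5, margin: float = 0.05,
                               z: float = 1.96) -> int:
    """n = z^2 * p(1-p) / e^2, rounded up; p = 0.5 is the worst case."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# 95% confidence (z = 1.96), margin of error of 5 percentage points:
print(sample_size_for_proportion())  # -> 385
```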
