
1. Differentiate between nominal, ordinal, interval and ratio scales with an example of each.

Measurement may be classified into four different levels, based on the characteristics of order, distance and origin.

1. Nominal measurement

This level of measurement consists in assigning numerals or symbols to different categories of a variable. The example of male and female applicants to an MBA program mentioned earlier is an example of nominal measurement. The numerals or symbols are just labels and have no quantitative value; the number of cases under each category is simply counted. Nominal measurement is therefore the simplest level of measurement. It has none of the characteristics of order, distance or arithmetic origin.

2. Ordinal measurement

In this level of measurement, persons or objects are assigned numerals which indicate ranks with respect to one or more properties, in either ascending or descending order.

Example: Individuals may be ranked according to their socio-economic class, which is measured by a combination of income, education, occupation and wealth. The individual with the highest score might be assigned rank 1, the next highest rank 2, and so on, or vice versa.

The numbers in this level of measurement indicate only rank order, not equal distances or absolute quantities: the distance between ranks 1 and 2 is not necessarily equal to the distance between ranks 2 and 3. Ordinal scales may be constructed using rank order, rating and paired comparisons. Variables that lend themselves to ordinal measurement include preferences, ratings of organizations and economic status. Statistical techniques commonly used to analyze ordinal scale data are the median and rank-order correlation coefficients.

3. Interval measurement

This level of measurement is more powerful than the nominal and ordinal levels, since it has one additional characteristic: equality of distance. However, it does not have an origin or a true zero, which implies that it is not possible to multiply or divide the numbers on an interval scale.

Example: The Centigrade or Fahrenheit temperature scale is an example of the interval level of measurement. A temperature of 50 degrees is exactly 10 degrees hotter than 40 degrees and 10 degrees cooler than 60 degrees; however, because zero on these scales is arbitrary, 50 degrees cannot be said to be twice as hot as 25 degrees.

Since interval scales are more powerful than nominal or ordinal scales, they also lend themselves to more powerful statistical techniques, such as the standard deviation, product-moment correlation, and t-tests and F-tests of significance.

4. Ratio measurement

This is the highest level of measurement and is appropriate when measuring characteristics which have an absolute zero point. This level of measurement has all three characteristics: order, distance and origin.

Examples: Height, weight, distance and area.

Since there is a natural zero, it is possible to multiply and divide the numbers on a ratio scale. Apart from all the statistical techniques applicable to nominal, ordinal and interval scales, techniques like the geometric mean and the coefficient of variation may also be used. The main limitation of ratio measurement is that it cannot be used for characteristics such as leadership quality, happiness, satisfaction and other properties which do not have natural zero points. The different levels of measurement and their characteristics may be summed up in the table below.

Level of measurement    Characteristics
Nominal                 No order, distance or origin
Ordinal                 Order, but no distance or origin
Interval                Both order and distance, but no origin
Ratio                   Order, distance and origin
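To make the link between scale level and permissible statistics concrete, the following Python sketch (with made-up sample data) computes the statistics named above for each level: frequency counts for nominal data, the median for ordinal data, the mean and standard deviation for interval data, and the geometric mean and coefficient of variation for ratio data.

```python
import statistics
from collections import Counter

# Nominal: categories are labels only, so they can merely be counted.
applicants = ["male", "female", "female", "male", "female"]
print(Counter(applicants))

# Ordinal: ranks support order-based statistics such as the median.
socio_economic_ranks = [1, 3, 2, 5, 4, 2]  # 1 = highest class
print(statistics.median(socio_economic_ranks))

# Interval: equal distances permit means and standard deviations,
# but ratios are meaningless because there is no true zero.
temperatures_c = [40.0, 50.0, 60.0, 55.0]
print(statistics.mean(temperatures_c), statistics.stdev(temperatures_c))

# Ratio: a true zero additionally permits the geometric mean and the
# coefficient of variation (standard deviation divided by the mean).
weights_kg = [55.0, 60.0, 72.5, 80.0]
print(statistics.geometric_mean(weights_kg))
print(statistics.stdev(weights_kg) / statistics.mean(weights_kg))
```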

2. What are the types of Hypothesis? Explain the procedure for testing Hypothesis.

Types of Hypothesis
There are many kinds of hypotheses the researcher may have to work with. One type of hypothesis asserts that something is the case in a given instance; that a particular object, person or situation has particular characteristics. Another type of hypothesis deals with the frequency of occurrence or of association among variables; this type of hypothesis may state that X is associated with Y (e.g., that urbanism tends to be accompanied by mental disease), or that something is greater or lesser than something else in a specific setting. Yet another type of hypothesis asserts that a particular characteristic is one of the factors which determine another characteristic, i.e., that X produces Y. Hypotheses of this type are called causal hypotheses.

Null Hypothesis and Alternative Hypothesis

In the context of statistical analysis, we often talk of the null hypothesis and the alternative hypothesis. If we are to compare method A with method B regarding their superiority, and if we proceed on the assumption that both methods are equally good, then this assumption is termed the null hypothesis. As against this, if we think that method A is superior, this is the alternative hypothesis. Symbolically, the null hypothesis is denoted H0 and the alternative hypothesis Ha. Suppose we want to test the hypothesis that the population mean μ is equal to the hypothesized mean μH0 = 100. Then we would say that the null hypothesis is that the population mean is equal to the hypothesized mean 100, and symbolically we can express this as H0: μ = μH0 = 100. If our sample results do not support this null hypothesis, we should conclude that something else is true. What we conclude by rejecting the null hypothesis is known as the alternative hypothesis. If we accept H0, then we are rejecting Ha, and if we reject H0, then we are accepting Ha. For H0: μ = μH0 = 100, we may consider three possible alternative hypotheses as follows:

Alternative hypothesis    To be read as follows
Ha: μ ≠ μH0               The population mean is not equal to 100, i.e., it may be more or less than 100
Ha: μ > μH0               The population mean is greater than 100
Ha: μ < μH0               The population mean is less than 100

The null hypothesis and the alternative hypothesis are chosen before the sample is drawn (the researcher must avoid the error of deriving hypotheses from the data he collects and then testing the hypotheses on the same data). In the choice of the null hypothesis, the following considerations are usually kept in view:

1. The alternative hypothesis is usually the one which one wishes to prove, and the null hypothesis the one which one wishes to disprove. Thus a null hypothesis represents the hypothesis we are trying to reject, while the alternative hypothesis represents all other possibilities.
2. If the rejection of a certain hypothesis when it is actually true involves great risk, it is taken as the null hypothesis, because then the probability of rejecting it when it is true is α (the level of significance), which is chosen to be very small.
3. The null hypothesis should always be a specific hypothesis, i.e., it should not state that a value is merely 'about' or 'approximately' equal to a certain figure.

Generally, in hypothesis testing we proceed on the basis of the null hypothesis, keeping the alternative hypothesis in view. Why so? The answer is that on the assumption that the null hypothesis is true, one can assign probabilities to different possible sample results, but this cannot be done if we proceed with the alternative hypothesis. Hence the use of the null hypothesis (at times also known as the statistical hypothesis) is quite frequent.

Procedure for Testing Hypothesis


To test a hypothesis means to tell (on the basis of the data the researcher has collected) whether or not the hypothesis seems to be valid. In hypothesis testing the main question is: do we accept the null hypothesis or not? The procedure for hypothesis testing refers to all those steps that we undertake for making a choice between the two actions, i.e., rejection and acceptance of a null hypothesis. The various steps involved in hypothesis testing are stated below.

1 Making a Formal Statement

This step consists in making a formal statement of the null hypothesis (H0) and of the alternative hypothesis (Ha). The hypotheses should be stated clearly, considering the nature of the research problem. For instance, Mr. Mohan of the Civil Engineering Department wants to test whether the load-bearing capacity of an old bridge is more than 10 tons. In that case he can state his hypotheses as under:

Null hypothesis H0: μ = 10 tons
Alternative hypothesis Ha: μ > 10 tons

Take another example. The average score in an aptitude test administered at the national level is 80. To evaluate a state's education system, the average score of 100 of the state's students selected at random was found to be 75. The state wants to know if there is a significant difference between the local scores and the national scores. In such a situation the hypotheses may be stated as under:

Null hypothesis H0: μ = 80
Alternative hypothesis Ha: μ ≠ 80

The formulation of hypotheses is an important step which must be accomplished with due care, in accordance with the object and nature of the problem under consideration. It also indicates whether we should use a one-tailed test or a two-tailed test. If Ha is of the type 'greater than' (or 'less than'), we use a one-tailed test, but when Ha is of the type 'whether greater or smaller', we use a two-tailed test.

2 Selecting a Significance Level

The hypothesis is tested at a pre-determined level of significance, and as such the level should be specified in advance. Generally, in practice, either the 5% level or the 1% level is adopted for this purpose. The factors that affect the level of significance are: the magnitude of the difference between the sample means; the size of the samples; the variability of measurements within samples; and whether the hypothesis is directional or non-directional (a directional hypothesis is one which predicts the direction of the difference between, say, means). In brief, the level of significance must be adequate in the context of the purpose and nature of the enquiry.

3 Deciding the Distribution to Use

After deciding the level of significance, the next step in hypothesis testing is to determine the appropriate sampling distribution. The choice generally lies between the normal distribution and the t-distribution. The rules for selecting the correct distribution are similar to those stated earlier in the context of estimation.

4 Selecting a Random Sample and Computing an Appropriate Value

The next step is to select a random sample (or samples) and compute an appropriate value from the sample data concerning the test statistic, utilizing the relevant distribution. In other words, draw a sample to furnish empirical data.

5 Calculation of the Probability

One has then to calculate the probability that the sample result would diverge as widely as it has from expectations, if the null hypothesis were in fact true.
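As a sketch of how these steps fit together, the following Python fragment works through the aptitude-test example above (national mean 80, sample mean 75, n = 100) as a two-tailed test; the population standard deviation of 20 is a hypothetical value assumed purely for illustration.

```python
from statistics import NormalDist

# Step 1: formal statement -- H0: mu = 80, Ha: mu != 80 (two-tailed).
mu_0 = 80.0    # hypothesized population mean
x_bar = 75.0   # observed sample mean
n = 100        # sample size
sigma = 20.0   # population standard deviation (assumed for illustration)

# Step 2: significance level, chosen before looking at the data.
alpha = 0.05

# Step 3: with sigma known and n = 100, the normal distribution applies.
# Step 4: the sample has already been drawn (x_bar above).
standard_error = sigma / n ** 0.5
z = (x_bar - mu_0) / standard_error

# Step 5: probability of a sample result diverging at least this widely
# from expectation if H0 were in fact true (two-tailed p-value).
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the local mean differs significantly from 80.")
else:
    print("Accept H0: no significant difference detected.")
```

Here z = -2.5 and the two-tailed p-value is about 0.012, which is below 0.05, so under these assumed figures the null hypothesis would be rejected.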

3. What are the advantages and disadvantages of Case Study Method? How is Case Study method useful to Business Research?

Advantages of Case Study Method
The case study is of particular value when a complex set of variables may be at work in generating observed results and intensive study is needed to unravel the complexities. For example, an in-depth study of a firm's top salespeople, and a comparison with its worst salespeople, might reveal characteristics common to stellar performers. Here again, the exploratory investigation is best served by an active curiosity and a willingness to deviate from the initial plan when findings suggest that new courses of inquiry might prove more productive. It is easy to see how the exploratory research objectives of generating insights and hypotheses would be well served by the use of this technique.

Disadvantages of Case Study Method


Blumer points out that, taken independently, case documents hardly fulfil the criteria of reliability, adequacy and representativeness; but to exclude them from any scientific study of human life would be a blunder, inasmuch as these documents are necessary and significant both for theory building and for practice.

Case Study as a Method of Business Research


In-depth analysis of selected cases is of particular value to business research when a complex set of variables may be at work in generating observed results and intensive study is needed to unravel the complexities. For instance, an in-depth study of a firm's top salespeople, and a comparison with its worst salespeople, might reveal characteristics common to stellar performers. The exploratory investigator is best served by an active curiosity and a willingness to deviate from the initial plan when findings suggest that new courses of enquiry might prove more productive.

4. What are the Primary and Secondary sources of Data?

Primary Sources of Data
Primary sources are original sources from which the researcher directly collects data that have not been previously collected, e.g., collection of data directly by the researcher on brand awareness, brand preference, brand loyalty and other aspects of consumer behaviour from a sample of consumers by interviewing them. Primary data are first-hand information collected through various methods such as observation, interviewing, mailing, etc.

1 Advantages of Primary Data

1. They come from the original source of data.
2. It is possible to capture changes occurring in the course of time.
3. They are flexible, to the advantage of the researcher.
4. Extensive research studies are based on primary data.

2 Disadvantages of Primary Data

1. Primary data are expensive to obtain.
2. Their collection is time-consuming.
3. It requires extensive and skilled research personnel.
4. It is difficult to administer.

Methods of Collecting Primary Data

Primary data are directly collected by the researcher from their original sources. In this case, the researcher can collect the required data precisely according to his research needs; he can collect them when he wants them and in the form he needs them. But the collection of primary data is costly and time-consuming. Yet, for several types of social science research, the required data are not available from secondary sources and have to be gathered directly from primary sources. In such cases, where the available data are inappropriate, inadequate or obsolete, primary data have to be gathered. Such studies include socio-economic surveys, social-anthropological studies of rural and tribal communities, sociological studies of social problems and social institutions, marketing research, leadership studies, opinion polls, attitudinal surveys, readership, radio listening and T.V. viewing surveys, knowledge-awareness-practice (KAP) studies, farm management studies, business management studies, etc. There are various methods of data collection. A method is different from a tool: while a method refers to the way or mode of gathering data, a tool is an instrument used by the method. For example, a schedule is used for interviewing. The important methods are (a) observation, (b) interviewing, (c) mail survey, (d) experimentation, (e) simulation and (f) projective techniques. Each of these methods is discussed in detail in subsequent sections in the later chapters.

Secondary Sources of Data


These are sources containing data which have been collected and compiled for another purpose. Secondary sources consist of readily available compendia and already compiled statistical statements and reports whose data may be used by researchers for their studies, e.g., census reports, annual reports and financial statements of companies, statistical statements, reports of government departments, the annual reports on currency and finance published by the Reserve Bank of India, statistical statements relating to co-operatives and regional rural banks published by NABARD, reports of the National Sample Survey Organisation, reports of trade associations, publications of international organizations such as the UNO, IMF, World Bank, ILO and WHO, trade and financial journals, newspapers, etc. Secondary sources consist not only of published records and reports but also of unpublished records. The latter category includes various records and registers maintained by firms and organizations, e.g., accounting and financial records, personnel records, registers of members, minutes of meetings, inventory records, etc.

1 Features of Secondary Sources

Though secondary sources are diverse and consist of all sorts of materials, they have certain common characteristics. First, they are ready-made and readily available, and do not require the trouble of constructing tools and administering them. Second, they consist of data over whose collection and classification the researcher has no original control: both the form and the content of secondary sources are shaped by others. Clearly, this is a feature which can limit the research value of secondary sources. Finally, secondary sources are not limited in time and space; that is, the researcher using them need not have been present when and where they were gathered.

2 Use of Secondary Data

Secondary data may be used in three ways by a researcher. First, some specific information from secondary sources may be used for reference purposes. For example, general statistical information on the number of co-operative credit societies in the country, their coverage of villages, their capital structure, volume of business, etc., may be taken from published reports and quoted as background information in a study on the evaluation of the performance of co-operative credit societies in a selected district/state. Second, secondary data may be used as benchmarks against which the findings of a research study may be tested: e.g., the findings of a local or regional survey may be compared with the national averages; the performance indicators of a particular bank may be tested against the corresponding indicators of the banking industry as a whole; and so on. Finally, secondary data may be used as the sole source of information for a research project. Studies of securities market behaviour, financial analyses of companies, trends in credit allocation in commercial banks, sociological studies of crime, historical studies, and the like, depend primarily on secondary data. Year books, statistical reports of government departments, reports of public organizations such as the Bureau of Public Enterprises, census reports, etc., serve as major data sources for such research studies.

3 Advantages of Secondary Data


Secondary sources have some advantages:

1. Secondary data, if available, can be secured quickly and cheaply. Once the source documents and reports are located, collection of the data is merely a matter of desk work. Even the tediousness of copying the data from the source can now be avoided, thanks to photocopying facilities.
2. A wider geographical area and a longer reference period may be covered without much cost. Thus, the use of secondary data extends the researcher's reach in space and time.
3. The use of secondary data broadens the data base from which scientific generalizations can be made.
4. Secondary data supply information on the environmental and cultural settings required for a study.
5. The use of secondary data enables a researcher to verify findings based on primary data. It readily meets the need for additional empirical support, and the researcher need not wait until additional primary data can be collected.

4 Disadvantages of Secondary Data


The use of secondary data has its own limitations.

1. The most important limitation is that the available data may not meet our specific needs. The definitions adopted by those who collected the data may be different; units of measure may not match; and time periods may also be different.
2. The available data may not be as accurate as desired. To assess their accuracy we need to know how the data were collected.
3. Secondary data are often not up-to-date, and become obsolete by the time they appear in print, because of the time lag in producing them. For example, population census data are published two or three years after compilation, and no new figures will be available for another ten years.
4. Finally, information about the whereabouts of sources may not be available to all social scientists. Even if the location of the source is known, accessibility depends primarily on proximity. For example, most unpublished official records and compilations are located in the capital city, and they are not within easy reach of researchers based in far-off places.

5. Differentiate between Schedules and Questionnaire. What are the alternative modes of sending Questionnaires?

Differentiate between Schedules and Questionnaire

1. Questionnaires are mailed to the respondents, whereas schedules are carried by the investigator himself.
2. A questionnaire can be filled in by the respondent only if he is able to understand the language in which it is written, so the respondent is assumed to be literate. This problem is overcome in the case of a schedule, since the investigator himself carries the schedule and records the respondent's answers.
3. A questionnaire is filled in by the respondent himself, whereas a schedule is filled in by the investigator.

Alternative Modes of Sending Questionnaires

There are some alternative methods of distributing questionnaires to the respondents: (1) personal delivery, (2) attaching the questionnaire to a product, (3) advertising the questionnaire in a newspaper or magazine, and (4) news-stand inserts.

Personal Delivery

The researcher or his assistant may deliver the questionnaires to the potential respondents with a request to complete them at their convenience. After a day or two he can collect the completed questionnaires from them. Often referred to as the self-administered questionnaire method, it combines the advantages of the personal interview and the mail survey. Alternatively, the questionnaires may be delivered in person and the completed questionnaires returned by mail by the respondents.

Attaching Questionnaire to a Product

A firm test-marketing a product may attach a questionnaire to the product and request the buyer to complete it and mail it back to the firm. The respondent is usually rewarded with a gift or a discount coupon.

Advertising the Questionnaire

The questionnaire, with instructions for completion, may be advertised on a page of a magazine or in a section of a newspaper. The potential respondent completes it, tears it out and mails it to the advertiser. For example, the Committee on Banks' Customer Services used this method for collecting information from the customers of commercial banks in India. This method may be useful for large-scale surveys on topics of common interest.

News-Stand Inserts

This method involves inserting the covering letter, questionnaire and self-addressed reply-paid envelope into a random sample of news-stand copies.

Improving the Response Rate in a Mail Survey

The response rate in mail surveys is generally very low, more so in developing countries like India. Certain techniques have to be adopted to increase the response rate:

1. Quality printing: The questionnaire may be neatly printed on quality, light-colored paper, so as to attract the attention of the respondent.
2. Covering letter: The covering letter should be couched in a pleasant style so as to attract and hold the interest of the respondent. It must anticipate objections and answer them briefly. It is desirable to address the respondent by name.
3. Advance information: Advance information can be provided to potential respondents by a telephone call, by an advance notice in the newsletter of the concerned organization, or by a letter. Such preliminary contact with potential respondents is more successful than follow-up efforts.
4. Incentives: Money, stamps for collection and other incentives are also used to induce respondents to complete and return the mail questionnaire.
5. Follow-up contacts: In the case of respondents belonging to an organization, they may be approached through someone in that organization known to the researcher.
6. Larger sample size: A larger sample may be drawn than the estimated sample size. For example, if the required sample size is 1,000, a sample of 1,500 may be drawn. This may help the researcher secure an effective sample size closer to the required size.

6. Explain the various steps in processing of Data.


Data in the real world often come in such large quantities and in such a variety of formats that no meaningful interpretation can be achieved straightaway. Social science research, in particular, draws conclusions using both primary and secondary data. To arrive at a meaningful interpretation of the research hypothesis, the researcher has to prepare his data for this purpose. This preparation involves the identification of data structures, the coding of data and the grouping of data for preliminary research interpretation. This preparation of data for research analysis is termed processing of data. The further selection of tools for analysis will, to a large extent, depend on the results of this data processing.

Data processing is an intermediary stage of work between data collection and data interpretation. The data gathered in the form of questionnaires, interview schedules, field notes or data sheets mostly comprise a large volume of research variables. The research variables recognized are the result of the preliminary research plan, which also sets out the data processing methods beforehand. Processing of data requires advance planning, and this planning may cover such aspects as the identification of variables, the hypothetical relationships among the variables and the tentative research hypothesis. The various steps in the processing of data are:

1. Identifying the data structures
2. Editing the data
3. Coding and classifying the data
4. Transcription of data
5. Tabulation of data.


Checking for Analysis


In the data preparation step, the data are prepared in a data format which allows the analyst to use modern analysis software such as SAS or SPSS. The major criterion here is to define the data structure. A data structure is a dynamic collection of related variables and can be conveniently represented as a graph whose nodes are labelled by variables. The data structure also defines the stages of the preliminary relationships between variables/groups that have been pre-planned by the researcher. Most data structures can be presented graphically to give clarity to the framed research hypothesis. A sample structure could be a linear structure, in which one variable leads to the next and, finally, to the resultant end variable. The identification of the nodal points and of the relationships among the nodes can sometimes be a more complex task than estimated. When the task is complex, involving several types of instruments collected for the same research question, the procedure for drawing the data structure involves a series of steps. In several intermediate steps, the heterogeneous data structures of the individual data sets can be harmonized to a common standard, and the separate data sets are then integrated into a single data set. A clear definition of such data structures helps in the further processing of the data.
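As a minimal sketch of such a structure, the following Python fragment represents a simple linear data structure as a mapping from each variable (node) to the variables it leads to; the variable names are hypothetical and chosen only for illustration.

```python
# A hypothetical linear data structure: each variable (node) maps to the
# variable(s) it is pre-planned to lead to, ending in the resultant variable.
data_structure = {
    "advertising_spend": ["brand_awareness"],
    "brand_awareness": ["brand_preference"],
    "brand_preference": ["purchase_intent"],
    "purchase_intent": [],  # resultant end variable
}

# Walk the structure from the starting node to the resultant end variable.
node = "advertising_spend"
path = [node]
while data_structure[node]:
    node = data_structure[node][0]
    path.append(node)
print(" -> ".join(path))  # advertising_spend -> ... -> purchase_intent
```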

Editing
The next step in the processing of data is the editing of the data instruments. Editing is a process of checking to detect and correct errors and omissions. Data editing happens at two stages: first at the time of recording the data, and second at the time of analysis of the data.

1 Data Editing at the Time of Recording of Data

Document editing and testing of the data at the time of data recording are done keeping the following questions in mind:

1. Do the filters agree, or are the data inconsistent?
2. Have missing values been set to values which are the same for all research questions?
3. Have variable descriptions been specified?
4. Have labels for variable names and value labels been defined and written?

All editing and cleaning steps are documented, so that the redefinition of variables or later analytical modifications can easily be incorporated into the data sets.

2 Data Editing at the Time of Analysis of Data

Data editing is also a requisite before the analysis of data is carried out. This ensures that the data are complete in all respects before being subjected to further analysis. Some of the usual checklist questions a researcher can use for editing data sets before analysis are:

1. Is the coding frame complete?
2. Is the documentary material sufficient for the methodological description of the study?
3. Is the storage medium readable and reliable?
4. Has the correct data set been framed?
5. Is the number of cases correct?
6. Are there differences between the questionnaire, the coding frame and the data?
7. Are there undefined, so-called wild codes?
8. Has the first counting of the data been compared with the researcher's original documents?

The editing step checks for the completeness, accuracy and uniformity of the data as created by the researcher.

Completeness: The first step of editing is to check whether there is an answer to all the questions/variables set out in the data set. If there are omissions, the researcher may sometimes be able to deduce the correct answer from other related data on the same instrument. If this is possible, the data set has to be rewritten on the basis of the new information. For example, the approximate family income can be inferred from answers to other probes, such as the occupations of family members, sources of income, and the approximate spending, saving and borrowing habits of family members. If the information is vital and has been found to be incomplete, the researcher can take the step of contacting the respondent personally again and soliciting the requisite data. If none of these steps is possible, the item must be marked as missing data.

Accuracy: Apart from checking for omissions, the accuracy of each recorded answer should be checked. A random check process can be applied to trace errors at this step. Consistency in responses can also be checked here: cross-verification of a few related responses helps in checking for consistency. The reliability of the data set depends heavily on this step of error correction. While clear inconsistencies should be rectified in the data sets, fake responses should be dropped from them.

Uniformity: In editing data sets, another keen lookout should be for any lack of uniformity in the interpretation of questions and instructions by the data recorders. For instance, the responses towards a specific feeling could have been queried from a positive as well as a negative angle. While interpreting the answers, care should be taken to record each answer uniformly as a positive-question response or a negative-question response, and to check for consistency in coding throughout the questionnaire/interview schedule responses/data set.

The final point in the editing of a data set is to maintain a log of all corrections that have been carried out at this stage. The documentation of these corrections helps the researcher to retain the original data set.
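A minimal sketch of such editing checks in Python, using pandas with hypothetical column names and a hypothetical coding frame (valid satisfaction codes 1 to 5), is shown below.

```python
import pandas as pd

# Hypothetical survey responses; column names are assumed for illustration.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "occupation": ["teacher", "farmer", None, "clerk"],
    "family_income": [32000, None, 18000, 25000],
    "satisfaction": [4.0, 5.0, 99.0, 3.0],  # 99 is an undefined ("wild") code
})

# Completeness: flag records with omitted answers.
print("Incomplete records:\n", df[df.isna().any(axis=1)])

# Accuracy: flag codes outside the defined coding frame (1-5).
wild = df[~df["satisfaction"].between(1, 5)]
print("Wild codes:\n", wild)

# Mark irrecoverable omissions explicitly as missing rather than guessing,
# and log every correction so the original data set can be reconstructed.
df.loc[wild.index, "satisfaction"] = float("nan")
correction_log = [(int(i), "satisfaction", "wild code set to missing") for i in wild.index]
print(correction_log)
```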
