
1. What is referencing?

Academic writing draws on the ideas and research of other sources: books, journal articles, websites, and so forth. These other sources may be used to support the author's ideas, or the author may be discussing, analysing, or critiquing other sources.
Referencing is used to tell the reader where ideas from other sources have been used in an assignment. There are many reasons why it is important to reference sources correctly:

 It shows the reader that you can find and use sources to create a solid argument

 It properly credits the originators of ideas, theories, and research findings

 It shows the reader how your argument relates to the big picture

Failure to properly acknowledge sources is called plagiarism, and it can carry significant academic penalties. Fortunately, plagiarism is easy to avoid by following a few basic principles.

Referencing is needed when:

 You have copied words from a book, article, or other source exactly (quotation)

 You have used an idea or fact from an outside source, even if you haven't used
their exact wording (paraphrasing and summarising)

Common referencing styles:

 APA

 MLA

 Oxford

 Harvard

 Chicago
2. Basic formats for peer-reviewed research articles

There are three basic formats for peer-reviewed research articles:

 • Full-length research articles: These articles contain a comprehensive investigation of the subject matter and are viewed as the standard format. They use the “IMRAD” structure: Introduction, Methods, Results and Discussion. (See “Components of a Research Article.”)
 • Short (or brief) communications: While not as comprehensive in scope as full-length research articles, these papers also make a significant contribution to the literature. Their length is set by the journal but is usually 3,500 words or less, with up to 2 tables and figures. Unlike in full papers, the methods, results, and discussion may be combined into a single section.
 • Rapid communications: These articles quickly disseminate particularly “hot”
findings, usually in a brief communication format. Articles that have immediate
implications for public health would be appropriate for such a format, as might
findings in a highly competitive and quickly moving field.

3. Publication Ethics

 Carelessness: citation bias, understatement, negligence.
 Redundant publication: reusing the same tables or literature review without noting the prior source.
 Unfair authorship: failure to include eligible authors.
 Undeclared conflict of interest: failure to disclose a funding source.
 Human/animal subjects violations: no approval from a Review Board or Ethics Committee.
 Plagiarism: reproducing others’ work or ideas without citing the original source.
 Other frauds: fabrication or falsification of data, misappropriation of others’ ideas or plans given in confidence.

4. Common errors
• Sentence construction
• Incorrect tenses
• Inaccurate grammar
• Spelling mistakes

5. Primary sources

A primary source provides direct or firsthand evidence about an event, object, person, or work of art. Primary sources include historical and legal documents, eyewitness accounts, results of experiments, statistical data, pieces of creative writing, audio and video recordings, speeches, and art objects. Interviews, surveys, fieldwork, and Internet communications via email, blogs, listservs, and newsgroups are also primary sources. In the natural and social sciences, primary sources are often empirical studies—research where an experiment was performed or a direct observation was made. The results of empirical studies are typically found in scholarly articles or papers delivered at conferences.
6. Secondary Sources

Secondary sources describe, discuss, interpret, comment upon, analyze, evaluate, summarize, and process primary sources. Secondary source materials can be articles in newspapers or popular magazines, book or movie reviews, or articles found in scholarly journals that discuss or evaluate someone else's original research.

7. What is a sample?
A sample is defined as a smaller set of data that a researcher chooses or selects from a larger population using a pre-defined selection method. The individual members of a sample are known as sample points, sampling units, or observations. Creating a sample is an efficient way of conducting research: in most cases it is impossible, or too costly and time-consuming, to study the whole population. Examining the sample instead provides insights that the researcher can apply to the entire population.
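
As an illustration, drawing a simple random sample takes only a few lines of code. This is a minimal sketch: the population, sample size, and seed below are invented for the example.

```python
import random

# Hypothetical population of 10,000 numbered units (e.g., student IDs)
population = list(range(1, 10_001))

# Pre-defined selection method: simple random sampling without replacement
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=200)

print(len(sample))   # 200 sampling units
print(sample[:5])    # a few individual sample points
```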

9. Definition of Reliability

According to Drost (2011), reliability is “the extent to which measurements are repeatable when different people perform the measurement on different occasions, under different conditions, supposedly with alternative instruments which measure the construct or skill”. It can also be defined as the degree to which the measure of a construct is consistent or dependable. For instance, when several people guess your weight, the values will vary and are unlikely to match the true value; such a measurement is inconsistent and is therefore said to be unreliable. If, instead, different people use a weighing scale to measure your weight, they are likely to obtain the same value every time, so that measurement would be said to be reliable.

Reliability is basically the repeatability or replicability of research findings. When a study conducted by a researcher under certain conditions is repeated a second time and yields the same results, the data are said to be reliable.

10. Definition of Validity

“The extent to which a measure adequately represents the underlying construct that it is supposed to measure” (Drost, 2011) is called validity. The term construct refers to the skill, knowledge, attribute, or attitude that the researcher is investigating. For instance, if a researcher wanted to measure compassion, it would be vital to know whether the measure captures compassion rather than empathy, because the two terms are closely related.
11. Random error

Random error is attributed to a set of unknown and uncontrollable external factors that randomly influence some observations but not others. For example, respondents in a good mood may respond more positively to constructs like self-esteem, happiness, and satisfaction than respondents in a bad mood. Random error is seen as noise in measurement and is therefore usually ignored.

12. Systematic error

Systematic error is an error introduced by factors that systematically affect all observations of a construct across the entire sample. Systematic error is considered a bias in measurement and should be corrected to yield better results for the sample.

Types of reliability

13. Test-retest reliability

It is a measure of consistency between measurements of the same construct administered to the same sample at two different points in time (Drost, 2011). If the correlation between the two sets of scores is significant, the observations have not changed substantially; hence the aspect of time is very critical for this type of reliability.
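
As a sketch of how this is typically estimated, the scores from the two administrations can be correlated. The scores below are invented, and Pearson's r is just one common choice of consistency measure.

```python
from scipy.stats import pearsonr

# Invented scores for the same 8 respondents measured at two points in time
time1 = [12, 15, 11, 18, 14, 16, 10, 17]
time2 = [13, 15, 10, 19, 14, 17, 11, 16]

# A high, significant correlation suggests the observations have not changed
r, p = pearsonr(time1, time2)
print(f"test-retest reliability: r = {r:.2f}, p = {p:.4f}")
```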

14. Split-half reliability

Heale & Twycross, (2015) defined Split-half reliability as a measure of consistency

between two halves of a construct measure. For example, if a researcher uses a ten-

item measure to measure a construct, the items are divided into half or two sets of even

and odd if the total number of items is an odd one. It is assumed that the number of

items for measuring a construct is available and is measured within the same time

period hence minimize the random error. The correlation between the two halves must

be obtained to determine the coefficient of reliability. A practical advantage of this

method is that it is cheaper and obtained easily as compared test retest reliability where

the researcher has to design new set of items to administer later.
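
A minimal sketch of the split-half procedure, using invented item responses. The Spearman-Brown correction at the end is the standard adjustment from half-test to full-test reliability; the notes above do not mention it, so treat it as an added step.

```python
import numpy as np

# Invented responses: 6 respondents x 10 items of one construct measure
scores = np.array([
    [4, 5, 4, 4, 5, 4, 3, 4, 5, 4],
    [2, 2, 3, 2, 2, 3, 2, 2, 2, 3],
    [5, 4, 5, 5, 4, 5, 5, 4, 5, 5],
    [3, 3, 3, 4, 3, 3, 3, 3, 4, 3],
    [1, 2, 1, 1, 2, 1, 2, 1, 1, 2],
    [4, 4, 5, 4, 4, 4, 5, 4, 4, 4],
])

# Split the items into odd- and even-numbered halves and total each half
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Correlation between the two halves, then the Spearman-Brown correction,
# which estimates full-test reliability from the half-test correlation
r = np.corrcoef(odd_half, even_half)[0, 1]
r_full = 2 * r / (1 + r)
print(f"half-test r = {r:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```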


15. Inter-rater reliability

It is also called inter-observer reliability or inter-rater agreement. It involves the rating of observations on a specific measure by different judges. The ratings are made independently but at the same time. Reliability is obtained by correlating the scores from two or more raters on the same construct, or sometimes by the degree of agreement between the raters’ judgments. This is typically used when judges are rating or scoring a piece of artistic work or a music performance on stage. Their scores are correlated to give Cohen’s kappa coefficient of inter-rater reliability, especially if the variables are categorical.
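
For categorical ratings, Cohen's kappa can be computed directly; scikit-learn provides it as cohen_kappa_score. A minimal sketch with invented ratings from two judges:

```python
from sklearn.metrics import cohen_kappa_score

# Invented categorical ratings by two independent judges of 10 performances
judge_a = ["good", "fair", "poor", "good", "good", "fair", "poor", "good", "fair", "good"]
judge_b = ["good", "fair", "poor", "fair", "good", "fair", "poor", "good", "good", "good"]

# Kappa measures agreement beyond what chance alone would produce
kappa = cohen_kappa_score(judge_a, judge_b)
print(f"Cohen's kappa = {kappa:.2f}")
```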

16. Internal consistency reliability

It is a measure of consistency between different items of the same construct. It measures consistency within the instrument: how well a set of items measures a particular characteristic of the test. Single items within a test are correlated to estimate the coefficient of reliability. Cronbach’s alpha coefficient is used to determine internal consistency between items (Cronbach, 1951). An individual item of a test might have a small correlation with true scores, while a test with more items might have a higher correlation. For instance, a 5-item test might have a correlation of 0.40 while a 12-item test might have a correlation of 0.80. According to Cortina (1993), coefficient alpha is used to estimate reliability for item-specific variance in a one-dimensional test. If the coefficient alpha is low, it means that the test is too short or that the items have little in common.
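
Cronbach's alpha follows directly from its standard formula: alpha = k/(k-1) x (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch with invented data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented data: 5 respondents x 4 items measuring one construct
scores = np.array([
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [4, 4, 4, 4],
    [2, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```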

Types of validity

17. Construct validity


Construct validity refers to how well a concept, idea, or behavior (that is, a construct) is translated or transformed into a functioning and operating reality (Trochim, 2006). It matters especially when a cause-and-effect relationship is claimed, since construct validity justifies that the claimed relationship holds between the intended constructs. Construct validity is substantiated through the following types of validity: face validity, content validity, concurrent and predictive validity, and convergent and discriminant validity.

18. Face validity

It is where an indicator seems to be a reasonable measure of its underlying construct “on its face”. It ascertains that the measure appears to assess the intended construct under investigation. For example, the fact that an individual goes to church every Sunday might lead someone to conclude that the person is religious, which might not really be true. Face validity is often used by university lecturers when assessing research instruments designed by their students.

19. Content validity

This is an assessment of how well a set of scale items matches the relevant content domain of the construct it is trying to measure. According to Bollen (1989), as cited in Drost (2011), content validity is a qualitative type of validity where the domain of the concept is made clear and the analyst judges whether the measures fully represent the domain (p. 185). The researcher should design a research instrument that adequately addresses the construct or area under investigation. For instance, if a researcher wants to investigate the implementation of a new curriculum, then the research instrument or test items designed by the researcher must adequately address that domain to yield valid research findings. A group of judges or experts with content knowledge in the area under investigation can be used to assess this type of validity.

20. Convergent and Discriminant validity

Convergent and discriminant validity are assessed together, or jointly, for a set of measures. Convergent validity refers to the closeness with which a measure relates to the construct it is purported to measure; that is, it converges with the construct. Discriminant validity refers to the degree to which a measure does not relate to, and so discriminates from, constructs it is not supposed to measure. Convergent validity is assessed by comparing the observed values of one indicator of a construct with other indicators of the same construct. Discriminant validity is assessed by demonstrating that indicators of one construct are dissimilar to indicators of other constructs. Bivariate correlations, or exploratory factor analysis of the items, can be used to analyze convergent and discriminant validity.
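
A minimal sketch of the bivariate-correlation check described above. The indicators and their values are invented: se1/se2 stand for two hypothetical indicators of self-esteem, js1/js2 for two of job satisfaction.

```python
import pandas as pd

# Invented responses: two indicators each for two hypothetical constructs
data = pd.DataFrame({
    "se1": [1, 2, 3, 4, 5, 6],
    "se2": [2, 1, 3, 5, 4, 6],
    "js1": [4, 2, 6, 1, 5, 3],
    "js2": [5, 2, 6, 1, 4, 3],
})

# Convergent validity: se1-se2 and js1-js2 correlations should be high.
# Discriminant validity: se-js cross-correlations should be comparatively low.
print(data.corr().round(2))
```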

21. Criterion-related validity

It is the degree of correspondence, established by correlation, between a test measure and one or more external referents (criteria) (Mohajan, 2017). For instance, suppose students sat an examination and we then asked them to report their scores. A correlation can be computed between these self-reported scores and the true scores from the teachers’ records. Criterion-related validity takes the form of concurrent or predictive validity. Concurrent validity is where one measure relates to another concrete criterion that is presumed to occur simultaneously; it applies when the criterion exists at the same time as the measure. An example could be students’ performance scores in calculus and linear algebra, since both are mathematics tests. Predictive validity is where a measure successfully predicts a future outcome that it is theoretically expected to predict. A good example of predictive validity is the use of students’ Continuous Assessment Test (CAT) scores to predict their performance in the final examination: the CAT scores can be correlated with the scores obtained in the final examination.
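
A minimal sketch of checking predictive validity with invented CAT and final examination scores. The linear fit at the end is an illustrative extra step, not something the notes above prescribe.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented scores: Continuous Assessment Test (CAT) and Final Examination
cat_scores = [55, 62, 70, 48, 80, 66, 73, 58]
final_scores = [58, 65, 72, 50, 85, 64, 78, 61]

# Predictive validity: the CAT should correlate with the later outcome
r, p = pearsonr(cat_scores, final_scores)
print(f"CAT vs Final Examination: r = {r:.2f}, p = {p:.4f}")

# A simple linear fit then predicts a final score from a CAT score
slope, intercept = np.polyfit(cat_scores, final_scores, deg=1)
print(f"predicted final score for CAT = 60: {slope * 60 + intercept:.1f}")
```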

Types of Data Collection

22. Quantitative Data. 

These are data that deal with quantities, values, or numbers, making them measurable. They are usually expressed in numerical form, such as length, size, amount, price, and even duration. The use of statistics to generate and subsequently analyze this type of data adds credence or credibility to it, so quantitative data are generally seen as more reliable and objective.

23. Qualitative Data. 

These data, on the other hand, deal with quality: they are descriptive rather than numerical in nature. Unlike quantitative data, they are generally not measurable and are mostly gained through observation. Narratives often make use of adjectives and other descriptive words to refer to data on appearance, color, texture, and other qualities.

Qualitative Data Collection Methods

24. Face-to-Face Personal Interviews

This is considered to be the most common data collection instrument for qualitative research, primarily because of its personal approach. The interviewer collects data directly from the subject (the interviewee) in a one-on-one, face-to-face interaction. This is ideal when the data to be obtained must be highly personalized.

The interview may be informal and unstructured – conversational, even – as if taking place between two close friends. The questions asked are mostly unplanned and spontaneous, with the interviewer letting the flow of the interview dictate the next questions to be asked.

However, if the interviewer still wants the data to be standardized to a certain extent for easier analysis, he could conduct a semi-structured interview, where he asks the same series of open-ended questions of all the respondents. If he instead lets the subject choose an answer from a set of options, what takes place is a closed, structured, fixed-response interview.

 (+) This allows the interviewer to probe further, by asking follow-up questions and
getting more information in the process.

 (+) The data will be highly personalized (particularly when using the informal
approach).

 (-) This method is subject to certain limitations, such as language barriers, cultural differences, and geographical distances.

 (-) The person conducting the interview must have very good interviewing skills in
order to elicit responses.

25. Qualitative Surveys

 Paper surveys or questionnaires. Questionnaires often use a structure of short questions and, in the case of qualitative questionnaires, these are usually open-ended, with respondents asked to provide detailed answers in their own words. It is almost like answering essay questions.

o (+) Since questionnaires are designed to collect standardized data, they are ideal for use with large populations or sample sizes of respondents.

o (+) The high amount of detail provided will aid analysis of data.

o (-) On the other hand, the large number of respondents (and data),
combined with the high level and amount of detail provided in the answers, will make
data analysis quite tedious and time-consuming.
 Web-based questionnaires. This is basically a web-based or internet-based survey: a questionnaire is uploaded to a site, which respondents log into and complete electronically. Instead of paper and pen, they use a computer screen and mouse.
o (+) Data collection is definitely quicker. This is often because the questions are shorter and require less detail than in, say, a personal interview or a paper questionnaire.

 (+) It is also uncomplicated, since the respondents can be invited to answer the
questionnaire by simply sending them an email containing the URL of the site
where the online questionnaire is available for answering.
 (-) There is a limitation on the respondents, since the only ones able to answer are those who own a computer, have an internet connection, and know their way around answering online surveys.

 (-) The lesser amount of detail provided means the researcher may end up with
mostly surface data, and no depth or meaning, especially when the data is
processed.

26. Focus Groups

The focus group method is basically an interview method, but done in a group discussion setting. When the object of study is behaviors and attitudes, particularly in social situations, and resources for one-on-one interviews are limited, the focus group approach is highly recommended. Ideally, a focus group should have from at least 3 up to around 10 to 13 people, plus a moderator.

Depending on the data being sought, the members of the group should have something in common. For example, a researcher conducting a study on the recovery of married mothers from alcoholism will choose women who are (1) married, (2) have kids, and (3) recovering alcoholics. Other parameters, such as age, employment status, and income bracket, do not have to be similar across the members of the focus group.

The topic that data will be collected about is presented to the group, and the moderator opens the floor for discussion.

 (+) There may be only a small group of respondents, but the group setting, in which data is delivered and shared through discussion, makes it possible to come up with a wide variety of answers.

 (+) The data collector may also get highly detailed and descriptive data by using
a focus group.

 (-) Much of the success of the discussion within the focus group lies in the hands
of the moderator. He must be highly capable and experienced in controlling these types
of interactions.

27. Document Review

This method involves the use of previously existing and reliable documents and other sources of information as a source of data for a new research study or investigation. It is akin to a data collector going to a library and going over the books and other references for information relevant to the current research.
 (+) The researcher will gain better understanding of the field or subject being
looked into, thanks to the reliable and high quality documents used as data sources.

 (+) Taking a look into other documents or researches as a source will provide a
glimpse of the subject being looked into from different perspectives or points of view,
allowing comparisons and contrasts to be made.

 (-) Unfortunately, this relies heavily on the quality of the document that will be
used, and the ability of the data collector to choose the right and reliable documents. If
he chooses wrong, then the quality of the data he will collect later on will be
compromised.

28. Observation

In this method, the researcher takes a participatory stance, immersing himself in the
setting where his respondents are, and generally taking a look at everything, while
taking down notes.

Aside from note-taking, other documentation methods may be used, such as video and
audio recording, photography, and the use of tangible items such as artifacts,
mementoes, and other tools.

 (+) The participatory nature may lead to the researcher getting more reliable
information.

 (+) Data is more reliable and representative of what is actually happening, since
they took place and were observed under normal circumstances.

 (-) The participation may end up influencing the opinions and attitudes of the researcher, so he may have difficulty remaining objective and impartial once the data he is looking for comes in.

 (-) Validity issues may arise due to the risk that the researcher’s participation may affect the naturalness of the setting. The observed may become reactive to the idea of being watched. If the researcher planned to observe recovering alcoholic mothers in their natural environment (e.g. at their homes with their kids), his presence may cause the subjects to react differently, knowing that they are being observed. This may impair the results.

29. Longitudinal studies

This is a research or data collection method that is performed repeatedly, on the same
data sources, over an extended period of time. It is an observational research method
that could even cover a span of years and, in some cases, even decades. The goal is to
find correlations through an empirical or observational study of subjects with a common
trait or characteristic.
An example of this is the Terman Study of the Gifted, conducted by Lewis Terman at Stanford University. The study aimed to gather data on the characteristics of gifted children – and how they grow and develop – over their lifetime. Terman started in 1921, and the study extended over the lifespan of its subjects: more than 1,500 boys and girls aged 3 to 19, with IQs higher than 135. To this day, it is the world’s “oldest and longest-running” longitudinal study.

 (+) This is ideal when seeking data meant to establish a variable’s pattern over a
period of time, particularly over an extended period of time.

 (+) As a method to find correlations, it is effective in finding connections and relationships of cause and effect.

 (-) The long period may become a setback, considering how low the probability is that the full set of subjects from the beginning of the research will still be available 10, 20, or 30 years down the road.

 (-) Over the extended period, attitudes and opinions of the subjects are likely to
change, which can lead to the dilution of data, reducing their reliability in the
process.

30. Case Studies

In this qualitative method, data is gathered through a close look at and in-depth analysis of a “case study” or “case studies” – the unit or units of research, which may be an individual, a group of individuals, or an entire organization. This methodology’s versatility is demonstrated in how it can be used to analyze both simple and complex subjects.

However, the strength of a case study as a data collection method is attributed to how it utilizes other data collection methods and thereby captures more variables than a single methodology can. In analyzing the case study, the researcher may employ other methods, such as interviews, questionnaires, or group discussions, to gather data.

 (+) It is flexible and versatile, analyzing both simple and complex units and occurrences, even over a long period of time.

 (+) Case studies provide in-depth and detailed information, thanks to how they capture as many variables as they can.

 (-) Reliability of the data may be put at risk when the case study or studies
chosen are not representative of the sample or population.
 
31. Quantitative Surveys

Unlike the open-ended questions asked in qualitative questionnaires, quantitative paper surveys pose closed questions, with the answer options provided. The respondents only have to choose their answer from among the choices provided on the questionnaire.

 (+) Similarly, these are ideal for use when surveying large numbers of
respondents.

 (+) The standardized nature of questionnaires enables researchers to make generalizations from the results.

 (-) This can be very limiting for respondents, since a respondent's actual answer to a question may not be among the options provided on the questionnaire.

 (-) While data analysis is still possible, it will be restricted by the lack of detail.

32. Interviews

Personal one-on-one interviews may also be used for gathering quantitative data. When collecting quantitative data, the interview is more structured than when gathering qualitative data, consisting of a prepared set of standard questions.

These interviews can take the following forms:

 Face-to-face interviews: Much like interviews conducted to gather qualitative data, these can also yield quantitative data when standard questions are asked.

o (+) The face-to-face setup allows the researcher to seek clarification of any answer given by the interviewee.

o (-) This can be quite a challenge when dealing with a large sample size or
group of interviewees. If the plan is to interview everyone, it is bound to take a lot of
time, not to mention a significant amount of money.

 Telephone and/or online, web-based interviews. Conducting interviews over the telephone is no longer a new concept. Rapidly rising to take the place of telephone interviews is the video interview via internet connection and web-based applications, such as Skype.

(+) The net for data collection may be cast wider, since there is no need to travel long distances to get the data. All it takes is to pick up the phone and dial a number, or connect to the internet and log on to Skype for a video call or video conference.
(-) Quality of the data may be questionable, especially in terms of impartiality. The net may be cast wide, but it will only reach a specific group of subjects: those who have telephones and internet connections and know how to use such technologies.

 Computer-assisted interviews. This is called CAPI, or Computer-Assisted Personal Interviewing, where, in a face-to-face interview, the data obtained from the interviewee is entered directly into a database through the use of a computer.

(+) The direct input of data saves a lot of time and other resources in converting them into
information later on, because the processing will take place immediately after the data has
been obtained from the source and entered into the database.

(-) The use of computers, databases and related devices and technologies does not come
cheap. It also requires a certain degree of being tech-savvy on the part of the data
gatherer.

33. Quantitative Observation

This is straightforward enough. Data may be collected through systematic observation by, say, counting the number of users present and currently accessing services in a specific area, or the number of services being used within a designated vicinity.

When quantitative data is being sought, the approach is naturalistic observation, which
mostly involves using the senses and keen observation skills to get data about the
“what”, and not really about the “why” and “how”.

 (+) It is a quite simple way of collecting data, and not as expensive as the other
methods.

 (-) The problem is that the senses are not infallible. The observer's perception of the situations and people around him may be unconsciously skewed, so bias on the part of the observer is very possible.

34. Experiments

Have you ever wondered where clinical trials fall? They are considered a form of experiment, and are quantitative in nature. These methods involve the manipulation of an independent variable while maintaining varying degrees of control over the other variables, and observing the effect on the dependent ones. They are usually employed to obtain data that will be used later on for the analysis of relationships and correlations.

Quantitative research often makes use of experiments to gather data, and the types of experiments are:
 Laboratory experiments. This is your typical scientific experiment setup, taking
place within a confined, closed and controlled environment (the laboratory), with the
data collector being able to have strict control over all the variables. This level of control
also implies that he can fully and deliberately manipulate the independent variable.

 Field experiments. This takes place in a natural environment, “in the field”, where the data collector may not be in full control of the variables but is still able to control them to a certain extent. Manipulation is still possible, although not as deliberate as in a laboratory setting.

 Natural experiments. This time, the data collector has no control over the
independent variable whatsoever, which means it cannot be manipulated.
Therefore, what can only be done is to gather data by letting the independent
variable occur naturally, and observe its effects.

35. What is grey literature?


Grey literature is information produced outside of traditional publishing and distribution
channels, and can include reports, policy literature, working papers, newsletters,
government documents, speeches, white papers, urban plans, and so on.

This information is often produced by organizations "on the ground" (such as government and inter-governmental agencies, non-governmental organisations, and industry) to store information and report on activities, either for their own use or for wider sharing and distribution, and without the delays and restrictions of commercial and academic publishing. For that reason, grey literature can be more current than literature in scholarly journals.

However, because grey literature (usually) does not go through a peer review process,
the quality can vary a great deal. Be sure to critically evaluate your source.
