
LECTURE: September 12, 2018

 Identify a set of dimensions that are latent in a large set of variables
 Describe, if possible, the covariance of relationships

In Psychological Measurement
 Alternative technique to item analysis
 Provides additional conceptual information
 For validation
 For testing the adequacy of the unidimensional model
 Interstitial items

Terms
 Communality: variance in the item accounted for by all factors
o Indicates the importance of the item in explaining the factor structure
o Items with high communality are good for a test
 Eigenvalue: indicates the importance of a factor in the data
o Factor 1 usually has the largest eigenvalue and explains the biggest share of variance in the total score
o An eigenvalue of 1 or higher means the factor is important
 Common variance: distribution of variance among the factors
 Total variance: variance explained by each factor considering the total variability of the original variables
o High variance % = important factor, as it explains a significant amount of the total score

Model Adequacy
 Bartlett's test of sphericity: FA is appropriate if the p-value is smaller than the level of significance (i.e., the null hypothesis is rejected)
o Lower p-value  appropriate analysis
 Kaiser-Meyer-Olkin (KMO)
o A higher value means the data are adequate for factor analysis
 <0.5 is unacceptable
 0.5 is miserable
 0.6 is mediocre
 0.7 is middling
 0.8 is meritorious
 0.9 is marvelous

Conventional Code (CODE C) – EFA, PA extraction, Quartimax rotation
 SPSS Results
o KMO: 0.823
o Bartlett's Test of Sphericity
o Factor analysis is appropriate because the null hypothesis is rejected at alpha level 0.05

Interpretation of Factors
 Qualitative dimension
 Coordinate axis
 Defines the way in which entities may differ
 Does not indicate how different various entities are
 Indicates important common qualities present in the data

Methods of Extraction
 Extraction is the process of obtaining the factors from the data
 Methods
o Principal Components
o Principal Axis (a.k.a. Principal Factor Solution)
o Maximum Likelihood
o Others
 For Likert scales, use the principal components or principal axis method

Factor Rotation
 Goals of factor analysis:
o To achieve the simplest possible structure
o To make each factor uniquely define a distinct cluster of intercorrelated variables
 Some basic ideas
o All variables contribute to a factor
o Several different factor solutions can account for the same set of correlations
o The location of the variables in relation to the factor axes is arbitrary  no orientation of the axes (factors) is any more correct than another
 Rotation is done to obtain the simplest structure
 Two types: orthogonal and non-orthogonal rotation

Axis: each factor
Rotate the axes so that the items fit better
Interstitial items: load on both factor 1 and factor 2

Orthogonal rotation
 Axes remain perpendicular (90 degrees) during rotation
 Types
o Varimax: factors have large, medium, and small loadings
o Quartimax: for a given variable, there is one and only one major loading on a given factor
 Maximize the loading for one factor
o Equimax: combination of varimax & quartimax
 Implications of an orthogonal rotation
o Factors will remain statistically uncorrelated
o Communality estimates are not altered
o The percentage of variance accounted for by an individual factor will, in general, be different

Oblique Rotation
 No longer 90 degrees
 Also known as non-orthogonal rotation
 Factors will be correlated
 Types
o Oblimax: high and low loadings are increased, middle loadings are decreased
o Quartimin: the sum of inner products of loadings is minimized
o Covarimin: similar to varimax but using oblique rotation
o Biquartimin: combination of quartimin and covarimin
o Oblimin: a different combination of quartimin and covarimin

Using Exploratory Factor Analysis in Checking the Unidimensionality of a Scale
 Chen, Hwang, and Lin (2013)
o First factor accounts for at least 20% of the variance
 Can say the scale is unidimensional, i.e., it is measuring one aspect of the construct
o Ratio of the first and second eigenvalues is greater than 4
 If the scale is unidimensional
o All items that don't load on the first factor can be removed

Cronbach's alpha if item deleted
Discrimination index
EV
SEM
Communality
KMO
Bartlett's Test of Sphericity
Quartimax

September 19, 2018
Preparation of Data
 Excluding respondents that shotgun
o Counting on Excel
o Compute the standard deviation for each person
 Less than 1  review the participant
 Demographics of participants
 Recode negatively-worded items
o Can be done in SPSS  recoded variable
o Excel  "i1rec"
 =6-(score)
 SPSS has a function that can remove duplicates

Item Evaluations
 Are the items dichotomous or polytomous?
o Item facility/item difficulty
 P and item variance
 Proportion of each response category
 Mean/median score
o Item discrimination
 Computing item-total correlations
 Coefficient of determination
 Comparing groups (those who got the item correct vs those who got it wrong)
 Comparing extreme groups

September 21, 2018
Scoring and Norm Development

What can we use instead of raw scores in reporting results? In comparing an individual against others? In comparing the scores of an individual?
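The eigenvalue-based unidimensionality checks from the EFA section above (Chen, Hwang, and Lin, 2013: first factor ≥ 20% of variance, first-to-second eigenvalue ratio > 4) can be sketched with plain NumPy. The `responses` matrix below is simulated illustrative data, not course data:

```python
# Sketch of the Chen, Hwang, and Lin (2013) unidimensionality checks,
# using the eigenvalues of the inter-item correlation matrix.
import numpy as np

rng = np.random.default_rng(0)
# 200 respondents x 10 items: one common factor plus noise -> roughly unidimensional
factor = rng.normal(size=(200, 1))
responses = factor @ rng.uniform(0.5, 0.9, size=(1, 10)) + rng.normal(size=(200, 10))

R = np.corrcoef(responses, rowvar=False)        # inter-item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, largest first

first_pct = eigvals[0] / eigvals.sum()          # proportion of variance, factor 1
ratio = eigvals[0] / eigvals[1]                 # first-to-second eigenvalue ratio

# Rules of thumb from the notes: first factor >= 20% of variance,
# and a first/second eigenvalue ratio > 4, suggest a unidimensional scale.
print(f"first factor: {first_pct:.1%}, ratio: {ratio:.2f}",
      "-> unidimensional" if first_pct >= 0.20 and ratio > 4 else "-> multidimensional")
```

In practice the KMO and Bartlett statistics reported above would be obtained from SPSS (or a dedicated package) before reading the eigenvalues.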
Sample Protocol of Data Analysis (CTT)

Abstract Reasoning Test (dichotomous items)
 Item Facility: compute p and p(1-p)
 Item Discrimination: compare the mean total scores of those who got the item correct vs wrong; compute corrected item-total correlations (point-biserial)
 Internal consistency: KR-20

Conscientiousness Scale (polytomous items)
 Item Facility: compute the proportion/percentage of each response category; compute the item median
 Item Discrimination: compare the mean total scores of the top 30% vs the bottom 30%; compute corrected item-total correlations (Pearson's r)
 Internal consistency: Cronbach's alpha

Item Facility
 Higher p value  more interest
Item Discrimination
Item-total correlation
Exploratory factor analysis would inform us of the different constructs given in the Philippine context

Ranks
 Compare different attributes of an individual
 No frame of reference

Percentiles & Percentile Ranks
 Compare the individual relative to others
 Too many frequency distributions
 Can the scores in different attributes be compared? Central tendency and dispersion?
 Unequal abilities for different attributes

Z-score
 Standard score
 Scores from different scales are directly comparable
 z = [X – mean] / standard deviation
 Advantages
o Retains the shape of the raw-score distribution and the relationships between scores
o Easily identifies a score's location relative to the average (mean = 0, SD = 1)
 A score of 0 = average/close to the mean
o Has the properties of a normal curve
 Disadvantage
o Not easily understood (negative signs and decimals)
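The dichotomous-item column of the sample protocol above can be sketched in a few lines: item facility p, item variance p(1−p), corrected item-total correlations, and Cronbach's alpha (which reduces to KR-20 for 0/1 items). The score matrix is made-up illustrative data:

```python
# Sketch of the CTT protocol for a dichotomous test.
import numpy as np

scores = np.array([  # 6 examinees x 4 items, 1 = correct (hypothetical data)
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])

p = scores.mean(axis=0)            # item facility
item_var = p * (1 - p)             # item variance for dichotomous items

def corrected_item_total(x, i):
    """Correlate item i with the total score excluding item i."""
    rest = np.delete(x, i, axis=1).sum(axis=1)
    return np.corrcoef(x[:, i], rest)[0, 1]

r_it = [corrected_item_total(scores, i) for i in range(scores.shape[1])]

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

# alpha comes out to 0.667 for this toy data
print("p:", p.round(3), "alpha:", round(cronbach_alpha(scores), 3))
```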
Standardization for easier interpretation
 z-score
o (raw score – mean)/(standard deviation)
 T score
o (z × 10) + 50
 Stanine score
o Normal curve divided into 9 groups
 Sten score
o Normal curve divided into 10 groups
 Scaled score
o (z score × 3) + 10
 IQ format score
o (z score × 15) + 100

Lowest possible score = 36; highest possible score = 180
Build a frequency distribution table, then look at the cumulative percent

Interpolating percentile ranks
 Step = (Highest PR – Lowest PR) / number of intervals between scores
 Add the step to each subsequent score

e.g. What score is at the 20th percentile?
Raw Score   Percentile
34          15.6
35          17.2
36          18.8
37          20.4
38          22.1

22.1 – 15.6 = 6.5
6.5 / 4 = 1.625
15.6 + (3)(1.625) = 20.475, so a raw score of 37 falls at about the 20th percentile
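The standard-score formulas and the interpolation example above can be checked in a few lines (note that the T score is T = 10z + 50); the raw-score/percentile table is the one from the notes:

```python
# Sketch of the standard-score conversions and the percentile interpolation above.
def z_score(x, mean, sd):
    return (x - mean) / sd

def t_score(z):       # T = 10z + 50
    return 10 * z + 50

def scaled_score(z):  # mean 10, SD 3
    return 3 * z + 10

def iq_score(z):      # deviation IQ: mean 100, SD 15
    return 15 * z + 100

# Interpolation: (highest PR - lowest PR) / number of intervals,
# then add the step for each score above the lowest one.
raw = [34, 35, 36, 37, 38]
pr = [15.6, 17.2, 18.8, 20.4, 22.1]

step = (pr[-1] - pr[0]) / (len(raw) - 1)   # (22.1 - 15.6) / 4 = 1.625
pr_37 = pr[0] + 3 * step                   # 15.6 + 3 * 1.625 = 20.475
print(step, pr_37)  # raw score 37 sits at about the 20th percentile
```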
Get the minimum score
Get the maximum score
Get the total score for each participant

Reminders for Norm Development
 Clean the data, then reverse-code
 Run the reliability analysis
o Scales and subscales used for scoring should have good reliability
 Determine the scoring system: which scores you will use and how you will report the results
 In using score classifications, characterize and differentiate the categories
o Expected behaviours and characteristics for, e.g., a low, medium, or high score
 Consider different scoring systems and norms for different populations and subpopulations

Adjusting Scores: What do you do when certain raw scores are not observed in your normative sample?
 Include these scores in the closest group of scores (either the lower or upper group)

e.g. including scores in the lowest group
Group   Scores
Low 1   20-41
    2   42-47
    3   48-52
    4
    5

Example: Interpolation of Scores (see the 20th-percentile computation above)

Lecture: October 3, 2018
Item Response Theory

CTT – Classical Test Theory
 Various techniques for measuring reliability can be applied depending on the consistency being measured. A longer test is better for reliability, as it produces less random error.
 Item means are typically used to examine item location
 Item-total correlations are commonly used to examine item relevance
 The property of invariance is NOT shown
o P (not p-value) for item facility
 Low-ability sample: lower p
 High-ability sample: higher p
 It is much simpler to understand
 Homogeneity of the sample is needed. Tests are dependent on the sample.
 Items contribute equally to the total score. A single SEM is computed for the whole test.

IRT
 Information functions are used for test and item reliabilities. A reliable tool gives more information, as it has more precision.
 Parameter estimates are used to examine item location. It can be observed how items function differently along the trait continuum.
 Parameter estimates and IIFs can be used to examine item relevance
 The property of invariance can be applied. Items can have a common metric for a latent trait. Individuals with different trait levels can be placed onto a common scale, even if they responded to different items. Computer-adaptive testing systems can also be developed from this.
o Regardless of the ability of the people taking the test, the items still have the same properties
 It gives thorough information on the psychometric properties of items, tests, and scores, which is used in arriving at results
 It is NOT sample-dependent
 Items can have different contributions to the total score, and various SEMs can be computed
o e.g. for high ability and low ability

Three-Parameter Model
 Fits the probability of getting a response correct with different curves

Rasch Model

Item Characteristic Curves
 Assuming dichotomous items
 Axes
o X-axis: ability / the ability continuum
o Y-axis: probability of getting the item correct
 Discrimination parameter: indicated by the steepness of the slope
o A steeper slope means the item can better discriminate individuals according to ability level
o Steep slope: an increase in ability means an abrupt increase in the probability of getting the item correct; deviating just a little from the average results in a much bigger change in the probability of getting the item correct
o Non-steep slope: an increase in ability means only a gradual increase in the probability of getting the item correct
o "Ogive curve"
 Difficulty parameter: indicated by the location of the point of inflection (where the curve changes direction)
o A point of inflection located at a higher ability level means greater item difficulty
o A curve further to the right means greater item difficulty
 Guessing parameter: indicated by the starting point of the curve within the graph
o It pertains to the probability that low-ability individuals can get the item correct
o Curve starts lower on the y-axis  people aren't guessing the item
o Curve starts higher on the y-axis  higher chance of getting the item correct (assume: guessing)

Item Information Functions (indicate the reliability of items)
 How much information can each item give across the continuum of ability levels (proficiency)?
 Item selection will depend on the test's purpose, but we would not want something like item 38
 Item 8 will give more information about people who are around the average, but not extreme scorers (peak)
 Item 38 is an ugly item, since it does not give a lot of information across all possibilities (flat)
 One line represents one item
 Axes
o X: ability/proficiency for a trait
o Y: information

Test Information Function
 Summary of the item information functions
 Information and standard error (S.E.) have an inverse relationship
o With a high reliability coefficient, you expect a lower standard error of measurement
o If information is high, expect a low standard error of measurement
o Each ability point has its own standard error of measurement
o Low standard error of measurement = high reliability = gives a lot of information
o Bad items = no curves; all people agree with the item

Assumptions
 Unidimensionality: the items in a test measure only one latent trait
o Do factor analysis first
 Local independence: responding to an item should be unrelated to responding to another item
 Model choice and fit: the model chosen is appropriate for the data

De Mars, C. Item Response Theory.
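The three item parameters and the information–standard-error relationship above can be sketched with the standard three-parameter logistic (3PL) model; the item parameters here are invented for illustration:

```python
# Sketch of the 3PL model and information functions described above.
import math

def p_correct(theta, a, b, c):
    """3PL: guessing floor c, difficulty b (point of inflection), discrimination a (slope)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Standard 3PL item information: a^2 * (Q/P) * ((P - c) / (1 - c))^2."""
    P = p_correct(theta, a, b, c)
    return a**2 * ((1 - P) / P) * ((P - c) / (1 - c)) ** 2

items = [(1.8, 0.0, 0.20),   # steep, discriminating item peaked near theta = 0
         (0.6, 1.0, 0.25)]   # flat item: little information anywhere

theta = 0.0
test_info = sum(item_information(theta, *it) for it in items)  # test information function
sem = 1 / math.sqrt(test_info)  # information and SEM are inversely related
print(f"test information at theta=0: {test_info:.3f}, SEM: {sem:.3f}")
```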
Graded Response Model (GRM) for polytomous items
 Operating Characteristic Curves
o Probability of an individual with a particular trait level selecting a response category
o One graph = one item
o Each line = a response category
o e.g. 90 items  90 graphs, each with 4 lines (strongly agree, agree, disagree, strongly disagree)
o Solid line = strongly disagree
o Axes
 y: probability of endorsing or choosing a response
 x: trait level
o Positively-worded item
 People with a low trait level will have higher chances of disagreeing with positively-worded items
 Item Information Functions
o Each item's IIF is a summary of its response curves
o How much information across the continuum of trait levels
o For personality tests, we would want information across the continuum
o A certain item gives information about a certain trait level
 Test Information Function (a graph for the whole test)
o Summary of the item information functions
o Information and S.E. have an inverse relationship

VALIDITY
What are we measuring with tests?
What inferences can we draw from test scores?
Not black and white – there is a degree of validity

Validity
 Agreement between the test score and the quality it is measuring
 Evidence for inferences made about a test score
o Construct-related
o Criterion-related
o Content-related
 Validation: the process whereby validity evidence is gathered
 Begins with an explicit statement of the conceptual framework and the rationale for a test
 Is open-ended (includes all information that adds to our understanding of test results)

Constructs in Test Validation
 What the test author sets out to measure: traits, processes, knowledge stores, characteristics designated through behaviour
 Specific interpretation of test data based on pre-established theoretical and empirical relationships between test scores and other variables: inferences designated based on test scores
o Theoretical

Face Validity
 An aspect of validity
 The mere appearance of a measure
 Do the items seem to be reasonably related to the perceived purpose of the test?
 Note: it is really not validity at all, because it does not offer evidence to support conclusions drawn from test scores
 E.g. if your test is about intelligence, will a person taking it agree that it is a test of intelligence?

Content-Related Evidence for Validity
 The adequacy of representation of the conceptual domain the test is designed to cover
 Can be defended on logical grounds
 Entails careful specification of the content domains, cognitive processes, skills, or types of performance to be sampled by the test, and of their relative importance or weight
 Are important aspects of the construct included in the measure?
 E.g. get a pool of experts and have them validate the construct
 Possible Errors
o Construct underrepresentation: failure to capture important components of a construct
o Construct-irrelevant variance: when scores are influenced by factors irrelevant to the construct

Criterion-Related Evidence for Validity – more on performance
 How well a test corresponds with a particular criterion
o Predictive: forecasting function with a predictor and a criterion
 e.g. UPCAT (predictor) aims to predict academic performance (criterion) in UP
 Test scores are used as predictors
 Other behaviour or outcomes serve as the criterion
o Concurrent: simultaneous relationship between test and criterion
 e.g. level of extraversion
 Validity coefficient: relation between a test and its criterion
o e.g. correlation or regression

Research
 Compare scores on the test to scores on other measures believed to be related to the test
 Compare individuals with themselves, not to the norms of a group
 Results mean nothing unless the criterion is valid and reliable

Evaluation of Research
 Review the population
 Be sure the sample size is adequate
 Never confuse the criterion with the predictor
 Check for restricted range on both predictor and criterion
 Review evidence for validity generalization
 Consider differential prediction

Construct-Related Evidence for Validity
 Established through a series of activities in which a researcher simultaneously defines some construct and develops the instrumentation to measure it
o Convergent: a test should have strong correlations with related constructs
 e.g. a scale on loneliness and general sense of belongingness
o Divergent: a test should have weak correlations with unrelated constructs
 e.g. loneliness vs solitude, isolation
 e.g. stress, anxiety

A validation study does not represent low SOB [sense of belongingness]

Criterion-Related Validity (Predictive) Study
Measure performance in terms of:
 Engagement of students in school-related activities
o Joining orgs
o Joining councils
o Cooperation for a group project
o Joining a sorority/frat
 Deviant behaviors
o Alcohol/drugs
 Depressive symptoms
 Academic performance

Lecture: October 5, 2018

Multitrait-Multimethod Correlation Matrix
 Identify 3 methods
o Self-report
o Peer-report
o Observation
 Constructs/traits: pakikisama, agreeableness, pakikibata
 (BLUE) reliability coefficient: correlating the same construct using the same method // correlating a score with itself
o Better if high
 pakikisama-pakikisama: high
 pakikisama-agreeableness: high
 pakikisama-pakikibata: low
 If a negative relationship  a high but negative correlation coefficient
o e.g. SOB and loneliness
 (PURPLE) validity coefficient: same construct but different method
o Better if high

Some useful statistical techniques in validity studies
 t-test
 ANOVA
 Tests of association/correlation
 Regression
 Other multivariate analyses

Other Pointers
 Validation involves inferences made from test scores
 Criterion = any behavioral outcome indicator that is parallel to the construct being measured by the scale

Criterion-related
 Predictive: longitudinal study
 Concurrent: experimental study

Construct-related validity evidence
 Purpose: establish that the construct you are trying to measure is a separate construct

RECAP: Test Development
 We should develop and use tests that are reliable and have valid results
 We use statistics to establish reliability and validity
 We try to minimize measurement errors through good design and standardized procedures

Personality
 Consistent behaviour patterns and intrapersonal processes originating within the individual  behaviour is not solely a function of the situation (Burger, 2010)
o Has to do with something within the person
 A pattern of relatively permanent traits and unique characteristics that give both consistency and individuality to a person's behaviour (Feist & Feist, 2008)
 Psychological qualities that contribute to an individual's enduring and distinctive patterns of feeling, thinking, and behaving (Cervone & Pervin, 2013)
o ABCs of psych
 Set of psychological traits and mechanisms within the individual that are organized and relatively enduring and that influence his or her interactions with, and adaptations to, the intrapsychic, physical, and social elements (Buss & Larsen, 2010)

Lecture: October 10, 2018
Important when making validity tests
 Reliability of the scale
 Scoring system

Traits and Characteristics
 Traits are relatively stable
o Universal descriptors of people, e.g. OCEAN
 Unique characteristics
o Each person is unique

Levels of Analysis in Personality
 Human nature
o Common amongst people
 Individual and group differences
 Individual uniqueness
o Study of the whole person

Nomothetic or Idiographic Approach
 Nomothetic
o Human nature and (individual and group) differences use the nomothetic approach
o Quantitative studies
o Big samples
 Wanting to generalize to the population
 Idiographic
o Individual uniqueness uses the idiographic approach
o Qualitative studies
o Small samples
 e.g. case studies

Data Sources
 Self-report data
o Interviews, responses to non-standardized questionnaires
 Observer-report data
 Test data (standardized tests)
o Intelligence tests
o Personality tests with a normative sample
 Comparing yourself to the rest of the population
 Life outcome data
o Achievements of people, e.g. grades

Issue: Links among various data sources

Paradigms in Personality
 Evolutionary
 Biological
 Dispositional/trait
 Psychodynamic
o Freudian
o Neo-Freudian
 Behaviorist
 Cognitive
 Social cognitive
 Cultural
 Humanistic
 Existential
 Positive
 Life story
o A person is an actor, author, and agent of his or her own life
 SP

Humanistic: "you tell me what's wrong with you"

Person: sum of experiences

Projective tests: stimuli that can elicit whatever is unconscious in the person, so we interpret the responses

Assessment consists of
 Mental status
 Cognitive
 Intellectual
 Emotional
 Social
 Personality

Assessment: measurement according to a goal
 (personality) to describe a person
 (clinical assessment) for diagnosis and a treatment plan
 (industrial-organizational) for hiring, classifying people, promotions
 (education) to check performance against the expected learning outcomes of a course

General Steps in Assessment
1. Identify the problem to be addressed
a. Translate requests for consultation into questions that can be meaningfully answered
b. Can you answer the question/s within the time allotted?
2. Select and implement methods for extracting the information needed
3. Integrate sources of information around the original purposes
4. Report opinions and recommendations

Step 1: five general types of referral questions
 Description/formulation of current behaviors
 Causes of the behaviors observed
 Changes that can be anticipated in these behaviors over time
 Ways in which these primary patterns may be modified
 Patterns and areas of deficit
o Usually those with cognitive deficits

Step 2: Select and implement methods
 Assessment assumes that the test environment and the associated behaviors constitute representative samples of the external environment and of the concomitant responses in those environments
o e.g. a child with a cognitive deficiency being tested should behave the same as he or she would in school
 Test environments
o Structured/ambiguous
o Internal/external experience
o Stress on respondents
 Determine methods
o Testing
o Observation
o Interview
o Artifacts/records/archives
 Examine tests and tools
o Scaling methods used
o Measurement sensitivity
o Measurement specificity
o Availability of normative data
 Date when the normative sample was taken (outdated norms)
 Race, educational attainment, socioeconomic status
o Reliability of observations
o Validity of observations

Step 3: Integrating Sources of Information Around the Original Purpose
 Purpose is to create a general personality profile
 Report template (Methods | Summary):
o Approach, reliability, validity
o Procedure
o Aspect 1
o Aspect 2
o Aspect 3
o Recommendations
 Example (clinical assessment), Methods | Summary:
o Approach, reliability, validity
o Procedure
o Cognitive function and ideation
o Affect/mood/emotional control
o Conflict areas
o Recommendations

Step 4: Report opinions and recommendations
 Sample outline of a psychological report (clinical assessment)
o Identifying information
o Referral question
o Assessment procedures
o Background
o Summary of impressions and findings
o Diagnostic impressions
 We won't have this
o Recommendations
 Sample outline of a psychological report (personality assessment)
o Identifying Information
 Name – can use a pseudonym
 Sex
 Age
 Ethnicity
 Date of evaluation
 Referring practitioner
o Goals in and Purpose of Assessment
 What are the goals of the assessment?
 What is the purpose of the assessment?
 What is the context of the assessment?
 What is the primary paradigm you are using?
 What can you say about the person? Who is the person?
 If any, how can they address their issues/problems/concerns?
 How can they develop further?
 *can use references
o Assessment Procedures
 Describe how you gathered information for the question
 Report deviations from standard procedures
 e.g. do tests (reliability, validity)
 e.g. normative sample is Western
 A statement of the probable reliability/validity of the conclusions
 e.g. if the client isn't cooperating, doesn't open up
o Background
 Information relevant to clarifying the assessment goals and purpose
 "Client is a BS Psychology student"
 Who is your client?
 Information sheet
 How they experience stress in acads, how they can tolerate the stress
o Summary of Findings
 What are the aspects of the person that you measured/observed?
 What can you say about the person based on the results?
o Conclusion and Recommendations
 Were you able to satisfy the assessment goals? How?

Intelligence Test
Personality Test
Interview – Transcription
Method/Test/Tool of your choice (depending on paradigm)

Reflection Paper
A. For each test that I will ask you to take:
a. What are the results?
b. What can you say about the test? How did you take the test?
c. What can you say about the results? Do you agree/disagree with the results? Provide supporting evidence.
B. Describe yourself.
a. i.e. Who are you?
C. How can you develop further?
D. Describe and explain how you view a person. You may align this with a framework/paradigm in personality.

*the flow doesn't have to follow the order above, just make sure it's coherent
Lecture: October 17, 2018

Issues in Assessment
1. Professional Issues
a. Theoretical concerns
b. Adequacy of tests
c. Actuarial vs. clinical prediction
i. Big data vs what happens in practice
ii. Sample size concerns
2. Moral Issues
a. Human rights
b. Labeling
c. Invasion of privacy
3. Social Issues
a. Dehumanization
i. Tendency to see data as numbers only
b. Usefulness of tests
c. Access to psychological testing services

Belmont Principles
Tuskegee Studies
1. Respect for Persons
a. Recognize them as participants, not subjects
2. Beneficence
a. [Right to] withdraw
3. Social Justice
a. No physical abuse or psychological harm

Ability Testing
Can measure
 Intelligence
 Achievement
 Aptitude
o Potential to learn a new skill
o Career assessment

Trends in Intelligence Testing
 Psychometric: examine the properties of a test through evaluation of its correlates and underlying dimensions
 Information-processing: examine the processes that underlie how we learn and solve problems
 Cognitive: how humans adapt to real-world demands
 Issue: correlation between socioeconomic background and scores on standardized intelligence tests

Models of Intelligence
 Binet: intelligence as the capacity (1) to find and maintain a definite direction or purpose, (2) to make the necessary adaptations to achieve that purpose, and (3) to engage in self-criticism so that the necessary adjustments in strategy can be made
o Age differentiation
 Different standards for interpreting results for different age groups
o General mental ability
 Intelligence is not specific
 Spearman: intelligence consists of one general factor (g) plus a large number of specific factors
o Contribution: started the single score (IQ)
 The gf-gc theory of intelligence: human intelligence can be conceptualized in terms of multiple intelligences rather than a single score
o Countering Spearman's ideas
o Two types of intelligence
 Fluid (gf): abilities that allow us to reason, think, and acquire new knowledge
 Application
 Crystallized (gc): acquired knowledge and understanding
 e.g. bio stuff
 Implications for the models?

Testing Considerations
 Goal/s of the assessment
 Capacities of the test user
 Characteristics of the test-taker
o Age
o Gender
 Nature and psychometric properties of the test
 Availability of updated norms
 Deviations from standard procedures

Qualities for Users of Psychological Tests
Test user: the person administering the test
Their knowledge and skills:
 Psychometric principles and statistics
 Selection of tests in light of their technical qualities, the purpose for which they will be used, and issues related to the cultural, racial, ethnic, gender, age, linguistic, and disability-related characteristics of examinees
 Procedures for administering and scoring tests, as well as for interpreting, reporting, and safeguarding their results
 All matters relevant to the context – whether it be employment, education, career and vocational counseling, health care, or forensic

Rights of Test Takers
 The right to receive an explanation prior to testing about the purposes of testing, the tests to be used, whether the test results will be reported to them or to others, and the planned uses of the results. If test takers have a disability or difficulty comprehending the language of the test, they have the right to inquire and learn about possible accommodations
 The right to know if a test is optional and to learn of the consequences of taking or not taking the test, fully completing the test, or cancelling the scores
 The right to receive an explanation of test results within a reasonable time and in commonly understood terms
 The right to have test results kept confidential to the extent allowed by law

Responsibilities of Test Takers
 Responsibility to read and/or listen to their rights and responsibilities
 Responsibility to ask questions prior to testing about why the test is being given, how it will be given, what they will be asked to do, and what will be done with the results

PAP Code of Ethics
Principles
1. Respect for the dignity of persons and peoples
2. Competent caring for the well-being of persons and peoples
3. Integrity
4. Professional and scientific responsibilities to society

General Ethical Standards and Procedures
1. Resolving Ethical Issues
2. Competencies
3. Human Relations
4. Confidentiality
5. Advertisements and Public Statements
6. Records and Fees

Ethical Standards and Procedures in Assessment
 Bases for Assessment
 Informed Consent
 Assessment Tools
 Obsolete and Outdated Test Results
 Interpreting Assessment Results
 Release of Test Data
 Explaining Assessment Results
 Test Security
 Assessment by Unqualified Persons
 Test Construction

Informed Consent
 We gather informed consent prior to assessment of our clients, except in the following instances
o When it is mandated by law
o When it is implied, such as in routine educational, institutional, or organizational activity
o When the purpose of the assessment is to determine the individual's decisional capacity

Friday, October 19, 2018

Five Factor Theory
 Influenced by the genetic make-up of a person
 Biology  traits  characteristic adaptations ( behaviour)
 Culture  characteristic adaptations
 Culture  behaviour
 Behaviour  culture

OCEAN
 Openness to experience
o Fantasy, aesthetics, feelings, actions, ideas, values
 Conscientiousness
o Competence, order, dutifulness, achievement-striving, self-discipline, deliberation
 Extraversion
o Warmth, gregariousness, assertiveness, activity, excitement-seeking, positive emotions
 Agreeableness
o Trust, compliance, altruism, straightforwardness, modesty, tender-mindedness
 Neuroticism
o Anxiety, hostility, depression, self-consciousness, impulsiveness, vulnerability

The Q-Sort Technique
 Challenge posed by Carl Rogers: how do therapists know that their treatment is effective?
 Developed by William Stephenson
 Can examine congruence of the self-concept and ideal self in various constructs such as parent-child attachment, defense mechanisms, temperament, and strength of romantic relationships

Make an assessment plan.
1. Describe your framework and assessment goal. Specify the focus of the assessment, if any.
2. Provide a brief background of your client.
3. Define the different aspects of the person that you will be looking into. This must be aligned with your framework and goal/s.
4. Determine the methods and tools that you will be using for the assessment.
a. You are required to use two tests. They can be from the Assessment Lab or from the internet. If from the internet, they should be "psychometrically sound"; indicate the details of the test in the report.
b. Aside from the tests, interview, and information sheet, you are required to do at least one other method or use another test.
5. Show your project timeline.

Cognitive function and ideation
Affect/mood/emotional control
Conflict areas
Intra- and interpersonal strategies

Other Types of Tests
 16PF
 VIVO Questionnaire
 Q-Sort Technique

Spearman's rho
 Also known as Spearman's rank correlation
 rho = 1 – 6ΣD² / [k(k² – 1)]
 D is the difference in ranks
 K is the number of items
 If rho is positive: you see yourself as similar to the way you would like to be
o A value closer to 1 shows greater congruence
 If rho is negative: you see yourself as dissimilar to the way you would like to be
o A value closer to -1 shows greater incongruence
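The Spearman's rho recipe above, applied to a hypothetical Q-sort comparison of self vs ideal-self rankings (using the standard rank-correlation formula rho = 1 − 6ΣD²/(k(k²−1)), with untied ranks assumed):

```python
# Sketch of Spearman's rho for self/ideal-self congruence; the ranks are made up.
self_ranks = [1, 2, 3, 4, 5, 6]   # how the person ranks items for "who I am"
ideal_ranks = [2, 1, 3, 4, 6, 5]  # ranks for "who I would like to be"

k = len(self_ranks)
d_squared = sum((s - i) ** 2 for s, i in zip(self_ranks, ideal_ranks))
rho = 1 - (6 * d_squared) / (k * (k**2 - 1))
print(rho)  # about 0.886: close to +1, i.e., high self/ideal-self congruence
```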
Lecture: October 26, 2018

Interview
 Structured
o Directive
o Narrow and restricted
o May be based on a document and/or interview guide
o More reliable (stable/consistent)
 Unstructured
o Nondirective
o Broad and unrestricted
o "Why don't you tell me a little bit about yourself?"
o Can provide information that other sources cannot provide

Possible Sources of Error
 Halo effect
 General standoutishness
o Interpreting the client's responses based on one outstanding characteristic
 Cross-ethnic, cross-cultural, cross-class differences

Guidelines for Job Interviews
 Increase interviewers' motivation to form an accurate impression  structured; panel
 Focus attention on the interviewee  note-taking; longer time
 Focus attention on information predictive of job performance  structured
 Involves mutual interaction  reciprocal nature
 Participants affect each other  the good interviewer:
o Knows how to provide a relaxed and safe atmosphere through social facilitation
o Remains in control and sets the tone
 Attitudes related to good interviewing skills: warmth, genuineness, acceptance, understanding, openness, honesty, and fairness
 Important in the patient's evaluation: perception of the interviewer's feelings

Avoid
 Judgmental or evaluative statements
 Probing statements
 Hostile responses
 False reassurance
 "Why" questions
o E.g. "Why did you yell at him?"
 "Tell me more about what happened."
 "How did you happen to yell at him?"
 "What led up to the situation?"
o E.g. "Why did you say that?"
 "Can you tell me what you mean?"
 "I'm not sure I understand."
 "How did you happen to say that?"
o E.g. "Why can't you sleep?"
 "Tell me more about your sleeping problem."
 "Can you identify what prevents you from sleeping?"
 "How is it that you are unable to sleep?"

Interview: Question Types
Close-ended                           Open-ended
Do you like sports cars?              What kind of cars do you like?
Do you like baseball?                 What kinds of recreational activities do you prefer?
Are you having a problem?             Can you tell me about your problems?
Is your father strict?                What is your father like?
Would you like to vacation in Hawaii? What are your favorite vacation spots?

Interview
 Evaluation interview
 Structured clinical interview
 Case history interview
 Mental status examination

Observation
 Is it better to measure using a scale or to observe actual behaviours?
 Is it participant observation?
 Is it systematic observation?
 Are you going to use an observation checklist?
 Some guidelines:
o Be respectful.
o Be subtle.
o Be prepared.