
Define three heuristics that are commonly used in making judgements and provide concrete examples for each

There are 3 different types of heuristics commonly used in decision making: the representativeness heuristic, the availability heuristic and the familiarity bias.

The representativeness heuristic is when our judgements are based on how well an outcome fits our common sense and what we deem reasonable and likely to happen: the better it fits, the more likely we are to choose it as our decision. An example is a coin flip. Even though the probability of getting a specific sequence of 3 heads and the probability of getting a specific sequence of 2 heads and 1 tail are exactly the same, 0.5³ = 0.125, participants still judge 2 heads and 1 tail as more probable because it fits better with their common sense of what a random sequence looks like.
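The equal probabilities can be checked by enumerating every ordered outcome of three fair flips (a minimal sketch; the 0.5³ figure refers to any one specific ordered sequence):

```python
from itertools import product

# Enumerate all 2**3 = 8 equally likely ordered sequences of three fair flips.
sequences = list(product("HT", repeat=3))

# Probability of any one specific ordered sequence, e.g. HHH or HHT.
p_specific = 1 / len(sequences)   # 0.5 ** 3 = 0.125

p_hhh = sum(s == ("H", "H", "H") for s in sequences) / len(sequences)
p_hht = sum(s == ("H", "H", "T") for s in sequences) / len(sequences)

print(p_hhh, p_hht)  # 0.125 0.125 -> HHT is no more likely than HHH
```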

The availability heuristic is when our judgements are based on how easily examples come to mind. An example task is asking which is more common: words beginning with C, or words with C as the third letter. Most participants answer words beginning with C, because such words are more available to them and they can think of more examples of that kind.

The familiarity bias is when our judgements are based on how familiar the examples are to us. An example is a study by McKelvie (1997) in which participants were given a list of 12 famous men's names and 14 ordinary women's names and were asked which there were more of. Participants answered that there were more men's names, because the famous male names were more familiar to them.

Imagine you want to buy lunch but you realise you have no money. Using the cognitive approach to problem solving, describe the steps you'd take to solve the problem, using terms like the operator.

In the cognitive tradition, problem solving consists of searching through a problem space defined by two states: the initial state (no lunch money) and the goal state (lunch bought). Solving the problem means moving from the initial state to the goal state by applying operators, the actions that transform one state into another. In many problem-solving situations, heuristics are used: they are quick but not always correct ways of solving problems. One heuristic is means-end analysis, in which an intermediate subgoal is set up to reduce the difference between the current state and the goal state. In the case of having no money to buy lunch, the subgoal is to obtain money, and an operator that achieves it is asking whether there is anyone you can borrow money from, which can be a success or a failure depending on whom you ask.
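The steps above can be sketched in code. The states, operator names and success test below are illustrative assumptions made up for this example, not part of any specific model in the literature:

```python
# A minimal sketch of the problem space for the no-lunch-money example.
initial_state = "no money, no lunch"
goal_state = "lunch bought"

# Operators transform one state into another; applying one moves us
# through the problem space toward the goal.
operators = {
    "ask friend to lend money": lambda s: "have money, no lunch"
        if s == "no money, no lunch" else s,
    "buy lunch": lambda s: "lunch bought"
        if s == "have money, no lunch" else s,
}

def means_end_analysis(state, goal, ops):
    """Repeatedly apply any operator that changes the current state,
    treating each change as progress toward the goal (subgoaling)."""
    path = []
    while state != goal:
        for name, op in ops.items():
            new_state = op(state)
            if new_state != state:      # operator applicable: progress made
                path.append(name)
                state = new_state
                break
        else:
            return None                 # dead end: no operator applies
    return path

print(means_end_analysis(initial_state, goal_state, operators))
# ['ask friend to lend money', 'buy lunch']
```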

What is a double dissociation? Using the distinction b/t STM and LTM, illustrate what is meant by a double dissociation using 2 different examples
A double dissociation is when one variable affects STM but not LTM, while another variable affects LTM but not STM. The relation between the two stores is that through rehearsal information in STM is transferred to LTM, and through recall and retrieval information in LTM is brought back into STM. In a standard word-memorising (free recall) task, the serial-position curve is U-shaped, showing both a primacy effect (LTM) and a recency effect (STM). In the distractor version of the task, participants are given a separate task before recall, such as counting backwards; the results show that the recency effect (STM) is eliminated but the primacy effect (LTM) is not. The second example is varying the speed of presentation of the words: a longer delay b/t words improves accuracy for the early items (the primacy effect), because participants can rehearse each word before the next is presented, while the recency effect is unaffected. Together these two manipulations form a double dissociation b/t STM and LTM.

Define each type of LTM. Be sure to indicate how they are related
to each other and provide an example of each. Also indicate their
putative relations to awareness.

There are 3 types of LTM: episodic LTM, semantic LTM and procedural LTM. Procedural LTM is the most basic of the three and is linked to anoetic (non-knowing) awareness; it is what lets us remember how to do specific things such as riding a bike or playing the piano. Semantic LTM is linked to noetic (knowing) awareness and stores all factual and linguistic information. The last type, episodic LTM, is the most complex of the three and is connected to autonoetic (self-knowing) awareness; it stores information about specific events.

Explain the difference b/t episodic and semantic LTM, using examples.

One of the key differences b/t episodic LTM and semantic LTM is the nature of the information each stores. Episodic LTM stores information about specific events, which are date- and time-specific, whilst semantic LTM stores general factual and linguistic information. Another difference is their development and decay: semantic LTM develops in the earlier stages of childhood, whilst episodic memory develops at a later stage, which could explain why we can't remember events from early childhood. Episodic memory also decays at an earlier stage than semantic memory. Two examples that show a double dissociation b/t episodic and semantic LTM, and so reflect these differences, are amnesia and semantic dementia. Amnesic patients have a damaged hippocampus and are not able to remember specific events; despite that, their semantic memory is intact, and they are able to speak normally and remember factual details. In contrast to amnesic patients, patients diagnosed with semantic dementia have a damaged anterior temporal lobe, which leaves them unable to remember linguistic or factual information even though they are still able to remember specific events and the date and time at which they happened.

Explain the two classical studies of STM and the theoretical problems behind these two studies.

One of the classical studies was Miller (1956), which examined the capacity of short-term memory (STM); the other was Brown (1958) and Peterson and Peterson (1959), which studied the rate of decay in STM.

Miller's (1956) study showed that the capacity of our STM is 7 +/- 2 items. This can be improved by chunking, which is grouping related details together into a single unit. The main problem with Miller's (1956) study is that the capacity depends on the context of the task: in some tasks we aren't able to remember 7 +/- 2 things.
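Chunking can be illustrated with a short sketch (the digit string and the chunk size are illustrative assumptions, not from Miller's materials):

```python
# Chunking: regrouping items so fewer "slots" of STM are used.
digits = "0714202501"            # 10 single digits: beyond the 7 +/- 2 limit for many people

def chunk(s, size):
    """Group a string into fixed-size chunks."""
    return [s[i:i + size] for i in range(0, len(s), size)]

print(chunk(digits, 2))  # ['07', '14', '20', '25', '01'] -> only 5 chunks to hold
```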

The Brown (1958) and Peterson and Peterson (1959) study presented participants with three-letter combinations, and immediately afterwards participants were told to count backwards for a duration of between 1 and 20 s. The aim of the backward counting was to distract participants and force them to think about something other than the letter combination. The results showed that those who counted for a short period of time had an accurate recall rate, whereas participants who counted for a longer period had a lower recall rate, which shows that STM decays as time increases. One of the theoretical problems with this task is that Miller (1956) stated that our capacity allows more than 3 items, yet the researchers only tested 3-item stimuli.

What is word frequency and what aspect of word processing is thought to cause it?

The word frequency effect is that response times on different tasks are faster for high-frequency words and slower for low-frequency words. Results from Balota and Chumbley's (1984) study showed a frequency effect in both a naming task and a lexical decision task, with a larger effect in lexical decision. The researchers argued that the larger lexical decision effect reflects decision processes driven by the familiarity of the word rather than lexical access itself. However, Monsell, Doyle, and Haggard (1989) observed word frequency effects on both a lexical decision task and a semantic categorization task (which does require lexical access). On this basis, Monsell et al. argued that lexical identification itself is the locus of the frequency effect.

Describe an experiment that shows non-word legality

Non-word legality was demonstrated in a lexical decision experiment by Stanners, Forbach and Headley (1971). The experiment required participants to decide whether three-letter strings were actual words. The strings included CVC and CCC combinations as well as real words (C = consonant, V = vowel). Participants had a faster reaction time when rejecting CCC combinations. This is because there aren't any CCC words in English, so it is easy for participants to reject them, whereas there are real CVC words, so participants have to consider a CVC string before rejecting or accepting it.

Describe an experiment that shows the word similarity effect

The word similarity effect can be shown with a lexical decision task: deciding whether a given target is a word or a non-word. There are 3 conditions: TL (transposed letter), in which 2 of the letters in the word are swapped; SL (substituted letter), in which one letter is replaced by another; and the control condition, in which the word itself is presented as the target. The results showed that TL non-words have a longer RT when deciding the target is not a word than SL non-words do. This is because a TL non-word is more similar to the original word, so it takes longer to reject.

Describe an experiment that shows Word Superiority effect

The word superiority effect was shown in experiments by Reicher (1969) and Wheeler (1970), in which letters are identified with greater accuracy when they appear in a word than when they appear in a non-word. This is because words are more familiar to participants, giving greater accuracy and a faster RT for letters in words compared to non-words.

Define the logogen model and describe the basic phenomena that the logogen model shows

The logogen model is a first-generation model introduced by Morton (1969). A word is presented as a visual or auditory stimulus; each word has its own logogen with a specific threshold, and each matching logogen accumulates activation. When the activation reaches the threshold, the logogen fires and the word is identified. There are 2 basic assumptions of the logogen model: all of the logogens are independent and do not affect each other, and recognition is a bottom-up process in which the stimulus activates the logogen and not vice-versa. The logogen model is able to explain the word frequency effect: for high-frequency words the threshold is lower, so the logogen fires sooner, causing a lower RT. The contextual effect can be explained in a similar way: context lowers the threshold, and hence the reaction time is lower. The stimulus degradation effect can also be explained by the logogen model: degraded words are difficult to identify because they provide weaker evidence, so activation takes longer to reach the threshold, which explains the longer reaction time.
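The threshold idea can be sketched as a toy simulation. The words, thresholds and evidence schedule below are made-up numbers for illustration, not Morton's parameters:

```python
# Toy logogen model: each word unit accumulates evidence independently
# and "fires" when its activation crosses its own threshold.
# Lower threshold for the high-frequency word: it needs less evidence.
thresholds = {"the": 3, "thermos": 8}       # high- vs low-frequency word

def cycles_to_fire(word, evidence_per_cycle=1):
    """Return how many processing cycles until the word's logogen fires."""
    activation, cycles = 0, 0
    while activation < thresholds[word]:
        activation += evidence_per_cycle    # stimulus-driven (bottom-up) input
        cycles += 1
    return cycles

print(cycles_to_fire("the"))      # 3 cycles -> faster recognition (lower RT)
print(cycles_to_fire("thermos"))  # 8 cycles -> slower recognition
```

Stimulus degradation corresponds to a smaller `evidence_per_cycle`, which lengthens the time to reach the same threshold.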

Define the Frequency Ordered Bin Search (FOBS) model and the basic phenomena that the model shows

The Frequency Ordered Bin Search model organizes words in bins by frequency: higher-frequency words are at the top of the bin and lower-frequency words at the bottom. The search ends once the word is found. The frequency effect can be explained by the FOBS model because the high-frequency words at the top of the bin are accessed first. In addition, the lexical status effect can be explained because the search is self-terminating: it ends when the word is found, so non-words take longer, since rejecting them requires an exhaustive search of the whole bin. The FOBS model can also explain the repetition priming effect: the bin containing the primed word has already been opened, so its information can be accessed at a faster rate.
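The self-terminating search can be sketched as follows (the word list and its frequency ordering are invented for illustration):

```python
# Toy FOBS: words stored in one bin ordered by frequency (highest first).
# The search is self-terminating for words and exhaustive for non-words.
bin_by_frequency = ["the", "time", "house", "zebra", "quark"]  # high -> low

def lookup(target):
    """Return the number of comparisons needed to classify the target."""
    comparisons = 0
    for word in bin_by_frequency:
        comparisons += 1
        if word == target:
            return comparisons              # self-terminating: word found
    return comparisons                      # exhaustive: non-word rejected

print(lookup("the"))    # 1 comparison: high-frequency, top of the bin
print(lookup("quark"))  # 5 comparisons: low-frequency, bottom of the bin
print(lookup("blick"))  # 5 comparisons: non-word needs the full search
```

The frequency effect falls out of the ordering, and the lexical status effect falls out of the word search terminating early while the non-word search runs to the end.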

Define the interactive activation model and the basic phenomena that the
model shows
