
Chapter 6 Attention, Lecture 7

This chapter deals with some aspects of the psychology of attention, whereas the next lecture will
focus on the research on attention with more of a neuroscientific background. However, both are of
course complementary.
The first question we should ask is why we should study attention (beyond the fact that it's
interesting). We can see that we need attention, for example to get through work, or studying.
Attention here pertains both to not being distracted and to visual search, which we will talk about
and for which we likewise need attention.
Attention research has great relevance for other aspects of life, and for designing it. Attention
research has contributed substantially to the ban on mobile phone use when driving (when I was
young, you were still allowed to speak on the phone and drive!).
Attention research has its roots in one of the biggest influences on research in psychology, which is
related to warfare. One task which spurred research into attention was that of operators who had to
monitor radar for missiles or enemy submarines. This task is very monotonous; it basically consists
of looking at a screen and paying attention to beeps. This started the research into the human ability
to remain vigilant, or to maintain sustained attention. The other aspect of cognition we'll touch upon
in the framework of attention is that our cognitive processing power may be limited; we call this
limit capacity.
Remind yourself of the processes we talked about in the introduction when putting ourselves in the
shoes, or seat, of a lifeguard.
Some of the aspects we came up with are that attention entails maintaining awareness of the task and
of the visual scene (that's again related to sustained attention), but also the ability to filter out
irrelevant stuff and focus on the important bits.
Before we get to this filtering and focussing let's look again at sustained attention. This was one of
the concepts we talked about first: your ability to sustain attention for the entire length of the
lecture. We said at the time that ways to help you stay attentive are taking breaks (so that the
periods for which you have to be attentive don't get too long), but also making the lecture less
monotonous and keeping you engaged. In many ways, the issues we're facing in a lecture situation
are similar to the issues faced by radar controllers. Except that they cannot use the tricks we do to
make the task easier.
Rather than sustained attention, our lecture will focus on the selection components of attention. This
is one of the biggest topics in attention and it feeds back into perception (knowing about the
modularity of the visual system will help you), and leads to cognitive control.
One difference between types of attention is whether attention is controlled endogenously or
exogenously. The classic task here is the Posner paradigm. Participants have to fixate in the middle of the
screen. They then get a cue, which tells them where a target is likely to appear. They mustn't ever
look away from the fixation cross in the middle, but they are required to pay attention to the side of
the screen which is indicated by the cue. The target will probably appear on the cued side of the
screen, this is why it is useful to utilize the cue. If this cue appears on the side of the screen where
the target appears, we call this exogenous cueing (exo usually means something comes from the
outside). If the cue appears in the middle instead, and its shape or form tells people where the target
will probably appear, that's called endogenous cueing. One has to voluntarily orient one's attention
to the relevant side of the screen, rather than its being drawn to that side by the exogenous cue.
(Endo usually means coming from the inside; compare to exogenous and endogenous ligands!) If
you want, you can make a distinction here between attention being controlled from the outside,
following a salient cue, and attention being controlled from the inside, following an interpretation of
the central cue.
To give you the full picture of the Posner paradigm, the trick is that the cue is sometimes invalid;
that means that the cued side is not the side where the target will appear. This is true for both the
endogenously cued variant and the exogenously cued variant. Look at the figure: what do you think,
which performance measure is shown here? And what does it tell us about what happens in the
Posner paradigm (which cognitive processes can we infer)?
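To make the logic of the cueing benefit concrete, here is a toy simulation (the reaction times, the noise level, and the 80% cue validity are illustrative assumptions, not data from Posner's experiments): valid cues let attention arrive at the target location in advance, so responses there are faster, while invalid cues carry a reorienting cost.

```python
import random

def posner_trial(valid, rng):
    """Simulate one trial: responses to targets at the attended (cued)
    location are faster; invalid cues add a reorienting cost.
    All numbers are illustrative, not empirical."""
    base_rt = 350 if valid else 410   # ms
    return base_rt + rng.gauss(0, 30)  # trial-to-trial noise

def run_block(n_trials=1000, p_valid=0.8, seed=0):
    """Run a block of trials with mostly valid cues; return mean RTs."""
    rng = random.Random(seed)
    valid_rts, invalid_rts = [], []
    for _ in range(n_trials):
        valid = rng.random() < p_valid
        (valid_rts if valid else invalid_rts).append(posner_trial(valid, rng))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(valid_rts), mean(invalid_rts)

valid_mean, invalid_mean = run_block()
print(f"valid: {valid_mean:.0f} ms, invalid: {invalid_mean:.0f} ms")
```

The validity effect, the RT difference between invalid and valid trials, is the performance measure the figure is getting at: it only arises if the cue really shifted attention before the target appeared.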
So, this was an initial example of selection. You can ask why we need to select at all; it means
that some stuff won't reach our awareness, and we may be missing something important! One
proposal is that there is a capacity limit on the processing power of the brain. Some computer
models suggest, for example, that it's just impossible to have all the neurons that would be needed
to convey the information.
You could think of capacity as a bucket: at a certain stage, you just can't fill it any further. I like
the idea of a processor better: at some stage, all the bits are engaged in coding for something, so
they just cannot process and convey any more information.
A long debate circles around the question of where selection occurs. The core concept is that of a
bottleneck. The bottleneck is the filter: only some things can pass through, everything else is lost
(?). The question was where the bottleneck sits: directly after sensory coding (perception) of all
stimuli, so that only limited content is semantically processed and then enters working memory?
That was Donald Broadbent's theory. This early bottleneck, because it doesn't even semantically
process the unselected input, has to select input based on physical properties. This is called early selection.
The contra-position is late selection, proposed by Deutsch & Deutsch. They argued that inputs are
processed semantically and then selected based on their semantic features.
Let's look at the evidence! A core task used to research these matters was dichotic listening, a task
developed by Cherry in the 1950s. What happens? The participants wear headphones. Each
headphone plays them a separate auditory stream, for example different voices saying different
(coherent or incoherent) sentences. In the shadowing task, participants are instructed to repeat the
content of one stream but ignore the other. We measure how many mistakes the participants make,
or what they can remember from the attended and unattended stream.
What do we find in the original paradigm? Participants can report the meaning and the physical
properties of the attended stream. For the unattended stream, participants were found to be able to
report physical characteristics, but not the meaning of what was being said. Does this finding favour
early or late selection theories?
The answer is that these results were interpreted to favour early selection: the semantic meaning of
the unattended stream is not accessed, only the physical properties. (You can already see here that
the bottleneck drawing I gave you earlier is a simplification: something which is processed before
being filtered out has to trickle through to memory, otherwise participants couldn't even report the
physical characteristics.)
Now, this task has been used again and again, sometimes yielding evidence in favour of early
selection, sometimes in favour of late selection. Moray found that participants pick up their name in
the unattended stream. Because your name is semantically connected to you, this was taken as an
argument in favour of late selection. That's the cocktail party effect.
You may already know this from other classes, the 1970s were quite big on using electric shocks
and all sorts of other unpleasant manipulations. So, unsurprisingly, we find a variant of this
experiment using electric shocks; but it's an interesting one: Corteen & Dunn played lists of words
to participants. Some of the words were regularly paired with an electric shock; so, for example,
every time participants heard the word 'Ox' they would also receive an unpleasant, yet not dangerous,
electric shock. When the authors then tested the participants on the dichotic listening task and
measured their galvanic response (skin conductance), they found that words associated with electric
shocks led to an increase in skin conductance, even though they were presented on the unattended
channel and participants remained unaware of them. So participants had a stress response to words
they couldn't say they'd heard, because these words had been paired with an unpleasant event in the
past. This was taken as proof of late selection. Do you agree?

Despite the fact that we're in the cognition lecture, I'll show you a bit of neuropsychological
evidence as well. Damage to certain brain areas leads to hemispatial neglect. That means that
patients begin to ignore everything that happens/exists in their left hemifield (we usually see
neglect associated with injuries to the right side of the brain). This may even go so far that patients
start to attend only to the right side of their body. When you ask them to copy an image, they may
copy only the right side of it; if you ask them to mark the middle of a line, they may place their
marker too far to the right. Now this is taken as evidence for early selection, as their selection of the
right side does not seem to be related to semantic evaluation.
But look at this:
You can see a drawing of a house that you probably wouldn't want to live in. And you certainly
know why. As you can see from these interview cutouts, the patient didn't recognize that there was
any difference between the houses. However, when he was asked to choose which house he wanted
to live in (a random choice, as far as he was concerned), he nearly always picked the house that
wasn't on fire. This indicates that he may
have semantically evaluated the images, but it never reached his awareness. This is taken as an
example for late selection.
You can see that this debate has been going on for a while, and that one can find evidence for either
idea. That may mean that we're asking the wrong question. Anne Treisman certainly thought so, so
she came up with a different idea:
The idea is that no real bottleneck exists, but that an attenuator limits how much information
reaches semantic processing. Non-attended input is passed on, but weakened by the attenuator.
Attenuated stimuli can nevertheless break through to awareness, especially when they are
emotionally important, biologically relevant, or preset. Generally, there can always be a
breakthrough for any of these reasons. This model, like all models, faces criticism. The most
unsatisfactory bit is that it is mechanistically not more advanced than the previous model. What
switches on the attenuator? How does presetting happen? What happens to the attenuated stimulus?
What aspects of the attenuated stimulus become attenuated, and for what reason?
A more modern development is hybrid models. Especially Nilli Lavie's work has been very
important here. She proposed that the brain can experience more or less load; that is, it has a
capacity which can be taken up by this load. There are two different kinds of load: perceptual and
cognitive. And both limit what reaches awareness. If we're operating under high load, selection of
relevant stimuli happens early (because early selection is somehow easier). This means the
unselected input will not be semantically processed. If we're operating under low load, we have
more resources for stimulus processing and selection happens later.
In these experiments, Lavie and her co-workers showed (a) that having a demanding task results in
missing relevant stimuli more often than having a non-demanding task (cognitive load), and (b) that
having a crowded, complex visual display likewise leads to missing a stimulus more often than
having a display which is easier to process (perceptual load). The concept of a demand-specific
filter is attractive, but it doesn't explain everything. Think about the unaware semantic processing
of unattended stimuli: what can load theory say about that?
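A minimal sketch of the capacity idea behind load theory (the numbers and the function are purely illustrative assumptions, not part of Lavie's model): whatever capacity the current task does not consume spills over to unattended input, so distractors are processed only when spare capacity remains.

```python
def spare_capacity(capacity=10.0, task_load=0.0):
    """Toy capacity model: the task consumes capacity first, and only
    what is left over spills onto unattended input. Returns the
    capacity available for processing a distractor (never negative)."""
    return max(capacity - task_load, 0.0)

# Low load: plenty of spare capacity, so the distractor gets processed
# (selection is effectively late).
low_load_spillover = spare_capacity(task_load=3.0)

# High load: the task exhausts capacity, nothing is left for the
# distractor (selection is effectively early).
high_load_spillover = spare_capacity(task_load=10.0)

print(low_load_spillover, high_load_spillover)  # 7.0 0.0
```

The point of the sketch is that early versus late selection is no longer a fixed property of the system, but falls out of how much of the capacity the current task consumes.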

Next topic: how, and to what, can we allocate attention? Thinking back to the lifeguard, it seems
clear that we can allocate attention in space: we can pay attention to the part of the visual field that
is relevant for our task.

But in fact, we can pay attention to space without visually exploring the space. Hermann von
Helmholtz was one of the first people to experimentally test perception and action. In fact, he built
an apparatus to test the idea that we can attend to a location in space without directly looking at it.
This is called covert spatial attention. By the way, Helmholtz was also the first person to suggest
that perception relies on predictive processes and that the brain has to interpret input to arrive at a
decision as to what it is perceiving.

An experiment which involves participants' allocation of attention in space is the Posner paradigm
(we've covered this at the beginning). So spatial attention can be exogenously and endogenously
cued, meaning it can be drawn to a location by virtue of a salient input, or directed in a top-down
fashion. And we've also seen this earlier! In Hillyard's task, participants kept fixating the middle
dot; in fact, if they had moved their eyes, this would have messed up the EEG recordings. But early
neural coding for attended stimuli was still affected by this internal allocation of attention in space.
Space is not the only thing that we can pay attention to.
We can also pay attention to objects. This is an experiment by Rees and colleagues. They made
participants pay attention either to words on a screen, or to the objects that were presented in the
same location. Their task was to react to a repetition in the relevant dimension (word or object).
Using fMRI, the authors showed that areas in the brain associated with object processing are more
activated when participants do the object task than when they do the word task. This is
taken to show that a) attention to objects is possible and b) it increases processing for these objects.

Stroop task: pay attention to / process one specific feature (we need control for that, cf. cognitive
control).

In many ways, this raises the question: what is an object? Feature integration theory (Treisman &
Gelade) posits that objects are made up of different features, i.e. colour and shape, and that these
features have to be combined to process the object. (That's the binding problem in perception.) We
can clearly see this in visual search paradigms: if items differ along only one feature, they're easy to
spot (feature search), but if two features define the object, we need attention to glue the
representations of the features together to find it. This is the depiction of the hierarchical system
underlying feature integration: attention has a spotlight function, which it throws upon the modules
representing the relevant features (i.e. colour and shape) and glues these representations together
(representation of green colour and of square shape) to then find the object. We'll come back to
feature integration theory when we talk about biased competition.
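The contrast between feature search and conjunction search can be sketched as a toy model (the per-item time and the serial search assumption are illustrative, not parameters from Treisman & Gelade): feature targets pop out regardless of display size, whereas conjunction search has to visit items one by one to bind their features.

```python
import random

def search_rt(set_size, conjunction, rng):
    """Toy model of visual search (illustrative parameters).
    Feature search: the target pops out, so RT is flat across set size.
    Conjunction search: attention inspects items serially, so RT grows
    with set size (on average about half the items are checked)."""
    base = 400.0                        # ms of non-search overhead
    if conjunction:
        per_item = 40.0                 # ms to bind features per item
        base += per_item * (set_size + 1) / 2
    return base + rng.gauss(0, 20)      # trial noise

def mean_rt(set_size, conjunction, n=500, seed=1):
    rng = random.Random(seed)
    return sum(search_rt(set_size, conjunction, rng) for _ in range(n)) / n

for n_items in (4, 16):
    print(n_items, round(mean_rt(n_items, False)), round(mean_rt(n_items, True)))
```

The signature result this mimics: plotting RT against set size gives a flat line for feature search but a rising line for conjunction search, which is the evidence that attention is needed to glue features together.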
Of course space and objects can interact: Searching for people in a scene (people are objects), we
scan those locations where they are most likely to appear first.
Lastly, here is one dimension in which we can pay attention that is easily forgotten: we can pay
attention in time. Give me examples! And if you ever consider going into research, you can do
dissertation projects and possibly PhDs with me on this topic.

Let's bring it all together and start speaking about the brain. We've seen that attention involves
selection and that there are many different theories, especially with regard to where selection occurs
and when it occurs. Without speaking about the brain and neural processes, it's hard to find a really
satisfying answer to either of these questions, and the question of how is something we haven't
really addressed, have we? We've also seen that attention may be necessary to define what an
object is, especially in visual search!

Maybe put Duncan & Desimone into the brain lecture altogether?

Now imagine you look at a scene with green and red items. The part of your brain which codes for
colour will signal that there is red stuff and green stuff out there. Which neurons should win and
reach your awareness, so that you allocate your attention towards what they represent? Well, it
depends on what you're looking for: something red or something green?
At the same time, there are round and square elements: neurons in your visual system representing
curvature will be active, but so will those that correspond to the angular lines of the squares. Again,
which neurons should fire more strongly and make your mind ultimately aware of the shape they
are representing? Again, it probably depends on what you're looking for.
Biased competition is a model by Duncan and Desimone, which may be the best explanation we
have for the binding problem (see above), or for selective attention, especially in visual search.
Selective attention is a top-down process which selectively helps the neurons which represent
something we're looking for to fire more strongly, so that whatever they represent gets through to
our awareness. If we're looking for a square, green object ...
Explain
Finish lecture
