
Whitepaper

Studying web pages


using Eye Tracking

Written by Kirk Ewing, Tobii Technology


August 2005
Abstract
Eye tracking has been investigated and “toyed with” for many years by researchers and
commercial usability professionals. There are examples of high quality results being obtained in
both of these areas using eye tracking techniques. Too often, though, these studies and
commercial ventures have been hampered by inadequate hardware and software tools, which
have limited the scope and goals of studies and made it difficult to obtain interesting and
powerful results without undue resources. Many researchers have become discouraged by the
previous generation of eye tracking solutions and branded the technique “interesting but
practically useless”.
New hardware and software tools have been developed by companies such as Tobii Technology
to overcome all of the outstanding objections to the collection of high quality eye tracking data.
Modern, customizable and relatively automated analysis tools have also been developed. These
tools do not interfere with traditional techniques and do not require many extra resources.
Many new techniques, and therefore interesting and powerful results, are now available that were
previously inaccessible. These results, available with limited additional resources, make eye
tracking a commercially viable prospect in usability testing, and in web site testing in general, for
the first time. A number of commercial and academic labs run highly successful programs today,
testing hundreds, and in some cases thousands, of subjects per year and effectively analyzing this
data to produce highly useful results.
Table of contents

ABSTRACT
INTRODUCTION
PREVIOUS WORK AND PREVIOUS PROBLEMS
NEW DEVELOPMENTS SOLVE ISSUES
   Adequate tools to record data
   New analysis tools
ADDITIONAL INSIGHTS TO TRADITIONAL TECHNIQUES
   Complementing verbal protocols
   Stunning Deliverables
   Answering typical questions
      What happened in the breakdown event?
      What do users see?
      Where should valuable content be placed?
      Do available visual cues drive users effectively?
      How effectively do users complete the task?
LIMITATIONS OF EYE TRACKING
   Data easily influenced
   They looked at it but did they really “see” it?
   How people use peripheral vision
CONCLUSION
REFERENCES
Introduction
Eye tracking as a tool to discover problems in user interfaces has been probed many times over
the last 15 years. The larger problem of trying to infer cognitive processes of a subject by
analyzing their eye movements has been studied for more than a century. Eye tracking techniques
have always been discussed as having great promise but there has been only sporadic adoption of
it as a usability tool, especially in the commercial field. Jacob & Karn (2003) talk of eye tracking
as “‘Rising from the Ashes’ rather than ‘Taking off like Wildfire’”. What has caused this to be the
case? When so many researchers and cutting-edge commercial ventures have examined eye
tracking, and in some cases used it successfully, why is it not more widespread?
There seem to be three areas which have held the field back. The first issue investigators struggle
with is hardware that is too complex and difficult to use. This has caused many studies to be
restricted in their scope and resources by the amount of effort it has taken to simply collect good
eye tracking data. The second is the mountainous task it has been to match the eye tracking data
to meaningful stimulus. This has caused investigators to use “static” or highly simplified versions
of the interfaces they are examining. This results in data and conclusions which are extremely
difficult to generalize and also makes commercial use of eye tracking highly problematic.
Software with the ability to map eye tracking data back to useful stimuli, such as application
user interfaces and web pages, is an example of the tools that have been missing. The third area is the ability to
analyze the enormous amount of data that eye tracking generates. Investigators have often
struggled to make sense of the quantity of data they have collected and draw powerful
conclusions from this.
Modern innovations resulting in new hardware and software tools seem to solve many, if not all,
of the outstanding issues holding back widespread use of eye tracking in both the research and
commercial fields. New hardware which is “plug-and-play”, accurate and extremely easy to use
allows data to be collected painlessly and new software tools allow the data that is collected to be
powerfully analyzed.

Previous work and previous problems


The use of eye tracking in usability studies has been examined many times over the last fifteen
years. Collecting and analyzing the data had previously been prohibitively difficult and time
consuming, and many investigators have struggled with difficult tools from both a software and
hardware point of view (Aaltonen, 1999; Cowen, 2001; Goldberg, Stimson, Lewenstein, Scott &
Wichansky, 2002). For a comprehensive history of eye tracking as it relates to usability, and a
discussion of these problems, see Jacob & Karn (2003) as well as Duchowski (2003).
Much of the previous research would lead an investigator to believe that eye tracking holds
promise as all these quotes arranged by Jacob and Karn (2003) attest:
• “eye tracking in human-computer interaction remains a very promising approach” (Jacob & Karn,
2003)
• “For a long time now there has been a great need for a means of recording where people are
looking while they work at particular tasks. A whole series of unsolved problems awaits such a
technique.” (Mackworth & Thomas, 1962, p.713).
• “…[T]he eyetracking system has a promising future in usability engineering” (Benel, Ottens &
Horst, 1991, p.465).
• “…[A]ggregating, analyzing, and visualizing eye tracking data in conjunction with other
interaction data holds considerable promise as a powerful tool for designers and experimenters in
evaluating interfaces” (Crowe & Narayanan, 2000, p.35).
• “Eye-movement analysis does appear to be a promising new tool for evaluating visually
administered questionnaires” (Redline & Lankford, 2001).
• “Another promising area is the use of eye-tracking techniques to support interface and product
design. Continual improvements in… eye-tracking systems… have increased the usefulness of
this technique for studying a variety of interface issues” (Merwin, 2002, p.39).
The investigator might also conclude that, although the insights available are amazingly useful, a
high quality, proven solution to painlessly collect and analyze eye tracking data does not seem to
be readily available.
Many different groups have designed their own software and/or hardware to try to overcome some
of the limitations of the available solutions (Goldberg, Stimson, Lewenstein, Scott & Wichansky,
2002; Schiessl, Duda, Thölke & Fischer, 2003; Redline & Lankford, 2001). Some of these studies
have been reasonably successful, though none seem to have developed a strong,
flexible solution. It is not apparent from the available literature that a strong hardware and/or
software solution has been available for off-the-shelf studies.
Eye trackers have traditionally been difficult to set up and run. They have often required
head-mounted cameras or other equipment. They also have been unable to track a significant
proportion of the population; 15-20% according to Jacob and Karn (2003). Others have placed
extreme limitations on head movement, often requiring chin rests or other forms of head
stabilization. Adjustments for each participant have added to the difficulty and time required to
run a study, not to mention increasing the complexity for any potential researcher.
Software that makes studies powerful and easy also doesn’t seem to have been available. Such
software solutions, which automatically and powerfully deal with normal events on web pages
such as page transitions, scrolling and the like, have only been developed in the last few years.

New developments solve issues


Adequate tools to record data
As discussed, there was a desperate need for new tools to collect data. A new generation of eye
tracking systems, such as the Tobii 1750, has recently become available; these systems are fully
automatic and measure eye gaze without any requirement for specialist technical expertise. They
provide a completely normal subject environment without any head mounted equipment or
undue restraints on subject movements. This takes the pain out of collecting eye tracking data,
and also enables eye tracking to take place in parallel with the collection of other interesting data
such as talk-aloud audio and video feeds. These solutions track 95% or more of the population
with high accuracy. Calibration is now a 30 second process with no adjustments required by the
operator. These improvements remove all of the major hurdles from a broad-based adoption of
eye tracking hardware. Commercial and academic labs are currently testing hundreds of subjects
in an extremely efficient way with Tobii eye tracking systems. The removal of the need for an
expert operator allows many more staff members to run the equipment and therefore maximizes
the efficiency with which a study can be run.
There is also a range of new software solutions that have become available which solve the
recording and analyzing issues we have already discussed. Modern software tools such as the
Tobii ClearView software solution combine collection of eye tracking data with sophisticated
logging of the contents of web pages, video feeds of user behavior and capture of screen
contents at full resolution. Straightforward analysis tools make it relatively pain-free to analyze
data and present the results in a meaningful way. These results allow an accurate estimation of
the behavior of a set of subjects to be made. From this estimation it is then possible to answer
questions that investigators and, more importantly, their customers consider critical to creating a
successful design. These conclusions are also very easy to justify using the built-in visualization or
statistical tools. Simple, powerful and fast hardware and software tools, along with answers to
important questions, all add up to an all-important Return on Investment (ROI) improvement
for the overall usability study.
One of the key benefits of solutions such as the Tobii Technology system is that they can be easily
added into a standard usability lab or test procedure with minimal additional burden. This allows
a practitioner to continue to use familiar and trusted techniques and protocols while
incorporating the additional power and insight offered by an eye tracking solution.

New analysis tools


Analyzing eye tracking data can be done in a multitude of different ways. One can use visual
tools to get a qualitative representation of the viewing of a single subject or multiple subjects.
These qualitative measures have a great deal more objectivity than traditional usability measures
as they represent the actual eye movements of subjects rather than opinions or subjective
observations. One can also use a variety of statistical measures to get a quantitative and
thoroughly objective measurement of the behavior of a group of subjects.
Using modern software solutions it is possible to use visualizations to gain clear insight into the
behavior of a single test subject. One useful tool captures the contents of the
subject's screen as they view it, so that the eye tracking data can be displayed over the stimulus
the subject was viewing. This can be done in real-time or during post-analysis. This becomes
even more useful when combined with video of the user’s expressions and audio of talk-aloud
reactions. This allows you to understand the thought process of your subject in a very real and
insightful way. This knowledge can be used either during the test sessions, to allow you to ask
better questions and direct the study more efficiently, or in post-analysis. It also provides
context for any feedback and statements you obtain from your subject.
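
As an illustration of how such a gaze overlay can be produced, the sketch below draws the most recent gaze point onto each frame of a screen recording. This is a minimal sketch, assuming a hypothetical CSV export of gaze samples (columns timestamp_ms, x, y in screen pixels) and using OpenCV for the video handling; it is not the ClearView implementation.

```python
import csv

import cv2  # OpenCV, assumed available for reading and writing the screen capture


def overlay_gaze_on_recording(video_path, gaze_csv_path, out_path):
    """Draw the most recent gaze point on every frame of a screen recording.

    Assumes `gaze_csv_path` is a hypothetical export with one gaze sample per
    row and columns 'timestamp_ms', 'x', 'y' in screen-pixel coordinates.
    """
    # Load gaze samples sorted by time.
    with open(gaze_csv_path, newline="") as f:
        samples = sorted(
            (float(r["timestamp_ms"]), float(r["x"]), float(r["y"]))
            for r in csv.DictReader(f)
        )

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    frame_idx, sample_idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_time = frame_idx * 1000.0 / fps  # ms since the recording started
        # Advance to the latest gaze sample recorded at or before this frame.
        while sample_idx + 1 < len(samples) and samples[sample_idx + 1][0] <= frame_time:
            sample_idx += 1
        if samples:
            _, x, y = samples[sample_idx]
            cv2.circle(frame, (int(x), int(y)), 20, (0, 0, 255), 3)  # red gaze marker
        writer.write(frame)
        frame_idx += 1

    cap.release()
    writer.release()
```
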
There are statistical tools that can also be used to extract certain key metrics that provide insight
into the behavior of a single user. An important metric is a sequence of the user’s fixations,
known as a scan-path. Modern software has the ability to automatically analyze eye movement
data and generate a scan-path. This identifies points on the web page that the user fixated on,
giving great insight into the user’s cognitive process.
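
To make the scan-path idea concrete, the sketch below implements a standard dispersion-threshold (I-DT) fixation filter: consecutive gaze samples that stay within a small spatial window for a minimum duration are merged into one fixation, and the ordered list of fixations forms the scan-path. The sample format and threshold values are illustrative assumptions, not the specific filter used by any particular product.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Fixation:
    x: float           # centroid x in screen pixels
    y: float           # centroid y in screen pixels
    start_ms: float    # onset time
    duration_ms: float


def detect_fixations(samples: List[Tuple[float, float, float]],
                     dispersion_px: float = 40.0,
                     min_duration_ms: float = 100.0) -> List[Fixation]:
    """Dispersion-threshold (I-DT) fixation detection.

    `samples` is a time-ordered list of (timestamp_ms, x, y) gaze points.
    Consecutive samples whose combined x/y spread stays within `dispersion_px`
    for at least `min_duration_ms` are merged into one fixation; the
    resulting list, read in order, is the subject's scan-path.
    """
    fixations: List[Fixation] = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while the points stay within the dispersion limit.
        while j + 1 < n:
            window = samples[i:j + 2]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration_ms:
            window = samples[i:j + 1]
            fixations.append(Fixation(
                x=sum(p[1] for p in window) / len(window),
                y=sum(p[2] for p in window) / len(window),
                start_ms=samples[i][0],
                duration_ms=duration))
            i = j + 1
        else:
            i += 1
    return fixations
```

Typical dispersion thresholds correspond to roughly one degree of visual angle, with minimum durations of around 100-200 ms, but both should be tuned to the tracker's sampling rate and accuracy.
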
Other tools provide an understanding of the behavior of a group of subjects using visualizations
and metrics based on total or average results from entire groups of users. Typical examples
include “hotspots” or “heatmaps” and certain statistical metrics. This can be even more
interesting when you analyze the results of two or more distinct user populations (this might be
subjects who were successful in solving a task compared to those who were unsuccessful). These
hotspot tools overlay the viewing behavior of a group of subjects onto the viewed web-pages.
Various different views are available such as total viewing time, normalized viewing time, number
of fixations (independent of fixation length) and the percentage of subjects viewing an area.
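
As a sketch of how such a hotspot view can be computed (assuming fixations from all subjects have already been mapped into common page coordinates, for example with the fixation filter sketched above), one can accumulate fixation durations into a grid aligned with the page and then colour-map the result over a screenshot:

```python
import numpy as np


def viewing_time_heatmap(fixations_by_subject, page_width, page_height, cell_px=20):
    """Aggregate fixation durations from many subjects into a coarse grid.

    `fixations_by_subject` is a list (one entry per subject) of lists of
    (x, y, duration_ms) fixations in page coordinates.  The returned array
    can be colour-mapped and alpha-blended over a screenshot of the page to
    give a "hotspot" view of total viewing time.
    """
    rows = int(np.ceil(page_height / cell_px))
    cols = int(np.ceil(page_width / cell_px))
    grid = np.zeros((rows, cols))
    for fixations in fixations_by_subject:
        for x, y, duration_ms in fixations:
            r = min(int(y // cell_px), rows - 1)
            c = min(int(x // cell_px), cols - 1)
            grid[r, c] += duration_ms
    # Normalise to 0..1 so the map can be rendered as a transparency mask.
    if grid.max() > 0:
        grid /= grid.max()
    return grid
```

Dividing each subject's contribution by their total viewing time before summing gives the normalized viewing-time variant mentioned above, while adding 1 per fixation instead of the duration gives the fixation-count variant.
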
There are also a number of statistical measures which are interesting in analyzing eye tracking
data. These include statistics about the viewing of specific, practitioner defined areas of the
interface. These areas are often referred to as Areas of Interest (AOI). Some of the statistics
which can be of interest here are the total and average viewing time, number of repeated
examinations, transition statistics between areas and many others. There are other statistics such
as scan path length, searching/fixating ratio and others which are independent of AOIs which
can be of interest also. A comprehensive history of the use of these metrics is available in Jacob
and Karn (2003).
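
Once fixations are available, AOI statistics such as these reduce to straightforward bookkeeping. The sketch below uses hypothetical rectangular AOIs to compute total gaze time, fixation count and mean fixation duration per area for a single recording; averaging these values across recordings then gives group-level figures.

```python
from collections import defaultdict

# Hypothetical AOI definitions: name -> (left, top, right, bottom) in page pixels.
AOIS = {
    "logo":       (0,   0,  200,  80),
    "navigation": (0,  80,  160, 600),
    "main_offer": (160, 80,  760, 400),
}


def aoi_of(x, y, aois=AOIS):
    """Return the name of the AOI containing the point, or None."""
    for name, (left, top, right, bottom) in aois.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None


def aoi_statistics(fixations):
    """Per-AOI total gaze time, fixation count and mean fixation duration.

    `fixations` is a list of (x, y, duration_ms) tuples in page coordinates
    for one recording.
    """
    stats = defaultdict(lambda: {"total_ms": 0.0, "count": 0})
    for x, y, duration_ms in fixations:
        name = aoi_of(x, y)
        if name is not None:
            stats[name]["total_ms"] += duration_ms
            stats[name]["count"] += 1
    for s in stats.values():
        s["mean_fixation_ms"] = s["total_ms"] / s["count"]
    return dict(stats)
```
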
All of these analysis techniques, both those involving visual techniques and those looking at
statistical measures, are supported by modern software tools such as Tobii Technology’s
ClearView software. These results and tools are being successfully used both in industry and
academia to provide new insights into the design of pages. The new techniques available are
providing a measurable impact on both the actual and perceived value of user testing. They are
also providing measurable improvements in the effectiveness of the sites that have been studied.
Additional insights to traditional techniques
Complementing verbal protocols
Verbal protocols, also known as think- or talk-aloud protocols, are commonly used in usability
testing today. Papers in the field, and the practice of commercial usability labs, show that this is
often considered the definitive usability technique; it could be said to be the industry
standard method of user testing. Jakob Nielsen and many others, who are often seen as leading
lights in the area of usability, are tireless supporters of these techniques. It appears to be an
effective tool to discover usability issues (Ebling & John, 2000 amongst many others). The take-
up of this technique in commercial labs shows this to be the case.
Though the think-aloud protocol is a useful technique, the fact that there are some inherent
problems with the technique cannot be ignored. There is a range of in-depth research into these
issues which is comprehensively examined in Eger (2005), a paper comparing talk-aloud,
retrospective and eye movement data cued retrospective protocols.
In the examination of the previous research into talk-aloud, Eger (2005) identified the following
issues as problems with the think-aloud protocol:
• Inaccuracy – Subjects generate causal theories to explain their behavior rather than
reporting their actual mental process
• Incompleteness – Subjects are not able to put thought processes into words. This is
especially true of the processes we are most interested in studying (e.g. they will become
silent when struggling with an issue). Subjects edit their speech based on what they
anticipate the moderator wishes to hear (this can be greatly increased with biased
prompting that is often used in practical testing)
• Interference – Where the actual thought process of the subjects is changed by the
process of talking as the subject thinks.
The results in Eger (2005) show that the use of an eye movement data cued retrospective protocol
uncovered a much higher number of usability problems, suggesting a lower level of
inaccuracy and incompleteness. The subjects were much more likely to complete the tasks they
were given and reported a preference for the protocol, feeling that there was less interference
from the testing process. This would show that interference is much less of an issue when using
this protocol.
Interference is arguably one of the most critical issues in think-aloud testing. This occurs when
subject behavior is affected by the requirement to verbalize their thoughts, searching strategies
and so on as they are performing a task. Both the talk- and think-aloud processes can also slow a
subject's performance or alter their spontaneous responses. These protocols also place a high
cognitive load on the subject; often, when struggling to solve a problem, subjects will become
silent at the very time the practitioner most needs them to be talking.
Many of these issues may not be severe in a particular test but the issues should not be ignored.
It does seem that some of the cognitive processes practitioners are looking to gain a window into
using verbal protocols are very clearly shown in eye tracking data. Especially when this data is
viewed in real-time, the requirement on the subject to constantly verbalize, particularly in the
periods of high cognitive load such as searching, can be reduced dramatically. As is
shown in Eger (2005), this does not need to replace verbal protocols and is very much
complementary.
One possibility might also be the use of what could be thought of as a fusion of the think-aloud
and retrospective protocols. One of the goals of using think-aloud may be to ensure that
problems can be understood, both how the user got into the problem and the nature of the
problem itself. Using eye tracking one could allow the subject to act without having to verbalize
every step or reaction until the subject seemed to be in a problematic situation. When viewing the
eye tracking information in real-time it is normally quite obvious when a subject is at an impasse.
The moderator could then prompt the subject to explain the problem they are having and
explore how it developed. If this is a major problem a replay of the session with eye tracking
information may be useful. This is especially true when the subject has a specific task and you are
aware of what that task is. Such a technique then provides the best of both worlds, combining the
depth of understanding offered by the talk-aloud protocol with the faster and more natural
behavior experienced without the need for constant verbalization.

Stunning Deliverables
An issue with current usability testing reports is that they struggle to communicate their message
effectively; they often lack strong tools such as stunning visuals. Having the best results and
analysis will not always convince a customer or the customer's development team. Having
objective deliverables, such as video of the web session with eye gaze overlaid or hotspots of
multiple subjects, can provide not only an impression of quality but also conclusive and
objective proof that what is being reported is factual. With previous testing methodologies, the
only proof often arrives after an extensive redesign, when, months later, the click-through results
and sales show the effect. With eye tracking there is the possibility of doing rapid testing
to show objective results of any changes required.
It is often difficult to choose a visual design for a website. Individuals in the customer’s
organization, designers and developers often have strong opinions on what will work or won’t
work in the visual design of a specific site. These opinions are just that, opinions. They are often
not backed up by any proof at all. It is extremely difficult to convince someone that their opinion
of how a page should be laid out is incorrect. Eye tracking provides conclusive, measurable and
repeatable performance measures of a particular visual layout. Even with small sample sizes and
an early prototype page you should be able to effectively choose and justify the visual designs to
your customer.

Answering typical questions


Although eye tracking alone does not offer all answers to a usability study, it can add clear value
in answering certain key questions with a greater amount of understanding. A couple of
straightforward examples are described below.

What happened in the breakdown event?


One of the most important events to fully understand in usability testing is what truly occurs
when a subject finds a problem or has some sort of issue - what is often called a “breakdown
event”. This is often the very event a usability practitioner is most interested in. Sometimes the
reason for an issue or problem may be unclear or the information the subject is able to provide
may be ambiguous. Quite often, test subjects tend to pause in their think-aloud verbalizations
during such events. Having the eye tracking information as a supplement to think-aloud and
video of the user's expressions can provide a level of understanding that is difficult to replicate
without the eye tracking data. One could think of the eye tracking information as providing a
context to the expressions and think-aloud information one is trying to understand. You can see
exactly which piece of text or button is causing the subject to frown, and what they are looking at
when explaining their frustration. This can provide vital clues to understanding the breakdown
event and proposing an optimal solution to the issue.
What do users see?
One of the most obvious questions answered by eye tracking is “What did people see?” This can
be a very important issue if the element you are interested in is a key message or offer your
customer considers vital. Design and marketing teams spend vast resources developing and
promoting messages and it is critical to find out if these messages are getting across and if not,
why not. The level of subject recall can be determined by questioning them after they exit the
page. This can be problematic as there are numerous possible reasons a subject may not recall the
information. The information may not be interesting to them or, after being attracted to the area
of the screen containing the information, they may have read only the first piece of text and moved on. Either
of these reasons for non-recall may lead you to change the graphical design or layout
unnecessarily (a change unlikely to bring any improvement), when a change of copy, likely to be a
cheap and effective solution, is what is actually required.
The meaning and scope of the question “what did people see” could cover a huge range of
possibilities. As such, the answer may best be illustrated using scan paths and hotspot
visualizations, or basic eye tracking metrics such as “total time spent looking at an object”. In
some instances a combination of these different tools might illustrate the answer with greater
clarity.

Figure 2 - Graph of gaze time statistics (mean gaze time per AOI)

Where should valuable content be placed?


Eye tracking data obviously points to the
attractiveness and/or the level of attention people
focus on the various visual elements or sections of a
webpage. This allows you to focus valuable
development and design resources on the areas of
the webpage which will provide the greatest return
on investment. Effort placed on high traffic areas
may provide measurably higher results than those on
unused sections. Conversely one could wish to try
and spread attention to other sections of the page as
you may feel you are not effectively using the page's
valuable “real estate”. The eye tracking information
will easily show when a page has text or visual
elements which are “dead” or unused space. The
hotspot tool shows the spread of attention and areas
of “dead space” extremely effectively and with great
clarity.

Figure 3 - Hotspot of a watch commercial


Do available visual cues drive users effectively?
Information on what is attended to could also be used to determine if subjects find information
that may influence a decision, such as a purchase. A useful tool often used in a study may be a
pre-study interview or screening process to ensure a subject is interested in the message or
products you are presenting to them. If they report being very open to your product or message,
then the way they read or scan the information on a page will be extremely interesting and
should illustrate well how you are communicating with your target audience, regardless of
whether they clicked on or even remembered the information.
Figure 4 - Plot of fixations

How effectively do users complete the task?


Another important question is how effectively users are able to find the information they need.
Also, when they find information, do they correctly identify it as the information they are looking
for?
In many usability studies, it is common to measure “time to success” of a task. This is one of the
few “objective” measurements practitioners have today, but one could think of many cases where
this may be misleading. For example, not all tasks have a distinct finishing point defined by a
mouse-click. Also, many “time to success” measures may be misleading if the subject carefully
reads relevant information before completing the task. In this case, the subject has successfully
found the right information quickly but their time to completion may be much longer than a
subject who searches quickly in 4 or 5 wrong places before finding the relevant information but
then completes the task quickly.
The question of how long it takes a subject to find the correct information can be answered in a
simple and powerful way using basic eye tracking metrics. One example is to measure the time
from the start of a task to when the subject first looks at the relevant information. For a closer
examination, you could determine whether they have correctly identified the information simply
by looking at a video recording of screen contents with eye gaze overlaid, to identify whether they have then read the
information or simply scanned it and moved on.

Again, this could be a time when think-aloud might be used in conjunction with eye tracking
information. When viewing the eye tracking information in real-time, one would see the person
go to the correct area and then leave without a seemingly appropriate response. This would
prompt you to ask the subject what their thought processes were in continuing to search when
they had found the correct information.

Figure 5 - Graph of time to first fixation (mean per AOI)
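
A minimal sketch of this “time to find” measure is given below: it reports the time from the start of the task until the subject's gaze first lands inside the rectangle containing the relevant information, and averages that value over a group of recordings (the kind of per-AOI mean plotted in Figure 5). The fixation tuple format is an illustrative assumption.

```python
def time_to_first_fixation(fixations, target_rect, task_start_ms=0.0):
    """Milliseconds from task start until gaze first lands in `target_rect`.

    `fixations` is a time-ordered list of (start_ms, x, y) fixation centroids;
    `target_rect` is (left, top, right, bottom) in page pixels.  Returns None
    if the relevant area was never fixated, i.e. the subject never "found"
    the information, however long the task took.
    """
    left, top, right, bottom = target_rect
    for start_ms, x, y in fixations:
        if start_ms >= task_start_ms and left <= x <= right and top <= y <= bottom:
            return start_ms - task_start_ms
    return None


def mean_time_to_first_fixation(recordings, target_rect):
    """Mean time to first fixation over a group of recordings.

    Recordings in which the target area was never fixated are ignored.
    """
    times = [t for rec in recordings
             if (t := time_to_first_fixation(rec, target_rect)) is not None]
    return sum(times) / len(times) if times else None
```
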
Limitations of eye tracking
Data easily influenced
When running eye tracking studies it can be very easy to influence the eye tracking data you
collect. Pages or interfaces that are known or previously experienced have a very different
viewing pattern, even after minimal exposure, than pages or interfaces that are completely new.
This first experience of a new page or interface can often provide a huge amount of information
to the designer about where subjects expect to find a piece of information. It is therefore
important to provide all the information about the task required to the subject before the
stimulus is shown. It is also important not to interrupt or introduce secondary cognitive
processes during this initial period.

They looked at it but did they really “see” it?


Although eye tracking information does tell us the direction a person's eyes are pointed in, this
does not directly translate into a conscious and complete cognitive understanding of the object
their eyes were pointed at. It can be problematic to assume that because someone saw something
they consciously processed it, or that if they processed it they understood what they were looking
at. When we look at eye movements on a “Where's Waldo” picture, for example, some subjects
look directly at Waldo two or three times before processing the visual correctly to “find” him. The
same is true for users of a web page. Just because their eyes were aimed at an area of the screen
does not, in all cases, allow you to draw the concrete conclusion that this information went into
their conscious mind and was correctly processed. The likelihood that this occurred is often quite
high (the longer they have looked at an area, the higher it is), but it is problematic to draw
definitive conclusions on an individual basis. It is possible, though, to draw conclusions about
groups of people. For example, if most people looked at an area containing a simple visual
message then there is a high likelihood that this information was passed to a majority of users.
It is often important to examine the next target of the subject’s gaze or examine any action they
took on the basis of their eye movements. Here, transition matrix statistics are extremely useful,
as is the comparison of eye movements to mouse clicks and other interaction events.
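
As an illustration, the sketch below builds such a transition table from a scan-path whose fixations have already been labelled with AOI names (for example using the AOI lookup sketched earlier); each entry records how often gaze moved from one area directly to the next.

```python
from collections import defaultdict


def transition_matrix(scanpath_aois):
    """Count AOI-to-AOI gaze transitions along one scan-path.

    `scanpath_aois` is the ordered list of AOI names the fixations fell in
    (None entries, i.e. fixations outside every AOI, are skipped).  The result
    maps (from_aoi, to_aoi) -> number of transitions, which can be compared
    against mouse-click logs to see what was inspected before an action.
    """
    counts = defaultdict(int)
    labelled = [a for a in scanpath_aois if a is not None]
    for src, dst in zip(labelled, labelled[1:]):
        if src != dst:                  # only count movements between areas
            counts[(src, dst)] += 1
    return dict(counts)


# Example: a scan-path that bounced between navigation and the main offer.
print(transition_matrix(["logo", "navigation", "main_offer",
                         "navigation", None, "main_offer"]))
# {('logo', 'navigation'): 1, ('navigation', 'main_offer'): 2,
#  ('main_offer', 'navigation'): 1}
```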

How people use peripheral vision


When looking for information or attempting to solve a task, a subject will look around the screen
by making a series of saccades. They choose where they will saccade to using both their
expectations and their peripheral vision. If the subject is looking for navigation, for example, they
will saccade to something in their peripheral vision which has the appearance of a navigation
area. Our peripheral vision is of much lower resolution, or acuity, than our main or foveal vision,
so a subject will not be able to “know” what is in an area they are saccading to but will get a
“feel” for it. An example of this is people avoiding ads, especially banner ads, as these are
superfluous to their task and can easily be identified by their look and their expected location
(http://www.poynterextra.org/eyetrack2004/advertising.htm).
As you would expect, the more focused a subject's task is, the more specific their visual search
will be, and thus the more important their peripheral vision becomes. An example of this
would be asking someone to find the products section of a website. They will most often saccade
directly to the navigation areas, or what they expect to be navigation areas. As such, columns of a
different color on the left-hand side of the page, or a row of a different color at the top of the
page, would be expected to receive an extremely high number of fixations.
People use their peripheral vision to determine what areas seem to be of interest and which are
clearly not of interest, especially when they are searching for specific information. They use this
area of vision to define targets for their subsequent eye movements or saccades. As a result they
often deliberately avoid areas which seem to be “useless” to them, for example advertisements. It
has been shown in a number of studies that users have a very highly developed skill of avoiding
all ads, be they in-line, banner ads or a variety of other formats.
It is obvious that an eye tracking system cannot determine which parts of the peripheral visual
field a subject was driven by. By definition, the eye tracker reports where the subject is looking,
not the areas where they are not looking.
A visual design could fall down simply because a visual element looks too much like an ad and
people then take active steps to avoid the area based on the visual information they collect with
their peripheral vision. After you have documented the behavior of subjects avoiding an area,
using the gaze plot or gaze replay, it may then be useful to interview them to determine why they
avoided an area. Again, it is important to emphasize that eye tracking is one of many tools in
the usability arsenal. Interviews and think-aloud often provide the depth of understanding of an
issue that eye tracking identifies.

Conclusion
Modern solutions of the type provided by Tobii Technology remove the issues that have
traditionally held back eye tracking. There have been many issues that have previously forced eye
tracking into the “interesting but commercially useless” category: long setup times, the high level
of expertise required, low reliability of both equipment and data, and poor product packaging,
among others. It has been shown in many commercial labs that these issues have now been overcome.
been overcome. Modern solutions provide hassle-free collection of data and tools to allow a
practitioner to easily analyze this data.
The value added by eye tracking in traditional usability studies is clearly significant. One can
answer more questions, answer them with greater depth and conviction and develop deliverables
that clearly illustrate these results. This is shown in research such as Eger (2005), the Eyetrack
III study (http://www.poynterextra.org/eyetrack2004) as well as the business growth of
companies running eye tracking tests. The questions which eye tracking can answer are also ones
which simply cannot be answered by traditional techniques. All of this will provide clear
advantages in marketing as well as improved ROI for your customers.
Modern eye tracking solutions provide easy-to-use and effective tools which can be used to
complement traditional usability techniques. Running an eye movement data cued retrospective
protocol allows some of the more problematic issues of traditional usability testing to become a
thing of the past. There is also the possibility to run more ambitious studies looking to gain even
greater insights from a larger subject base than the traditional 6-15 subjects that are often run in
usability studies.
The ability to run traditional studies, gathering all the data that they normally collect, plus have the
added value eye tracking provides at minimal additional cost and effort, adds enormous value to a
study. This, combined with the less problematic methodology, ensures that a traditional usability
test can provide better results with more confidence. The extra information and insight that can be
gained using eye tracking ensures that customers are provided the maximum benefit from their
investment. It is only a matter of time until eye tracking protocols become a standard part of
usability testing and website design in general.
References
Aaltonen A., (1999). Eye Tracking in Usability Testing: Is It Worthwhile? In CHI'99 Workshop
The Hunt for Usability: Tracking Eye Movements, Pittsburgh, PA, May 1999.
Cowen, L. (2001). An Eye Movement Analysis of Web-page Usability.
http://www.cs.ucl.ac.uk/staff/J.McCarthy/pdf/library/eyetrack/laura_cown.pdf
Ebling, M.R. & John, B.E. (2000). On the contributions of different empirical data in usability
testing. In Proceedings of the conference on Designing interactive systems: processes, practices, methods, and
techniques 2000.
Eger, N. L. (2005). Using Eye-movement Data to Cue Retrospective Protocols in Online
Usability Testing.
Goldberg, J.H., Stimson, M.J., Lewenstein, M., Scott, N. & Wichansky, A.M. (2002). Eye tracking
in web search tasks: design implications. In Proceedings of the Eye Tracking Research & Applications
Symposium 2002. Pg 51-58. New York, ACM.
Jacob, R. & Karn, K. (2003). Commentary on Section 4: Eye tracking in human-computer
interaction and usability research: Ready to deliver the promises. In The Mind’s Eye, Elsevier,
North Holland
Nielsen, J., Clemmensen, T. & Yssing, C. (2002). Getting access to what goes on in people's
heads?: reflections on the think-aloud technique. In Proceedings of the second Nordic conference on
Human-computer interaction. Pg 101-110 New York, ACM.
Redline, C.D. & Lankford, C.P. (2001). Eye-movement analysis: a new tool for evaluating the
design of visually administered instruments (paper and web). Paper presented at 2001 AAPOR
Annual Conference, Montreal, Quebec, Canada, May 2001. In Proceedings of the Section on Survey
Research Methods, American Statistical Association.
Schiessl, M., Duda, S., Thölke, A., Fischer, R. (2003). Eye tracking and its application in usability
and media research. In Sonderheft: Blickbewegung in MMI-interaktiv Journal - Online Zeitschrift zu
Fragen der Mensch-Maschine-Interaktion. ISSN 1439-7854. Gastherausgeber: Katharina Seifert
& Matthias Rötting. 12.03.03, Ausgabe Nr. 6.
http://www.poynterextra.org/eyetrack2004/advertising.htm
