
Electronic Government, An International Journal, Vol. 8, No. 1, 2011

Evaluating usability, user satisfaction and intention
to revisit for successful e-government websites

Dae-Ho Byun*
School of Economics and Logistics,
College of Economics and Commerce,
Kyungsung University,
Busan 608-736, Republic of Korea
E-mail: dhbyun@ks.ac.kr
*Corresponding author

Gavin Finnie
School of Information Technology,
Bond University,
QLD 4229, Australia
E-mail: gfinnie@bond.edu.au

Abstract: This paper determines a set of usability factors for evaluating
e-government websites and describes the causal effects that determine the extent
to which e-government website usability affects user satisfaction and their
intention to revisit sites for continued usage. Measurement data was gathered
from user testing on the websites of representative administration departments
in South Korea. This data was analysed using factor analysis and a structural
equation model was developed. Navigation, utilisation of image and graphics,
effective readability, utilisation of multimedia technology, site structure and
information search capability were shown to be major factors affecting
usability of e-government websites. Findings suggest that usability strongly
affected both user satisfaction and intention to revisit.

Keywords: e-government; usability; user testing; usability factors; user
satisfaction; revisiting intention; website evaluation; factor analysis; structural
equation model; South Korea.

Reference to this paper should be made as follows: Byun, D-H. and Finnie, G.
(2011) 'Evaluating usability, user satisfaction and intention to revisit for
successful e-government websites', Electronic Government, An International
Journal, Vol. 8, No. 1, pp.1-19.

Biographical notes: Dae-Ho Byun is a Professor of the School of Economics
and Logistics at Kyungsung University, South Korea. He has published in
Information and Management, Expert Systems with Applications, International
Journal of Information Management, International Journal of Computer
Applications in Technology, Human Systems Management, Journal of End
User Computing, and the Encyclopedia of Computer Science and Technology.
His main research interest concerns methodologies for evaluating e-government
and mobile government.

Gavin Finnie is a Professor of the School of Information Technology at
Bond University, Australia. He has been involved in Computer Science and
Information Systems teaching and research for over 30 years. He has published
over 100 papers in journals and refereed conferences as well as a book and
several book chapters. His research interests are in the area of AI/expert system
applications in information systems, intelligent agents, electronic business,
business process management and real-time business intelligence.

Copyright © 2011 Inderscience Enterprises Ltd.

1 Introduction

Efficient public service is critical to national competitiveness. To improve the provision
of public services, countries throughout the world are promoting the establishment
of e-government. The OECD defines e-government as 'the use of information and
communication technologies, and particularly the internet, as a tool to achieve better
government' (OECD, 2003). One of the core issues in the realisation of e-government
is the implementation of a portal site (Wescott, 2001; Zhang and Hsieh, 2010). Through
this portal, citizens can obtain public services and useful
information regarding government policies, because key government websites are linked
by the portal. Governments obviously aim for good quality and usable e-government
websites. For a successful government website, however, users should be satisfied with
their use of the site and revisit as frequently as they need to.
Website usability has attracted considerable research on website evaluation. Usability
has been regarded as one of the most important criteria for measuring and evaluating
websites (Zimmerman and Muraski, 1995; Nielsen, 2000; Smith, 2001; Badre, 2002;
Palmer, 2002). Usability means that the site is easy to learn, can be used efficiently, is
easy to memorise, has few errors, and is subjectively satisfactory to the user (Nielsen,
1994, 1996). Highly usable websites support the user and allow users to accomplish their
goals quickly, efficiently and easily (Nielsen, 2000).
Measuring and evaluating e-government websites is essential for improving these
websites and usability is regarded as one of the most significant criteria to measure
(Choudrie et al., 2009; Teo et al., 2008-2009; Henriksson et al., 2007; Byun, 2007).
Evaluation of websites provides us with guidelines to improve the website design.
To evaluate e-government websites from the perspective of usability, we first need
to develop a measurement model by determining the major factors affecting usability,
which become the evaluation criteria of the model. Given that user goals may differ
between commercial and e-government websites, we need to find whether the same
principles apply to both classes of site.
A primary goal of e-government websites is to have satisfied users who will revisit
the site as needed. Highly usable websites generally lead to user satisfaction with
the site. Moreover, continuous use of e-government websites is an important issue in the
mature stage of e-government. Revisiting suggests that e-government users are willing to
search for helpful information or solve their government-related tasks, e.g., renewing a
driver's licence or paying tax on the internet, by visiting websites instead of using
telephone enquiries or making a personal visit.
This paper considers several research questions. Is it relevant to apply the general
principles of usability used for evaluating commercial websites to e-government sites?
If not, what are the special criteria suitable for e-government website evaluation? What
influences users to revisit e-government websites? What are the relationships between
usability, user satisfaction and intention to revisit? Finding these relationships provides us
with guidance for improving e-government websites and they can be used as a model for
promoting e-government website success.
The objective of this paper is to describe the importance of usability in evaluating
websites. We find major factors affecting usability of e-government websites and show
causal effects to determine the extent to which usable websites affect user satisfaction
and intention to revisit. Our paper is part of research outcomes published in Korean
journals by the first author (Byun, 2005, 2007). We performed user testing on the
websites of representative administration departments in Korea, which are linked by the
Korean e-government web portal. Users were asked to complete the usability evaluation
checklists after finding correct answers to the required questions by navigation. The target
website of the Korean government portal, which opened in 2002, had a top rank in 2006
and 2007 in the e-government country ranking (West, 2007).
The paper is organised as follows: First, a research model and hypotheses for
the research questions are developed from recent research on e-government focused
on e-government success, efficiency and evaluation. Second, data is collected by
user testing, analysed by factor analysis and a structural equation model developed.
Third, we present findings that include the ranking of e-government websites, factors
affecting usability and causal effects between usability, user satisfaction and intention to
revisit.

2 Literature review

Recent research on e-government has focused on e-government success, efficiency and
evaluation. Usability, usefulness, user satisfaction, trust, quality and continuous usage
were considered as some of the interesting issues for e-government success. Hung et al.
(2006) identified the factors that determined the public acceptance of e-government
services in Taiwan. They found the important determinants were usefulness, ease of use,
risk and trust. Srivastava and Teo (2007) analysed the relationship of e-government
development with national efficiency and performance. Srivastava and Teo (2008)
examined the relationships of e-government development and e-participation with
national business competitiveness. Wu et al. (2009) argued that user interface design
issues were highly significant for e-government and m-government success. Verdegem
and Verleye (2009) developed a structural model for measuring user satisfaction in
the context of e-government. Gotoh (2009) developed a theoretical model for assessing
the performance of e-government services to clarify the factors that increase user
satisfaction.
Baker (2009) suggested a content-analysis methodology utilising Guttman-type scales
where possible to refine e-government usability assessments. Lean et al. (2009)
investigated the factors that influenced the intention to use e-government services
among Malaysians and they found that trust and usefulness affected the intention to use
e-government.
Sarmad and Hamid (2009) developed evaluation criteria for assessment of
e-government systems. Teo et al. (2008-2009) proposed and tested a model to
assess e-government website success. They argued that intention to continue using the
e-government websites is more important for e-government website success than the
initial intention to use. Henriksson et al. (2007) described an instrument for evaluating
of quality following input of data. Detlor et al. (2010) identified internal factors within
government that affect the adoption and use of government websites.
Our paper focuses on measuring usability. Different usability evaluation techniques
have been developed and incorporated into the process of website design and
development. Monique and Jaspers (2009) provided an overview of the methodological
and empirical research available on usability inspection and testing. Delice and Güngör
(2009) proposed a new approach to reveal usability problems on a website and to define a
solution priority for these problems. Nakamichi et al. (2007) developed a new usability
evaluation environment that supports recording, replaying and analysis of a gazing point
and operation while a user is browsing a website. Fang and Holsapple (2007) developed
a taxonomy of factors influencing website usability. Hernández et al. (2009) analysed
the main factors of website quality, accessibility, speed and navigation that must be taken
into account when designing a commercial website.
Previous research work for e-government website evaluation usually aims to define
the concept, the major factors and observed variables of usability and compute evaluation
scores for the websites. Smith (2001) evaluated the usability of e-government websites
in New Zealand. Bertot and Jaeger (2006) suggested methods for assessing e-government
websites such as functionality, usability and accessibility. Byun (2005, 2006), Byun and
Jeon (2006) performed research to measure usability of Korean e-government websites.
Byun (2005) evaluated the Korean e-government websites based on usability. Eighteen
representative government sites were chosen and two testing methods were performed
with different questionnaires to different respondents. Byun and Jeon (2006) considered
two approaches of user testing and usability inspection for evaluating e-government
websites. Byun (2006) picked 30 popular e-government websites in Korea and
determined the factors, which significantly affected the usability as follows: contents
design, page design, graphic design, easy to learn, graphic design, navigation, system,
interaction and web functionality.

3 Research model and hypotheses

The constructs we will consider are perceived usability (PU), user satisfaction (US) and
intention to revisit (IR). Perceived usability means the usability based on the users
viewpoint. Our objective is to find a causal relationship between these three constructs
such that usable e-government websites affect user satisfaction and intention to revisit
sites, respectively. The basic proposition of our research model is that user satisfaction is
determined by perceived usability. Similarly, intention to revisit is also influenced by
user satisfaction, i.e., the more usable the website, the higher the user satisfaction, and the
more the user satisfaction, the stronger the intention to revisit the website.
Flavián et al. (2006) showed that greater usability of websites was found to have
a positive influence on user satisfaction and this also generated greater website loyalty.
We can assume loyalty is the same concept as intention to revisit. Floropoulos et al.
(2010) developed a model that included the constructs of service quality, perceived
usefulness and user satisfaction for government information system success. The results
provided evidence that there were strong connections between these constructs. We can
define usefulness as a subset of usability. Oztekin et al. (2009) proposed a methodology to
combine web-based service quality and usability dimensions of information systems and
revealed a strong relationship between quality and usability.
From the preceding discussion, we can define the relationship between usability and
user satisfaction. However, we divided user satisfaction into two constructs. One is
physical satisfaction (PS). Although PS is a difficult concept to define precisely, for this
research we consider it to include the view that users are not fatigued while performing
the task, are happy to proceed with the task and are willing to continue with the next task.
The other is achievement satisfaction (AS), measuring whether users achieved their
objectives well on the website. PS occurs in the process of finding the information users
want, but AS occurs after users find the required information. We hypothesise:
H1: Perceived usability positively affects physical satisfaction.
H2: Perceived usability positively affects achievement satisfaction.
Chen and Macredie (2005) determined that usability for e-shopping interfaces was
critical to help users to obtain their desired result. Casaló et al. (2008) confirmed the
influence of website usability on consumer satisfaction and showed that usability played
a special role in the loyalty formation process. It follows that
H3: Perceived usability positively affects intention to revisit.
Teo et al. (2008-2009) found that quality perceptions of citizens were affected by their
trust in e-government websites and intention to continue using was affected by user
satisfaction. Deng et al. (2010) showed that trust, customer satisfaction and switching
cost directly affected customer loyalty in using mobile instant messaging. From the
preceding discussion, we hypothesise:
H4: Physical satisfaction positively affects achievement satisfaction.
H5: Achievement satisfaction positively affects intention to revisit.
In the research model as shown in Figure 1, perceived usability is considered a prior
variable; user satisfaction is considered an intermediate variable; revisiting intention is
considered an achievement variable.

Figure 1 Research model



Items for user satisfaction were derived from the Spool et al. (1999) research.
User satisfaction was measured by seven items, which were physical fatigue (US1),
confusion during the task (US2), degree of stress after finding a correct answer (US3),
overall physical feeling (US4), actual speed of tasks (US5), satisfaction about the quality
of information provided (US6), and attitude about proceeding to another task after
completing a task (US7). Intention to revisit was measured by three items, which were
acquisition of information (IR1), civil appeal (IR2) and getting several documents on
the e-government websites (IR3).
Figure 1 gives a graphical representation of the above-mentioned hypotheses. These
hypotheses were tested with a questionnaire during user testing.

4 Method

4.1 User testing and data collected


User testing and usability inspection are widely used for measuring usability.
User testing involves observing users performing specific tasks with websites to identify
what problems they have as they use the site. Users who completed the tasks and found
correct answers by navigating the website were then asked to complete
a questionnaire expressing their satisfaction with the website.
The target websites in this paper were the 18 Korean e-government websites and
a Korean e-government web portal. These websites represent the administrative
departments of South Korea responsible for the following affairs:
ministries of finance and economy
education and human resources development
unification
foreign affairs and trade
justice
national defence
government administration and home affairs
science and technology
culture and tourism
agriculture and forestry
commerce, industry and energy
information and communication
health and welfare
environment
labour
gender equality and family
construction and transportation


maritime affairs and fisheries.
The subjects who participated in user testing were 60 students who were enrolled in a
course on the design of websites offered at a Korean university.
Since there were individual differences in their skills in using the internet, subjects
were asked to visit and navigate the e-government websites at some time before the main
user testing. To develop task items, Spool et al. (1999) proposed four different types of
task items, which consist of questions asking simple facts, questions asking judgements,
questions asking comparison of facts and questions asking comparison of judgements.
In this paper, we prepared two questions asking simple facts for each site to save test
time.
Question items were developed so that the subjects were familiar with the questions
and they could find an answer on the main page or sub-pages at the first level below the
main page. For example, in the website of the Ministry of Finance and Economy, the
simple question is 'What is the fax number?' The question requiring judgement is
'Is the annual budget sufficiently well described for easy understanding?' The question
requiring comparison of facts is 'Which department has the smallest annual budget?'
The question requiring comparison of judgements is 'For departments A and B, which
has more reasonable government policies for citizens?'
Task items were simplified for users to answer as many questions as possible and
made easy enough to directly find the answer on the main page or a sub-page one level
down from the main page. The questions that proved difficult enough to make subjects
give up were removed after the prior user testing.
Tasks were carried out in a computer laboratory with access to the internet. After
finding answers for the task items, subjects filled out questionnaires. Subjects who could
not find an answer to a certain question in a reasonable amount of time were asked to
proceed to the next question after recording their opinions. To reduce fatigue, tests were
performed for three hours a day over three days. Fifty-eight valid results were obtained
from the 60 users, excluding those who gave up during the test period. Usability
was measured on a seven-point scale ranging from 'strongly disagree' (1) to 'strongly
agree' (7). The measurements were then converted to a 100-point scale.
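The paper does not spell out the transform from the seven-point response scale to the 100-point score; the sketch below assumes the common linear mapping in which 1 maps to 0 and 7 maps to 100.

```python
def to_100_point(likert: int) -> float:
    """Rescale a 7-point Likert response to a 0-100 score.

    Assumption: the exact transform used in the study is not stated;
    this applies the common linear mapping 1 -> 0 and 7 -> 100.
    """
    if not 1 <= likert <= 7:
        raise ValueError("response must be between 1 and 7")
    return (likert - 1) / 6 * 100
```

Under this mapping a neutral response of 4 scores 50, so a site average of 67.6 points corresponds to roughly 5.1 on the original seven-point scale.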

4.2 Measurement items


The questionnaire items were developed by applying heuristic principles proposed
by Nielsen (2000). The questionnaire items consisted of three constructs of page design,
contents design and site design. For these three constructs, we have selected 9, 10 and 12
appropriate detailed questions, respectively, as shown in Table 1.
Since a page is the user's first point of contact with a website, the principle of
page design should address maintaining a positive image and encouraging users to
stay longer. The page design construct measures whether the pages can be
accessed rapidly and allow easy navigation between pages. Content may play a role
in motivating users to visit the websites again. Contents design measures whether
the content is attractive and easy to read. Site design measures how usable the site
structure is.

Table 1 Items for measuring usability

Construct Item code Questionnaire item


Page design (I) I-1 Content accounts for at least half of a page's design, and
preferably closer to 80%. Navigation menu is kept below
20% of the space for destination pages
I-2 Graphics are mixed with text
I-3 Navigation between pages is easy
I-4 Web page works well on a 17-inch monitor running at a
resolution of at least 1024 × 768 pixels
I-5 A new page can be accessed within 10 seconds
I-6 Users can predict the response time in downloading large
pages or multimedia files by indicating the size of the
download next to the link
I-7 Pages use multiple occurrences of the same image instead
of using different images
I-8 Users can decide to follow a link after reading what it is
I-9 Pages minimise use of frames
Contents design (II) II-1 The text is short and concise
II-2 Users can scan text and pick out keywords, sentences, and
paragraphs of interest while skipping over those parts of
the text they care less about
II-3 Pages show overly long documents, which have been split into
two parts
II-4 The page title has enough words to stand on its own and be
meaningful when read in a menu or a search listing
II-5 The text is easy to read in terms of font size and paragraph
alignment
II-6 Colours are highly contrasted between the text and the
background
II-7 Higher-level pages minimise the number of illustrations
and details seen by drill-down
II-8 The help menu is easy to search and provides good
explanation
II-9 Animations have their appropriate place in web design
II-10 Video images have their appropriate place in web design
Site design (III) III-1 Users can understand what to do in home pages
III-2 Home pages and interior pages share the same style
III-3 The site environment is designed to reflect the real-world
III-4 Navigation interfaces help users answer the following
questions: 'Where am I?', 'Where have I been?' and 'Where
can I go?'
III-5 Users can decide alternative movements relative to the
structure of underlying information space, using the
summarised information on the site
III-6 Information is represented by grouping, summarising,
filtering, and examples
III-7 Boolean search avoids AND and OR operators

Table 1 Items for measuring usability (continued)

Construct Item code Questionnaire item


Site design (III) III-8 The search result page has a sorted list of hits with the best
hits at the top. The search results list eliminates duplicate
occurrences of the same page
III-9 A good-quality FAQ is provided to answer the user's
questions
III-10 The URL is understandable and as short as possible
III-11 The site supports user-contributed contents
III-12 The applet supports data processing, operation, query, and
navigation control

5 Findings

5.1 Ranking of e-government websites


Table 2 represents the sites in which correct answers were obtained for at least two
tasks. Nielsen (2000) suggested that only five users for each test are enough for most
usability test problems. On this basis, this study is considered to have a valid number
of respondents to measure the website usability. We included those websites in which
at least five subjects gave correct answers to the task. The three sites of S11, S12 and S17
were disregarded for evaluation because less than five subjects answered correctly.
The more subjects that answered the question correctly, the more satisfied with the
websites the subjects appeared. The average score of overall e-government websites was
67.6 points and S15 gained the highest score of 74 points. In particular, S15 was superior
in site design.

Table 2 Average score of e-government websites

S0 S1 S2 S3 S4 S5 S6 S7 S8 S9 S10 S13 S14 S15 S16 S18 Score


I Score 68 67 66 66 65 71 67 68 67 68 67 68 68 75 66 69 68
Rank 4 10 14 15 16 2 9 8 12 5 11 7 6 1 13 3
II Score 74 68 68 70 68 72 66 68 69 71 66 67 69 73 69 73 69
Rank 1 12 10 6 13 4 16 11 8 5 15 14 7 3 9 2
III Score 67 64 63 66 62 69 61 64 66 66 66 65 65 76 64 68 66
Rank 4 11 14 6 15 2 16 13 5 8 7 10 9 1 12 3
Overall Score 70 66 66 67 65 70 64 66 67 68 66 66 67 74 66 70 67.6
Rank 3 12 14 6 15 2 16 11 7 5 10 9 8 1 13 4
I: Page design; II: Contents design; III: Site design.

In page design, item I-3 had the highest score and I-2 the lowest. The subjects were
comparatively satisfied with navigation, but the sites failed to mix
graphics and text effectively. In contents design, item II-5 had the
highest score and II-10 the lowest. The subjects were comparatively
satisfied with the ease of reading the text, but the sites did not properly
use video images. From the results, the websites were reasonably good at offering
information in general text mode, but were insufficient in providing information in
various modes like video and animation.
The site design construct scored below the overall website average. Item III-4 had
the highest score and III-12 the lowest. This indicates that users had no
difficulty in obtaining information by navigation, but the websites focused on
one-way provision of information rather than serving as a medium of mutual
communication.

5.2 Testing hypotheses


We can say that websites are not usable if users of equal ability in internet usage fail to
find the correct answers for the given questions. We analysed possible causes by testing
hypotheses based on demographic criteria, including experience in internet usage and
frequency of internet usage. Internet experience was divided into three categories, which
were less than 6 months, 6 months to 2 years, and more than 2 years. Internet
usage (hours of internet use per week) also had three categories of less than 1 hour,
1 to 7 hours and more than 7 hours. We also tested whether there were any
differences in scores of the websites in terms of demographic criteria, as shown in
Table 3.

Table 3 Hypotheses tests for demographic criteria

Hypotheses p-value Result


There are no differences in Internet experience between users who 0.871 Accept
answered the question correctly or incorrectly
There are no differences in Internet usage between users who answered the 0.875 Accept
question correctly or incorrectly
There are no differences in evaluation scores of websites by gender 0.286 Accept
There are no differences in evaluation scores of websites by internet 0.486 Accept
experience
There are no differences in evaluation scores of websites by internet usage 0.884 Accept
per week

Applying a t-test, all p-values of the five hypotheses were greater than the significance
level of 5%. We can conclude that internet skills and user experience have no effect on
usability and the websites have no differences in usability according to gender, internet
experience and internet usage.
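The paper does not report the detailed test statistics behind Table 3; the sketch below shows the general shape of such a comparison using Welch's two-sample t statistic. The data passed in would be, e.g., internet-experience scores of users who answered correctly versus incorrectly; the samples in the usage note are made up, not the study's raw responses.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom.

    Illustrates the kind of test behind Table 3 (no equal-variance
    assumption). Returns (t, df); the p-value would then be read from
    the t distribution with df degrees of freedom.
    """
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    se2 = var_a / na + var_b / nb  # squared standard error of the difference
    t = (mean_a - mean_b) / math.sqrt(se2)
    df = se2 ** 2 / ((var_a / na) ** 2 / (na - 1) + (var_b / nb) ** 2 / (nb - 1))
    return t, df
```

In practice one would obtain the p-value directly, e.g., with `scipy.stats.ttest_ind(a, b, equal_var=False)`, and accept the null hypothesis when it exceeds the 5% significance level, as in Table 3.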
We tested whether there was significant correlation among the ranks of page,
contents and site design. That is, we tested whether websites of good page design provide
good contents design, or site design. Table 4 shows Spearman rank-order correlation
coefficients. In general, two constructs are highly correlated when the correlation
coefficient is less than or equal to -0.60, or greater than or equal to 0.60. The ranks of the
page and contents design constructs were relatively highly correlated at 0.582 with a
p-value of 0.018. Websites with a high rank in page design also had a high rank in
contents design. The ranks of the page and site design constructs were highly
correlated at 0.632 with a p-value of 0.009. The correlation coefficient of the contents and
site design constructs was 0.774.

Table 4 Rank-order correlation coefficients between constructs

Page design Contents design Site design


Page design R 1.000 0.582 0.632
p-value . 0.018 0.009
Contents design R 0.582 1.000 0.774
p-value 0.018 . 0.000
Site design R 0.632 0.774 1.000
p-value 0.009 0.000 .

We can conclude that when the usability of contents design is high, site design usability
also tends to be high. Because the rank orders of page, contents and site design showed
relatively high correlation, the hypothesis 'There are no differences in website ranks
by construct' was accepted at the 5% significance level. We can conclude that websites
with high usability in one construct, therefore, showed high usability in the other
constructs.
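Spearman's rank-order coefficient reported in Table 4 is simply the Pearson correlation computed on ranks. A self-contained sketch (the inputs in the test are illustrative, not the study's construct scores):

```python
import math

def ranks(values):
    """1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            result[order[k]] = mean_rank
        i = j + 1
    return result

def spearman(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

Applied to the per-site scores of two constructs, a coefficient near +1 reproduces the pattern of Table 4: sites ranked high on one construct rank high on the others.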

5.3 Factors affecting usability


To empirically assess the constructs, we conducted factor analysis and reliability
analysis. We identified six significant factors, shown in Table 5, and grouped them based
on similarity. These were navigation and utilisation of images and graphics in the page
design construct, effective readability and utilisation of multimedia technology in the
contents design construct and site structure and information search in the site design
construct. Principal components factor analysis was conducted to verify conceptual
validity of the measurement instrument using the varimax rotation approach. We selected
factors with an eigenvalue greater than 1.
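The eigenvalue-greater-than-1 retention rule (the Kaiser criterion) can be sketched as below. The correlation matrix here is synthetic, not the study's, and the varimax rotation step is omitted:

```python
import numpy as np

def kaiser_retained(corr):
    """Number of principal components retained under the Kaiser
    criterion: eigenvalues of the correlation matrix greater than 1."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(corr, dtype=float))
    return int((eigenvalues > 1.0).sum())

# Two independent pairs of strongly correlated items -> two factors.
corr = [[1.0, 0.8, 0.0, 0.0],
        [0.8, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.8],
        [0.0, 0.0, 0.8, 1.0]]
```

For fully uncorrelated items every eigenvalue equals 1 and nothing is retained; strongly correlated item clusters concentrate variance into eigenvalues above 1, one per cluster.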
In page design, six items were loaded into the factor navigation and three items
into utilisation of images and graphics. In contents design, seven items were loaded into
effective readability and three into utilisation of multimedia technology. In site design,
seven items were loaded into the factors site structure and five into information search.
For each construct, the items with the lowest factor loadings were the following:
prediction of downloading time and guessing what the destination site contains before
clicking for page design; minimising illustrations on higher-level pages and ease of using
the help menu for contents design and URL understandability and user contributed
contents for site design.
The measurement items demonstrated adequate internal consistency and validity.
A majority of items loaded highly (>0.60) on their associated factors, showing convergent
validity. Cronbach's alpha values were used for verifying conceptual reliability or
internal consistency. Cronbach's alpha values for page design, contents design
and site design were 0.759, 0.858 and 0.825, respectively, which is higher than the
0.7 threshold normally considered. We can conclude that the conceptual reliability
is acceptable and the measurement items are reliable.
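The reliability figures above follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with illustrative item scores (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one construct.

    items: one list of respondent scores per questionnaire item
    (all the same length). Sample variance (n - 1 denominator) is
    used throughout.
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items of the construct.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(sample_var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / sample_var(totals))
```

Perfectly consistent items yield alpha = 1; values above the conventional 0.7 threshold, as for the three constructs here, indicate acceptable internal consistency.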

Table 5 Result of factor analysis

Item Factor Factor loading


I-5 Navigation 0.714
I-3 0.694
I-4 0.652
I-1 0.647
I-8 0.573
I-6 0.546
Average 0.637
I-7 Utilisation of image and graphics 0.739
I-9 0.731
I-2 0.607
Average 0.702
II-5 Effective readability 0.799
II-1 0.745
II-6 0.741
II-4 0.702
II-2 0.700
II-3 0.670
II-7 0.605
Average 0.675
II-9 Utilisation of multimedia technology 0.880
II-10 0.834
II-8 0.498
Average 0.737
III-6 Site structure 0.760
III-5 0.749
III-3 0.673
III-4 0.670
III-1 0.623
III-2 0.607
III-9 0.528
Average 0.658
III-8 Information search 0.796
III-12 0.692
III-7 0.649
III-10 0.461
III-11 0.451
Average 0.609

5.4 User satisfaction and intention to revisit


5.4.1 Model fit
The fit of the overall measurement model was estimated by various indices.
MacCallum (1986) and Anderson and Gerbing (1988) suggested proper levels of model
fit, which we adopted for the research model. The ratio of the Chi-square statistic
to degrees of freedom (d.f.) was used, since the Chi-square statistic is sensitive to large
sample size. A value of 2.435 (207/85) was obtained, which satisfied their recommended
level of under 3.0. The Root Mean Square Residual (RMSR) indicates the proportion
of the variance not explained by the model. A value of 0.069 was obtained, which was
within the recommended level of 0.10. This implies a good fit between the observed data
and the proposed model.
The values of the Goodness-of-Fit Index (GFI), Adjusted Goodness-of-Fit Index (AGFI)
and Normalised Fit Index (NFI) were 0.810, 0.732 and 0.807, respectively. However,
these values suggest that the model fit is only moderately acceptable, since the data are
considered to fit a model when the values of GFI, AGFI and NFI are greater than 0.90,
0.80 and 0.90, respectively.
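The fit assessment above amounts to comparing each reported index against its conventional cutoff. A minimal sketch (index values restated from the text, not recomputed from data):

```python
# Reported fit indices (from the text) and conventional cutoffs.
fit = {"chi2/df": 207 / 85, "RMSR": 0.069,
       "GFI": 0.810, "AGFI": 0.732, "NFI": 0.807}
cutoffs = {"chi2/df": ("max", 3.0), "RMSR": ("max", 0.10),
           "GFI": ("min", 0.90), "AGFI": ("min", 0.80), "NFI": ("min", 0.90)}

def check_fit(fit: dict, cutoffs: dict) -> dict:
    """Return {index: True if the index meets its cutoff}."""
    ok = {}
    for name, (kind, cut) in cutoffs.items():
        value = fit[name]
        ok[name] = value <= cut if kind == "max" else value >= cut
    return ok
```

Running the check reproduces the verdict in the text: chi2/df and RMSR pass, while GFI, AGFI and NFI fall short of their cutoffs, i.e., a moderately acceptable fit.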

5.4.2 Reliability and validity


The conceptual reliability, or internal consistency, was assessed by computing Cronbach's
alpha. Cronbach's alpha for user satisfaction was above 0.9 and for intention
to revisit was above 0.6, meeting the 0.6 threshold generally agreed as the
minimum (Nunnally, 1978). Confirmatory factor analysis was conducted to validate the
constructs in terms of convergent validity and discriminant validity.
Convergent validity was tested via factor loadings, which are considered significant
if greater than 0.5 (or 0.7 under a stricter criterion; Fornell, 1982). All the factor
loadings were greater than 0.7, with a majority above 0.9, and all items loaded strongly
on their underlying construct, showing convergent validity (see Tables 6 and 7).
Discriminant validity was tested by examining whether the shared variance between
constructs was lower than the Average Variance Extracted (AVE) of the individual
constructs. Table 6 shows the result of confirmatory factor analysis. The instrument
demonstrates discriminant validity.

Table 6 Result of confirmatory factor analysis

Construct Variable Factor loading Cronbach's alpha AVE


Physical Satisfaction (PS) US1 0.974 0.976 0.915
US2 0.990
US3 0.995
US4 0.860
Achievement Satisfaction (AS) US5 0.840 0.946 0.862
US6 0.991
US7 0.949
Intention to Revisit (IR) IR1 0.887 0.665 0.609
IR2 0.421 (removed)
IR3 0.782

Table 7 Shared variance between constructs

Construct Average Variance Extracted PS AS IR PU


PS 0.915 1.000
AS 0.862 0.416 1.000
IR 0.609 0.199 0.596 1.000
PU 0.837 0.158 0.582 0.671 1.000
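The discriminant-validity test described above can be expressed as a per-pair check: the shared variance between two constructs is compared against each construct's AVE. A minimal sketch using the values from Table 7 (not the authors' code):

```python
# AVE per construct and shared variance per construct pair, from Table 7.
ave = {"PS": 0.915, "AS": 0.862, "IR": 0.609, "PU": 0.837}
shared = {("PS", "AS"): 0.416, ("PS", "IR"): 0.199, ("PS", "PU"): 0.158,
          ("AS", "IR"): 0.596, ("AS", "PU"): 0.582, ("IR", "PU"): 0.671}

def discriminant_check(ave: dict, shared: dict) -> dict:
    """For each pair, is the shared variance below both constructs' AVEs?"""
    return {pair: sv < min(ave[pair[0]], ave[pair[1]])
            for pair, sv in shared.items()}
```

Most pairs satisfy the criterion comfortably; the IR-PU pair is the tightest comparison (0.671 against an IR AVE of 0.609), so a reader applying the strictest form of the test may wish to examine that pair separately.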

5.4.3 Testing hypotheses


Structural Equation Modelling (SEM) (Byrne, 1998) was used to test each
path in the specified causal structure. It was implemented using
LISREL, which is based on maximum likelihood estimation. The research hypotheses
described in Section 3.1 were subjected to path analysis
(see Figure 2).

Figure 2 Path diagram

The hypotheses H1-H4 were all supported with t-values greater than the critical value of
1.96, but H5 was rejected at the 5% significance level (see Table 8). A significant
positive relationship was found between perceived usability and user satisfaction,
implying that the higher the perceived usability, the greater the user satisfaction. Similarly, a
significantly positive relationship was found between PS and AS, implying that
the higher the PS, the higher the AS. In the structural equation model, we can say
that there is an indirect effect between two variables when the second latent variable
is connected to the first latent variable through one or more other latent variables.
Therefore, the perceived usability had an indirect effect on AS through PS. This suggests
that PS is a prior variable affecting the relationship between perceived usability
and AS.
As can be observed from Figure 2, the direct path between perceived usability and
intention to revisit was significant. This suggests that the higher the perceived usability,
the more the intention to revisit. This is an interesting result, i.e., to increase revisiting
of e-government websites, it is more important to increase the perceived usability
because of the direct relationship with the intention to revisit. Furthermore, even though
users were satisfied with achieving their objectives on the websites, we could not
guarantee this could increase revisiting, because there was no significant effect between
AS and the intention to revisit.

Table 8 Result of hypotheses test

Hypotheses Path Partial effects Standard error t-value Result


H1 PU → PS 0.420 0.140 3.001 Accept
H2 PU → AS 0.617 0.095 6.477 Accept
H3 PU → IR 0.464 0.162 2.866 Accept
H4 PS → AS 0.393 0.099 3.974 Accept
H5 AS → IR 0.290 0.156 1.860 Reject
p < 0.05.

The total effect between two latent variables is the sum of any direct effect and all
indirect effects that connect them. Quantitative analysis of the effects of each construct
on intention to revisit reveals that perceived usability, PS and AS have a total effect
of 0.68, 0.11 and 0.29, respectively. For example, the total effect of perceived usability
was computed by summing the products of the path coefficients along the three
possible paths from perceived usability to intention to revisit, i.e.,
0.42 × 0.39 × 0.29 + 0.62 × 0.29 + 0.46. Similarly, the total effect of PS (0.11) was
obtained by multiplying the path coefficient 0.39, which links PS to AS, by 0.29, which
links AS to intention to revisit.
Perceived usability has the highest total effect (0.68) of the three constructs of
perceived usability, PS and AS, and is thus the most important driver of intention to
revisit.
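The total-effect arithmetic above can be reproduced directly from the path coefficients in Table 8 (rounded to two decimals, as in the text); a minimal sketch:

```python
# Path coefficients from Table 8, rounded to two decimals as in the text.
pu_ps, pu_as, pu_ir = 0.42, 0.62, 0.46   # paths from perceived usability
ps_as, as_ir = 0.39, 0.29                # PS -> AS and AS -> IR

# Total effect = direct effect + sum of products along each indirect path.
total_pu = pu_ps * ps_as * as_ir + pu_as * as_ir + pu_ir  # about 0.69
total_ps = ps_as * as_ir                                  # about 0.11
total_as = as_ir                                          # 0.29
```

With these rounded coefficients the PU total comes out near 0.69 rather than the reported 0.68; the small discrepancy presumably reflects the unrounded coefficients used in the original computation.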

6 Conclusion

6.1 Explanation
For a government website to succeed, users should be satisfied when using it and
willing to revisit it as needed. Measurement and evaluation provide
guidelines for improving e-government websites. In this paper, we measured
usability of e-government websites and found major factors affecting the usability using a
user testing method. In addition, we investigated the causal effects among the constructs
of perceived usability, user satisfaction and revisiting intention. The target websites were
the Korean e-government web portal and 18 websites linked by the web portal, which
represents Korean administration departments.
The overall usability score of Korean e-government websites was not high.
In particular, the site design score was low although contents design scored well.
We found six factors affecting usability for e-government websites, which were
navigation, utilisation of images and graphics, effective readability, utilisation of
multimedia technology, site structure and information search.

The perceived usability strongly affected both user satisfaction and intention to
revisit. It also affected intention to revisit directly without the intermediation of user
satisfaction.

6.2 Implications
Although considerable research on e-government success has been conducted, the
evaluation and improvement of e-government websites have received little attention.
E-government websites including the portal site play a major role in the interaction
between government and citizens. Given that the primary goal of e-government is
efficient interaction with citizens via quality websites, relevant methods for evaluating
and measuring e-government websites are important.
The major contribution of this paper was first to adapt the usability concept for
evaluating e-government websites. Even though a reasonably well-established set of
usability factors are generally accepted for commercial website evaluation, we suggested
special factors relevant for e-government website evaluation. Therefore, our approach can
provide guidelines for improving e-government website design and can also be used in
practice as a method for ranking e-government websites. In addition, the user testing
method applied in this paper yields precise evaluation results with relatively few
respondents, which is an advantage over other exploratory approaches.
Second, the concept of intention to revisit e-government websites has rarely been
considered as a factor for e-government website success. The finding of the relationship
between intention to revisit and usability is a major contribution of this paper. Compared
with the Singapore study by Teo et al. (2008-2009), we obtained the same
result in the Korean case: user satisfaction was a strong driver of revisiting
e-government websites. We classified the user satisfaction concept into AS and PS,
which differs from the research of Teo et al. (2008-2009). Our work implies that
perceived usability was the most important driver of intention to revisit for
e-government users, although PS was also a strong factor in increasing revisiting
intention. Some users who tried to achieve their goals on the websites were deterred by a
lack of PS: they had no intention to revisit if they were not physically satisfied
with a website, even though they successfully achieved their goals on it.
In conclusion, our work showed that usability is an important criterion for
e-government website evaluation and that perceived usability leads to intention
to revisit. In addition, our work indicates that satisfying website users physically
is more important than helping them achieve their goals on e-government websites. Because
e-government websites usually contain a larger volume of information than commercial
websites, users may find it hard to reach useful information through navigation.
This sometimes fatigues users, who will, therefore, tend not to revisit
e-government websites.

6.3 Limitations
The first limitation of this paper concerns the number of task items in the user testing.
Of the four types of questionnaire items suggested by Spool et al. (1999), we did not
use comparison of facts or comparison of judgements. Second, we did not
propose a complete method for ranking e-government websites using the significant
factors affecting usability. Doing so requires determining the weights of the usability
factors, which will form the basis of further research. Third, we did not include
trust and service quality in the proposed model. To identify further significant factors
affecting intention to revisit e-government websites, it will be necessary
to develop a structural model including usability, user satisfaction, trust and service
quality.

Acknowledgements

This research was supported by Kyungsung University Research Grants in 2011.

References
Anderson, J.C. and Gerbing, D.W. (1988) 'Structural equation modeling in practice: a review and recommended two-step approach', Psychological Bulletin, Vol. 103, pp.411–423.
Badre, A.N. (2002) Shaping Web Usability: Interaction Design in Context, Addison-Wesley, Pearson Education, UK.
Baker, D.L. (2009) 'Advancing e-government performance in the United States through enhanced usability benchmarks', Government Information Quarterly, Vol. 26, No. 1, pp.82–88.
Bertot, J.C. and Jaeger, P.T. (2006) 'User-centered e-government: challenges and benefits for government web sites', Government Information Quarterly, Vol. 23, No. 2, pp.163–168.
Byrne, B.M. (1998) Structural Equation Modeling with LISREL, PRELIS, SIMPLIS: Basic Concepts, Applications, and Programming, Lawrence Erlbaum, Mahwah, NJ.
Byun, D.H. (2005) 'Evaluation of the Korean e-government web sites focused on usability', The Korean Journal of Information Systems Review, Korea, Vol. 7, No. 1, pp.1–20.
Byun, D.H. (2006) 'Usability factors and variables of e-government web sites', The Korean Journal of Information Policy, Korea, Vol. 13, No. 3, pp.27–48.
Byun, D.H. (2007) 'Perceived usability of e-government web sites affecting the user satisfaction and revisiting', The Korean Journal of Information Systems, Vol. 16, No. 2, pp.51–68.
Byun, D.H. and Jeon, H.D. (2006) 'Factor analysis of the usability for Korean e-government web sites', The Korean Journal of Social Science Research Review, Korea, Vol. 22, No. 1, pp.435–456.
Casaló, L.V., Flavián, C. and Guinalíu, M. (2008) 'The role of perceived usability, reputation, satisfaction, and consumer familiarity on the web site loyalty formation process', Computers in Human Behavior, Vol. 24, No. 2, pp.325–345.
Chen, S.Y. and Macredie, R.D. (2005) 'The assessment of usability of electronic shopping: a heuristic evaluation', International Journal of Information Management, Vol. 25, No. 6, pp.516–532.
Choudrie, J., Wisal, J. and Ghinea, G. (2009) 'Evaluating the usability of developing countries' e-government sites: a user perspective', Electronic Government, Vol. 6, No. 3, pp.265–281.
Delice, E.K. and Güngör, Z. (2009) 'The usability analysis with heuristic evaluation and analytic hierarchy process', International Journal of Industrial Ergonomics, Vol. 39, No. 6, pp.934–939.
Deng, Z., Lu, Y., Wei, K.K. and Zhang, J. (2010) 'Understanding customer satisfaction and loyalty: an empirical study of mobile instant messages in China', International Journal of Information Management, Vol. 30, No. 4, pp.289–300.
Detlor, B., Hupfer, M.E. and Ruhi, U. (2010) 'Internal factors affecting the adoption and use of government websites', Electronic Government, Vol. 7, No. 2, pp.120–136.
Fang, X. and Holsapple, C.W. (2007) 'An empirical study of web site navigation structures' impact on web site usability', Decision Support Systems, Vol. 43, No. 2, pp.476–491.
Flavián, C., Guinalíu, M. and Gurrea, R. (2006) 'The role played by perceived usability, satisfaction and consumer trust on web site loyalty', Information & Management, Vol. 43, No. 1, pp.1–14.
Floropoulos, J., Spathis, C., Halvatzis, D. and Tsipouridou, M. (2010) 'Measuring the success of the Greek taxation information system', International Journal of Information Management, Vol. 30, No. 1, pp.47–56.
Fornell, C. (1982) A Second Generation of Multivariate Analysis: Methods, Vol. 1, Praeger Special Studies, New York.
Gotoh, R. (2009) 'Critical factors increasing user satisfaction with e-government services', Electronic Government, Vol. 6, No. 3, pp.252–264.
Henriksson, A., Yi, Y., Frost, B. and Middleton, M. (2007) 'Evaluation instrument for e-government websites', Electronic Government, Vol. 4, No. 2, pp.204–226.
Hernández, B., Jiménez, J. and Martín, M.J. (2009) 'Key web site factors in e-business strategy', International Journal of Information Management, Vol. 29, No. 5, pp.362–371.
Hung, S.Y., Chang, C.M. and Yu, T.J. (2006) 'Determinants of user acceptance of the e-government services: the case of online tax filing and payment systems', Government Information Quarterly, Vol. 23, No. 1, pp.97–122.
Jaspers, M.W.M. (2009) 'A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence', International Journal of Medical Informatics, Vol. 78, No. 5, pp.340–353.
Lean, O.K., Zailani, S., Ramayah, T. and Fernando, Y. (2009) 'Factors influencing intention to use e-government services among citizens in Malaysia', International Journal of Information Management, Vol. 29, No. 6, pp.458–475.
MacCallum, R. (1986) 'Specification searches in covariance structure modeling', Psychological Bulletin, Vol. 100, pp.107–120.
Nakamichi, N., Sakai, M., Shima, K., Hu, J. and Matsumoto, K. (2007) 'WebTracer: a new web usability evaluation environment using gazing point information', Electronic Commerce Research and Applications, Vol. 6, No. 1, pp.63–73.
Nielsen, J. (1994) 'Heuristic evaluation', in Nielsen, J. and Mack, R.L. (Eds): Usability Inspection Methods, John Wiley and Sons, New York, pp.25–61.
Nielsen, J. (1996) 'Usability metrics: tracking interface improvement', IEEE Software, Vol. 13, No. 6, pp.12–14.
Nielsen, J. (2000) Designing Web Usability: The Practice of Simplicity, New Riders Publishing, Pearson Education, UK.
Nunnally, J. (1978) Psychometric Theory, McGraw-Hill, New York.
OECD (2003) 'The e-government imperative', OECD E-Government Flagship Report, GOV/PUMA(2003)6, 7 March.
Oztekin, A., Nikov, A. and Zaim, S. (2009) 'UWIS: an assessment methodology for usability of web-based information systems', Journal of Systems and Software, Vol. 82, No. 2, pp.2038–2050.
Palmer, J. (2002) 'Web site usability, design, and performance metrics', Information Systems Research, Vol. 13, No. 2, pp.151–167.
Sarmad, A. and Hamid, A. (2009) 'E-government evaluation: citizens' perspective in developing countries', Information Technology for Development, Vol. 15, No. 3, pp.193–208.
Smith, A.G. (2001) 'Applying evaluation criteria to New Zealand government web sites', International Journal of Information Management, Vol. 21, pp.137–149.
Spool, J.M., Scanlon, T., Schroeder, W., Snyder, C. and DeAngelo, T. (1999) Web Site Usability: A Designer's Guide, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
Srivastava, S.C. and Teo, T.S.H. (2007) 'E-government payoffs: evidence from cross-country data', Journal of Global Information Management, Vol. 15, No. 4, pp.20–40.
Srivastava, S.C. and Teo, T.S.H. (2008) 'The relationship between e-government and national competitiveness: the moderating influence of environmental factors', Communications of the Association for Information Systems, Vol. 23, pp.73–94.
Teo, T.S.H., Srivastava, S.C. and Jiang, L. (2008–2009) 'Trust and electronic government success: an empirical study', Journal of Management Information Systems, Vol. 25, No. 3, pp.99–131.
Verdegem, P. and Verleye, G. (2009) 'User-centered e-government in practice: a comprehensive model for measuring user satisfaction', Government Information Quarterly, Vol. 26, No. 3, pp.487–497.
Wescott, C.G. (2001) 'E-government in the Asia-Pacific region', Asian Journal of Political Science, Vol. 9, No. 2, pp.1–24.
West, D.M. (2007) Global E-Government, 2007, Center for Public Policy, Brown University, from http://www.InsidePolitics.org/egovtdata.html
Wu, H., Ozok, A.A., Gurses, A.P. and Wei, J. (2009) 'User aspects of electronic and mobile government: results from a review of current research', Electronic Government, Vol. 6, No. 3, pp.233–251.
Zhang, Y.J. and Hsieh, C. (2010) 'Chinese citizens' opinions on e-government benefits, issues and critical success factors', Electronic Government, Vol. 7, No. 2, pp.137–147.
Zimmerman, D.E. and Muraski, M.L. (1995) 'Usability testing: an evaluation technique', in The Elements of Information Gathering: A Guide for Technical Communications, Scientists and Engineers, Oryx Press, Phoenix, AZ.