
Using Web-Referral Architectures to

Mitigate Denial-of-Service Threats

XiaoFeng Wang, Member, IEEE, and Michael K. Reiter, Senior Member, IEEE
Abstract—The web is a complicated graph, with millions of websites interlinked together. In this paper, we propose to use this web
sitegraph structure to mitigate flooding attacks on a website, using a new web referral architecture for privileged service (WRAPS).
WRAPS allows a legitimate client to obtain a privilege URL through a simple click on a referral hyperlink from a website trusted by the
target website. Using that URL, the client can get privileged access to the target website in a manner that is far less vulnerable to a
distributed denial-of-service (DDoS) flooding attack than normal access would be. WRAPS does not require changes to web client
software and is extremely lightweight for referrer websites, which makes its deployment easy. The massive scale of the web sitegraph
could deter attempts to isolate a website by blocking all of its referrers. We present the design of WRAPS and the implementation of a
prototype system used to evaluate our proposal. Our empirical study demonstrates that WRAPS enables legitimate clients to connect
to a website smoothly in spite of a very intensive flooding attack, at the cost of small overheads on the website's ISP's edge routers.
We discuss the security properties of WRAPS and a simple approach to encourage many small websites to help protect an important
site during DoS attacks.

Index Terms—Denial of service, WRAPS, web sitegraph, capability.

1 Introduction

THE web is a complicated referral graph, in which a node (website) refers its visitors to others through hyperlinks. In this paper, we propose to use this graph (called a
sitegraph [1]) as a resilient infrastructure to defend against
distributed denial-of-service (DDoS) attacks that plague
websites today. Suppose eBay allows its trusted neighbors
(websites linking to it) such as PayPal to refer legitimate
clients to its privileged service through a privileged referral
channel. A trusted client needs to only click on a privileged
referral hyperlink on PayPal to obtain a privilege URL for
eBay, which certifies the client's service privilege. When
eBay is undergoing a DDoS attack and not accessible
directly, routers in its local network will drop unprivileged
packets to protect privileged clients' flows. As such, a client
being referred can still access eBay even during the attack.
Referral relations can be extended over the sitegraph: e.g.,
PayPal may refer its neighbors' clients to eBay. In this way,
a website could form a large-scale referral network to fend
off attack traffic.
The architecture we propose to protect websites against
DDoS attacks, which we refer to as the web referral
architecture for privileged service or WRAPS [2], is built
upon existing referral relationships among websites. Incen-
tives for deployment, therefore, are not a significant barrier,
provided that the overhead of the referral mechanism is
negligible. Indeed, a website that links to others provides a
better experience to its own customers if the links it offers
are effective, and so websites have an incentive to serve
privileged URLs for the sites to which they link. The
overheads experienced by this website's users will be either
nonexistent if the website offers privileged referrals to only
customers that have already authenticated for other reasons,
or minimal if the website will refer any client after it
demonstrates it is driven by a human user (in the limit,
asking the user to pass a reverse Turing test or CAPTCHA
[3]). As we will show, the referrer incurs only negligible
costs in order to make referrals via our technique.
In order to evaluate the likely efficacy of WRAPS, we
implemented it in an experimental network environment
which includes a software router (Click [4]) and Linux-based
clients and servers. Our empirical study shows that WRAPS
enables clients to circumvent a very intensive flooding
attack against a website, and imposes reasonable costs on
both edge routers and referral websites. A limitation of
WRAPS is that it requires modifications to edge routers, as
many capability-based approaches [5], [6] do. However,
unlike those approaches, WRAPS does not require installing
anything on a Web client. We explore the importance of web
sitegraph topology to the efficacy of WRAPS. We also
describe a simple mechanism that helps a website to acquire
referral sites at a negligible cost and helps legitimate clients
to retrieve referral relationships from the Internet.
Effective defense against DDoS attacks is well known to
be a challenging task because of the difficulty in eliminating
the vulnerabilities introduced during the design and
implementation of different network components, which
can be potentially exploited by the adversary. The techni-
que we propose in this paper is aimed at raising the bar,
making a DDoS attack harder to launch and easier to
contain. However, just like many existing approaches, our technique inevitably suffers from certain limitations and leaves several open questions to be addressed in future research. These issues are elaborated in this paper (Section 8).

. X. Wang is with the School of Informatics, Indiana University at Bloomington, Room 209, Informatics Building, 901 E. 10th St., Bloomington, IN 47408-3912. E-mail: xw7@indiana.edu.
. M.K. Reiter is with the Department of Computer Science, University of North Carolina at Chapel Hill, Campus Box 3175, Sitterson Hall, Chapel Hill, NC 27599-3175. E-mail: reiter@cs.unc.edu.

Manuscript received 27 Mar. 2007; revised 23 Jan. 2008; accepted 10 Sept. 2008; published online 1 Oct. 2008.
For information on obtaining reprints of this article, please send e-mail to: tdsc@computer.org, and reference IEEECS Log Number TDSC-2007-03-0043.
Digital Object Identifier no. 10.1109/TDSC.2008.56.
1545-5971/10/$26.00 © 2010 IEEE. Published by the IEEE Computer Society.
The rest of this paper is organized as follows: Section 2
surveys prior work related to WRAPS. Section 3 presents the
assumptions made in our research. The design and im-
plementation of WRAPS are presented in Sections 4 and 5,
respectively. Its efficacy is evaluated through experiments reported in Section 6, and a study of web sitegraph topology is
described in Section 7. Section 8 discusses the limitations of
our technique, and Section 9 concludes this paper.
2 Related Work
In response to the challenge of DDoS, various counter-
measures have been proposed in the last decade [7], [8], [9],
[10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21],
[22], [23], [24], [25]. In this section, we focus on the
mechanisms which are most related to our proposal, in
particular, overlay-based and capability-based approaches,
and compare them with WRAPS.
Overlay networks have been applied to proactively
defend against DoS attacks. Keromytis et al. propose a
secure overlay services (SOS) architecture [26], which has been
generalized by Anderson [27] to take into account different
filtering techniques and overlay routing mechanisms.
Morein et al. further propose to protect web services using
SOS [28]. In these approaches, an overlay network is
composed of a set of nodes arranged in a virtual topology.
The routers around the protected web server admit http
traffic from only trusted locations known to overlay nodes.
A client who wants to connect to the web server has to first
pass a CAPTCHA posed by an overlay node, which then
tunnels the client's connection to an approved location so as
to reach the web server. Other work in this vein includes that
of Adkins et al. [29], which employs the Internet Indirection
Infrastructure i3 to enable the victim to selectively stop
individual flows by removing senders' forwarding pointers.
WRAPS differs from overlay-based approaches in several
important ways. First, these approaches assume the exis-
tence of an overlay infrastructure in which a set of dedicated
nodes collaborate to protect an important website, and need
to modify protocols and client-side software. This could
introduce substantial difficulties for deployment. WRAPS,
however, asks only referral websites to offer a very light-
weight referral service, which allows WRAPS to take
advantage of existing referral relationships on the web to
protect important websites. WRAPS also alters neither
protocols nor client software. Second, overlay routing could
increase end-to-end latency [26], though such overheads can
be significantly reduced using techniques such as topology-
aware overlays [30], [31] and multipath overlays [32]. In
contrast, WRAPS does not change packets routing paths and
thus avoids these overheads. An advantage of overlay-based
approaches is that they do not need to modify edge routers.
Recently researchers have studied capability-based ap-
proaches that authorize a legitimate client to establish a
privileged communication channel with a server using a
secret token (capability). Anderson et al. present an
infrastructure from which a client can obtain a capability
to send packets to a server [6]. Yaar et al. design an approach that utilizes a client's secret path ID as its capability for establishing a privileged channel with a receiver [5]. Yang et al. propose a DoS-limiting Internet architecture which improves on SIFF [33]. Gligor proposes to
implement end-to-end user agreement [34] to protect
connections against flooding attacks on the TCP layer.
Similar to these approaches, WRAPS also uses capability
tokens to identify good traffic. However, WRAPS focuses
on important challenges that have not been addressed
previously. First, the capability distribution service itself
could be subject to DDoS attacks. Adversaries may simply
saturate the link of the server distributing capabilities to
prevent clients from obtaining capabilities. The problem
becomes especially serious in an open computing environ-
ment, where a service provider may not know most of its
clients beforehand. WRAPS distributes capabilities through
a referral network. In many cases, the scale of this referral
network will make it difficult for the attacker to block
clients from being referred to a target website by other sites.
Second, all existing capability-based approaches require
modifications to client-side software, which WRAPS avoids.
A fundamental question for both overlay-based and
capability-based mechanisms is how to identify legitimate
but unknown clients. So far, the only viable solution is a
CAPTCHA, which works only in human-driven activities
and has been subject to various attacks [35]. Although
WRAPS can also use CAPTCHA, our approach instead
relies on socially based trust¹ to identify legitimate clients: a target server will trust new clients on the basis of their referrals from referrers that it trusts. This offers a new and potentially more effective way to protect an open system
against DoS attacks. For instance, suppose a group of
universities agree to offer referral service to each other. Once
one school's website is subject to a flooding attack, the trusted users
of other schools (those with proper university accounts) can
still access the victim's website through WRAPS.
The idea of embedding a secret token inside part of an IP
address and port number (as we do here) has also appeared
in a DoS defense mechanism proposed by Xu and Lee [36].
However, they use this approach only as an extension of SYN cookies, for the purpose of detecting flows with spoofed IP addresses. Our approach can also be viewed as
an extension of network address translation (NAT), taking
part of the IP and port number fields to hold an authoriza-
tion secret.
3 Assumptions
We assume that adversaries can modify at most a small
fraction of the legitimate packets destined for the target
website. Attackers capable of tampering with these packets
on a large scale do not need to flood the target's bandwidth.
Instead, they can launch a DDoS attack by simply destroy-
ing these packets.
1. The term trust in this paper refers to the belief that a website or a
client is not controlled by an adversary. Such trust comes from knowledge
about the intentions of a party and confidence in its ability to protect itself
from compromise.
We assume that adversaries cannot eavesdrop on most legitimate clients' flows. In practice, monitoring a large fraction of legitimate clients' flows is difficult in wide area networks.² However, WRAPS still works when adversaries are capable of eavesdropping on some privileged clients' flows. In this case, a defender could control the damage caused by these clients through standard rate limiting.
We assume that routers inside a websites protection
perimeter are trusted. In practice, routers usually enjoy
better protection than end hosts. We further assume that a
DoS flooding attack on a website does not significantly
affect the flows from the website to clients. This is generally true of today's routers, which employ full-duplex links and switched architectures.
4 Design of WRAPS
In WRAPS, a website grants a client greater privilege to access its service by assigning to it a secret fictitious URL, called a privilege URL (Section 4.1), with a capability token embedded in part of the IP and port number fields. Through that URL, the client can establish a privileged channel with that website (referred to as the target website) even in the presence of flooding attacks.
A client may obtain a privilege URL either directly from
the target website or indirectly from the websites trusted
neighbors. A website offers a client a privilege URL if the
client is referred by one of the sites trusted neighbors, or is
otherwise qualified by the sites policies that are used to
identify valued clients, for example, those who have paid or
who are regular visitors. A qualified client will be redirected
to a privilege URL generated automatically using that client's identity, service information, and a server secret.
A privilege URL leads its holder to the target website through a protection mechanism (Section 4.2) which protects the website from unauthorized flows. The border of this mechanism is the site's ISP's edge routers, which classify traffic into privileged and unprivileged flows, and translate fictitious addresses in privilege URLs into the website's real address. Within the protection perimeter, routers protect privileged traffic by dropping unprivileged packets during congestion.

A neighbor website refers a trusted client to the target website's privileged service. The referral is done through a simple proxy script running on the referrer site, from which the client acquires a redirection instruction leading to the privilege URL. This is discussed in Section 4.3.
4.1 Privilege URLs
Resources available on the Internet are located via Uniform Resource Locators (URLs). An http URL has the following format: http://<host>:<port>/<urlpath>, where the host and port fields can be the (IP, port) pair of an http service on the Internet, which is visible to routers.
In WRAPS, we utilize privilege URLs to set up priority
channels. These URLs are fictitious because they do not
directly address a web service. Instead, they contain secret
capability tokens which are verified by edge routers for
setting priority classes, and unambiguously translated by
these routers to the real location of the service.
A privilege URL hides a capability token inside the suffix
of the destination IP field (last one or two octets) and the
whole destination port field.
The following fields are present in the token:
. Key bit (1 bit). This field indicates the authentication key currently in use (see Section 4.2).
. Priority field. This is an optional field which allows the website to define more than one service priority. Here, we use one priority class to describe our approach for clarity of presentation.
. Message authentication code (MAC). A MAC prevents adversaries from forging a capability. The algorithm computing a MAC over a message takes as inputs a secret key k and the message to produce an n-bit tag. MAC generation is ideally based on a cryptographically strong pseudorandom function (PRF), so that the probability of computing the right tag without knowing k is negligibly larger than 2^-n. For a privileged client i, its MAC is denoted by MAC_k(IP_i), where IP_i is i's IP address.
Encoding a capability token into the destination IP and
port fields limits the length of MAC, especially for IPv4. For
example, a small network using Class C IP may only be able
to support a 16- to 20-bit MAC. This seems to make WRAPS
vulnerable, allowing an adversary to forge a capability
token through a brute-force search. As we will show,
however, WRAPS contains a mechanism that effectively
mitigates this threat: any adversary without global eaves-
dropping capability will be unable to confirm its guess of a
MAC value. We present a detailed security analysis in
Section 4.4.
It is possible that some clients' fictitious (IP, port) pairs coincide with a real application in the local network.
However, this happens with a very small probability with
the MAC in place. One approach to prevent this problem
with certainty is to reduce the address range that can be
mapped to the web server, i.e., by reducing the MAC length,
so that this range does not intersect other servers. Another
choice is to use the most significant bit on the port field as a
token indicator, by which edge routers can identify the
packets that need address translation.
4.2 Protection Mechanism
A website (the target) is protected by the edge routers of its ISP or organization, the routers inside its local network, and a firewall directly connected to or installed on the site's web server.

The target website shares a secret long-term key k with its edge routers on the protection perimeter. Using this key, the website periodically updates to all its edge routers a shared verification key. We call a period between updates a privilege period. Specifically, the verification key used in privilege period t is computed as k_t = F_k(t), where F is a PRF family indexed by the key k.
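The per-period key schedule can be sketched as follows, instantiating the PRF F with HMAC-SHA256 (our own choice; the paper does not fix a particular PRF):

```python
import hmac
import hashlib

def verification_key(long_term_key: bytes, period: int) -> bytes:
    """k_t = F_k(t): derive the verification key for privilege period t
    from the long-term key k shared between the website and its edge routers."""
    return hmac.new(long_term_key, period.to_bytes(8, "big"), hashlib.sha256).digest()
```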
2. Recent work shows that strategic placement of monitors on the Internet could significantly improve the chance of seeing the traffic between a random source and a random destination [37]. However, this approach still requires eavesdropping on the links among a large number of autonomous systems.
The http server of the website listens to two ports, one
privileged and one not. The local firewall controls access to
those ports. Only the port corresponding to the unprivileged
traffic, typically port 80, is publicly accessible. The other port
can be accessed only by packets with source IP addresses
explicitly permitted by the firewall (as instructed by the web
server); this port is called the privilege port.
Below, we describe a protection mechanism which
allows a client to acquire a privilege directly from a website
and establish a privileged channel with that website. The
mechanism is also illustrated in Fig. 1:
. Privilege acquisition
1. A client that desires privileged service from a
website first applies for a privilege URL online.
This application process is site-specific and so
we do not discuss it here, but we presume that
the client does this as part of enrolling for site
membership, for example, and before an attack
is taking place.
2. In period t, if the website decides to grant privilege to a client i, it first interacts with the firewall to put i's source IP address IP_i onto a whitelist of the privilege port. It then constructs a privilege URL containing a capability token tk_i = b_t || p_i || MAC_{k_t}(IP_i), where b_t is the one-bit key field for period t and p_i is a priority class (say, for illustration, also 1 bit in length). The website uses standard http redirection to redirect the client to this privilege URL.
. Privileged channel establishment
1. Edge routers drop packets addressed directly to the privilege port of the website.
2. According to the position and the length l of a capability token, an edge router extracts a string s of l bits from every TCP packet. Denote by s[c1..c2] the substring from the c1-th bit to the c2-th bit of s. A router processes a packet from a client i as follows:
   If s[3..l] = MAC_{k_t}(IP_i):
      Translate the fictitious destination IP address to the target website's IP address.
      Set the destination port number of the packet to the privilege port.
      Forward the packet.
   Otherwise:
      Forward the packet as an unprivileged packet.
3. Routers inside the protection perimeter forward the packets toward the target website, according to the ports of these packets.
4. Upon receiving a packet with a source IP address IP_i destined for the privilege port, the firewall of the website checks whether IP_i is on the port's whitelist. If not, the firewall drops the packet.
5. Using the secret key, the web server or firewall translates the source IP and port of every packet emitted from the privilege port to the fictitious (IP, port) containing the capability token. Note that no state information except the key needs to be kept during this process.
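The router-side check in step 2 can be sketched as below, reusing the illustrative 24-bit token layout (1-bit key field, 1-bit priority, 22-bit truncated HMAC over the source IP); function names and the exact layout are our assumptions, not the paper's specification.

```python
import hmac
import hashlib
import ipaddress

MAC_BITS = 22  # assumed token layout: 24 bits = key bit || priority || 22-bit MAC

def expected_mac(key_t: bytes, src_ip: str) -> int:
    """Recompute the truncated MAC over the client's source IP with the period key."""
    digest = hmac.new(key_t, ipaddress.ip_address(src_ip).packed, hashlib.sha1).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - MAC_BITS)

def classify(src_ip: str, dst_ip: str, dst_port: int, key_t: bytes,
             real_ip: str, priv_port: int):
    """Edge-router sketch: verify the MAC bits of the token hidden in the
    destination (last octet, port); on success rewrite to the website's real
    address and privilege port, otherwise forward unchanged as unprivileged."""
    token = (int(dst_ip.rsplit(".", 1)[1]) << 16) | dst_port
    if token & ((1 << MAC_BITS) - 1) == expected_mac(key_t, src_ip):
        return real_ip, priv_port, "privileged"
    return dst_ip, dst_port, "unprivileged"
```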
Discussion. To allow the update of privileged URLs to clients, there exists a transition period between privilege periods t and t+1 during which both k_t and k_{t+1} are in use. This transition period could be reasonably long, to allow most privileged clients (who browse that website sufficiently frequently) to visit. A website can also embed in its web pages a script or a Java applet that automatically reconnects to the server within that period.⁴ Once visited by a privileged client i, the website generates a new privilege URL using k_{t+1} and redirects the client to that URL. To communicate to edge routers which key is in use, both the website and the edge routers keep 1-bit state for privilege periods. A period is marked with either 0 or 1, and the periods marked with 0 alternate with the periods marked with 1. The website sets the 1-bit key field of a privilege URL to the mark of its corresponding privilege period. Edge routers identify the verification key in use according to the key field in a packet's IP header.
A website can remove a client's privilege by not updating its privilege URL. Standard rate-limiting technology such as stochastic fair queueing [38] can also be used to control the volume of traffic produced by individual privileged clients in case some of them fall prey to an adversary, as capability-based approaches do [33]. An option for blocking a misbehaving privileged client within a privilege period is to post that client's IP to a blacklist held by the website's ingress edge routers. This does not have to be done in a timely manner, as the rate-limiting mechanism is already in place. In addition, an adversary cannot fill the router blacklist using spoofed IPs because the blacklist
3. One way to do this is to set priority queues for different (IP, port) pairs
of the website, which can be easily configured in a modern router.
4. Our approach does not require the clients to enable mobile code in
their browsers. When mobile code is not supported, the user needs to
interact with the website manually during the transition period.
Fig. 1. (a) Privilege acquisition and (b) privileged channel establishment, where tk_i is client i's capability token and 22433 is the privilege port. Here, we use a Class C network as an example.
here only records the misbehaving privileged clients who
are holding the correct capability tokens. This blacklist is
emptied periodically after the router updates its verifica-
tion key.
A trick a compromised privileged client can play is to craft packets with TTLs too small to reach the target website, in order to cause congestion on the path toward that website. This attack is still subject to rate limiting if the congestion happens within the protection perimeter of our mechanism. However, since the packets used in the attack will not reach the target website, the website's mechanisms for detecting malicious clients will be evaded. Our solution to this threat is to let the edge routers refill the TTL value of every packet destined for a host within the ISP's network when it enters the network from the outside. The refill sets the TTL to 255 so as to ensure that the packet will reach its destination. Note that this approach will not lead to immortal packets, because the routers within the network will continue to decrement packets' TTLs as usual, and such a refill only happens at the time that the packet passes an edge router's inbound interface, which it should not traverse again.
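The refill rule is stateless and applies only on the inbound edge interface. A minimal sketch, with the packet modeled as a dict and all names ours:

```python
def refill_ttl(packet: dict, inbound_edge_interface: bool) -> dict:
    """Reset the TTL to 255 only when the packet enters the ISP's network from
    the outside, so short-TTL probes cannot expire inside the protection
    perimeter; interior routers still decrement the TTL as usual, so packets
    cannot loop forever."""
    if inbound_edge_interface:
        packet = dict(packet, ttl=255)
    return packet
```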
4.3 Referral Protocol
When a website is under a flooding attack, legitimate but as-
yet-unprivileged clients will be unable to visit that site
directly, even merely to apply for privileged service. The
central idea of WRAPS is to let the trusted neighbors of the
website refer legitimate clients to it, even while the website
is under a flooding attack. The target website will grant
these trusted neighbors privilege URLs, and may allow
transitive referrals: a referrer can refer its trusted neighbors
clients to the target website. We discuss the relation between
web sitegraph topology and the deployment of WRAPS in
Section 7.
A privilege referral is done through a simple proxy script
running on the referrer website. A typical referrer website is
one that linked to the target website originally and is willing
to upgrade its normal hyperlink to a privilege referral link, i.e.,
a link to the proxy. This proxy could be extremely simple,
containing only a few dozen lines of Perl code as we will
discuss in Section 5, and very lightweight in performance.
The proxy communicates with the target server through the referrer's privilege URL to help a trusted client acquire its own privilege URL. The target server publicizes an online list of all these referrers and the neighbors it trusts (Section 7.5), and only accepts referrals for privileged service from these referrers.

Only clients trusted by referrers are given privilege URLs
for the target server. Such trusted clients should be those
authenticated to the referrer website in a way that gives the referrer confidence that the client is not under the
control of an attacker. For example, the referrer might
employ a CAPTCHA to gain confidence that the client is
driven by a human user. With the proliferation of Trusted
Computing technologies (e.g., [39]), the referrer could
obtain an attestation to the software state of the client,
and form a trust decision on that basis. Similarly, if the
client computer can be authenticated as managed by an
organization with trusted security practices, then this could
provide additional confidence. The referrer might even
trust the client on the basis of its referral from a trusted
neighbor, though from the point of view of the target server,
admitting transitive referrals in this way expands the trust
perimeter substantially. A middle ground might be to give
directly referred clients a higher priority than indirectly
referred ones.
Below, we describe a simple referral protocol, which is
illustrated in Fig. 2:
1. A client i trusted by a referrer website R clicks on a privilege referral link offered by R, which activates a proxy on the referrer site.
2. The proxy generates a reference, including the client's IP address IP_i and the priority class p_i it recommends, and sends the reference to the target server through a privileged channel established by purl_R, R's privilege URL for the target website. As we discussed in the previous section, edge routers of the target website will authenticate the capability token in purl_R.
3. Upon receiving R's reference, the target website checks its list of valid referrers. If R's IP does not appear on the list, the website ignores the request. Otherwise, it generates a privilege URL purl_i for client i using IP_i and p_i, embeds it in an http redirection command, and sends the command to referrer R.
4. The proxy of referrer R forwards the redirection command to client i.
5. Running the redirection command, client i's browser is automatically redirected to purl_i to establish a privileged channel directly with the target website.
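The target-side handling of a reference (step 3) can be sketched as follows. The referrer list, the token layout (1-bit key field, 1-bit priority, 22-bit truncated HMAC over the client's IP), and all names and addresses are illustrative assumptions, not the paper's exact protocol:

```python
import hmac
import hashlib
from typing import Optional

TRUSTED_REFERRERS = {"203.0.113.5"}  # published referrer list (example value)
MAC_BITS = 22

def handle_reference(referrer_ip: str, client_ip: str, priority: int,
                     key_t: bytes, key_bit: int = 1,
                     net_prefix: str = "10.0.0") -> Optional[str]:
    """Accept a reference only from a listed referrer, then mint the client's
    privilege URL from its IP, the recommended priority, and the period key."""
    if referrer_ip not in TRUSTED_REFERRERS:
        return None  # step 3: ignore requests from unlisted referrers
    digest = hmac.new(key_t, client_ip.encode(), hashlib.sha1).digest()
    mac = int.from_bytes(digest[:4], "big") >> (32 - MAC_BITS)
    token = (key_bit << (MAC_BITS + 1)) | (priority << MAC_BITS) | mac
    return f"http://{net_prefix}.{token >> 16}:{token & 0xFFFF}/"
```

In a deployment, the returned privilege URL would be wrapped in an http redirection command and relayed to the client through the referrer's proxy (steps 3-4).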
Discussion. An important issue is how to contain trusted but nevertheless compromised referrers who might introduce many zombies to deplete the target website's privileged service.⁵ One mitigation is to prioritize the privileged clients: those who are referred by a highly trusted referrer⁶ have higher privileges than those from a less trusted referrer. WRAPS assures that the high-priority traffic can evade flooding attacks on the low-priority traffic. Within one priority level, WRAPS rate limits privileged clients' traffic. The target website can also fairly allocate referral quotas among its trusted neighbors. This, combined with a relatively short privilege period for clients referred from referrer websites, could prevent a malicious referrer
5. A compromised referrer can also frame its clients and have them blacklisted by the target website. However, this threat is less serious than the introduction of malicious clients, as it only affects those using the referrer's service.
6. Such a referrer could be a party which is known to implement strong
security protections and therefore is less likely to be compromised.
Fig. 2. Referral protocol, where tk_R is referrer R's capability token, tk_i is client i's capability token, and 22433 is the target's privilege port. Here, we use a Class C network as an example.
from monopolizing the privileged channel. The target
website may update a reputation value for each of its
trusted neighbors. A detected malicious privileged client will be traced back to its referrer, whose reputation will be negatively affected.
4.4 Brute-Force Attacks on a Short Capability
An adversary may perform an exhaustive search on the
short MAC in a privilege URL. Specifically, the adversary
first chooses a random MAC to produce a privilege URL for
the target website. Then, it sends a TCP packet to that URL.
If the target website sends back some response, such as syn-
ack or reset, the adversary knows that it made a correct
guess. Otherwise, it chooses another MAC and tries again.
This threat is nullified by our protection mechanism. The firewall of the target website keeps records of all the website's privileged clients and only admits packets to the privilege port from these clients. If the adversary uses its real IP address for sending probe packets, the website will not respond to the probe unless the adversary has already become the site's privileged client. If the adversary spoofs a privileged client's IP address to penetrate the firewall, the website's response only goes to that client, not the adversary. Therefore, the adversary will never know whether it has made a correct guess.
We must stress here that this approach does not introduce new vulnerabilities. Only a client trusted by the target website directly, or referred by trusted neighbors, will have its IP address on the firewall's whitelist. The storage for the whitelists is negligible for a modern computer: recording a million clients' IP addresses takes only 4 Mbytes. Therefore, our approach does not leave an open door to other resource-depletion attacks. The performance of filtering can also be scaled using fast searching algorithms, for example, Bloom filters [40] that are capable of searching for entries at link speed. To prevent a compromised referrer from attacking the whitelist, a target website can set a quota on the number of referrals a referrer can make in one privilege period.
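A Bloom filter over the whitelisted client IPs might look like the following sketch; the filter size, number of hash positions, and hash construction are our own illustrative choices (false positives are possible but can be made rare by sizing the filter appropriately):

```python
import hashlib

class BloomWhitelist:
    """Bit-array Bloom filter over client IPs; k derived hash positions per entry."""
    def __init__(self, m_bits: int = 1 << 20, k: int = 4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, ip: str):
        # Derive k positions by hashing the IP with k distinct prefixes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{ip}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, ip: str) -> None:
        for p in self._positions(ip):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, ip: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(ip))
```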
The adversary may attempt to cause information leaks
using probe packets with short TTLs: those packets will not
reach the target website; however, a router that drops the
packet within our protection perimeter will bounce back an
ICMP error datagram, from which the attacker can tell
whether the packet's IP/port pair has been translated. The
problem can be fixed by refilling the TTL values of the
packets when they enter the ISPs LAN to ensure that they
will reach their destinations, as described in Section 4.2.
A more serious threat comes from idle scanning [41], a technique that enables the adversary to scan a target server through a legitimate client. Some operating systems maintain a global counter for generating IP identifiers (IPIDs) for their packets. This allows an attacker to probe the target using the client's IP address: if the server responds, the client will bounce back an RST packet; this can be inferred by probing the client with two packets (such as ICMP echoes), one before the RST and the other after it, and observing the increment of the IPID values in the client's responses to the probes. Such a technique is not very easy to use in practice, as it requires that the client not send out other packets between the two probes. Nevertheless, it poses a threat to WRAPS in that it enables an adversary to search for a privileged client's capability. Note that the threat is not specific to our approach. Because idle scanning poses a more general threat outside our context, system administrators have been advised to properly set their firewalls to fend off the attack [41]. Many OS vendors have also patched their systems to make them less vulnerable to such a threat: for example, Solaris and Linux adopt peer-specific IPID values, and OpenBSD can randomize its IPID sequence [41]. Given these efforts, we expect that this attack will become less likely in the future. In addition, the web server protected by WRAPS can detect clients with a vulnerable OS when they register for privileged service or are referred by referrers, and take proper measures such as notifying their owners of the threat, setting a bandwidth quota for these clients, or even making them less privileged than those with a patched OS, so as to protect the latter.
To evaluate the design of WRAPS, we implemented it within an experimental network with Class C IP addresses. We utilized SHA-1 to generate a MAC [42] for privilege URLs. The web server we intended to protect ran Linux and a joc/c http server configured to listen to both ports 80 and 22433; the latter was the privilege port. We built the protection mechanism into a Click software router [4], [43] and into the TCP/IP protocol stack of the target's kernel, using Linux netfilter [44] as the firewall. We constructed the referral protocol on the application layer, on the basis of simple cgi programs running on the referrer and target websites. In this section, we elaborate on these components.
5.1 Implementation of the Protection Mechanism
WRAPS depends on the target's ISP's edge routers to classify inbound traffic into privileged and unprivileged, and to translate the fictitious addresses of privileged packets to the real location of the service. It also depends on routers inside the website's local network to protect privileged flows. We implemented these functions in a Click software router.
Click is a modular software architecture for creating routers, developed by MIT, Mazu Networks, ICSI, and UCLA [45]. The Click software can convert a Linux server into a flexible and fairly fast router. A Click router is built
from a set of packet processing modules called elements.
Individual elements implement certain router functions
such as packet classification, queuing, and scheduling [4],
A router is configured as a directed packet-forwarding graph, with elements on its vertices and packet flows traversing its edges. A prominent property of Click is its modularity, which makes a Click router extremely easy to extend.

7. A weakness of Bloom filters is that they admit false positives, which could introduce privileged attack traffic. However, this requires that the attacker physically control an unprivileged client whose IP address happens to collide with that of a privileged client in the Bloom filter. This threat can be mitigated by enforcing a large ratio between the memory dedicated to the Bloom filter and the number of clients. For example, storing 1 million IP addresses in a 1-Gbyte filter with five hash functions gives a false-positive probability below 10^-16. In addition, our approach imposes rate limiting on privileged traffic and therefore can tolerate a small number of malicious privileged clients (Section 4.2).
We added WRAPS modules to an IP router configuration
[4], [43] which forwards unicast packets in compliance with
the standards [46], [47], [48].
The WRAPS elements planted in the standard IP forwarding path are illustrated in Fig. 3. We added five elements: IPClassifier, IPVerifier, IPRewrite, Priority queue, and PrioSched. IPClassifier classifies all inbound packets into three categories: packets addressing the website's privilege port 22433, which are dropped; TCP packets, which are forwarded to IPVerifier; and other packets, such as UDP and ICMP, which are forwarded to the normal forwarding path. IPVerifier verifies every TCP packet's capability token, embedded in the last octet of the destination IP address and the 2-octet destination port number. Verification of a packet invokes the MAC over a 5-byte input (four octets for the IP, one for other parameters) and a 64-bit secret key. Packets carrying correct capability tokens are sent to IPRewrite, which sets a packet's destination IP to that of the target website and its destination port to 22433. Unprivileged packets follow the same path as UDP and ICMP traffic.
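The token check performed by IPVerifier can be sketched as follows. This is a minimal model rather than the router code: HMAC-SHA-1, the parameter byte, the placeholder key, and the exact token layout (one octet in the fictitious destination IP plus the 2-octet port) are our reading of the description above.

```python
import hashlib
import hmac
import struct

KEY = b"\x00" * 8  # placeholder 64-bit secret key shared by router and server

def capability_token(client_ip: bytes, params: int) -> bytes:
    """MAC over the 5-byte input: 4 octets of client IP + 1 parameter octet.
    The first 3 digest octets fill the last octet of the fictitious
    destination IP and the 2-octet destination port."""
    digest = hmac.new(KEY, client_ip + bytes([params]), hashlib.sha1).digest()
    return digest[:3]

def privilege_destination(prefix: bytes, client_ip: bytes, params: int):
    """Fictitious (IP, port) pair encoded in a privilege URL A.B.C.t."""
    tok = capability_token(client_ip, params)
    ip = prefix + tok[:1]                   # A.B.C.t
    port = struct.unpack(">H", tok[1:])[0]  # token bits in the port field
    return ip, port

def verify(prefix: bytes, dst_ip: bytes, dst_port: int,
           client_ip: bytes, params: int) -> bool:
    """IPVerifier's check: recompute the token and compare."""
    return privilege_destination(prefix, client_ip, params) == (dst_ip, dst_port)
```

On a match, IPRewrite would then restore the real destination IP and port 22433; a mismatch sends the packet down the unprivileged path.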
Both privileged and unprivileged flows are processed by the standard routing elements. Then, privileged packets are queued into a high-priority queue while other packets flow into a low-priority queue. A PrioSched element multiplexes packets from these two queues onto the output network interface card (NIC). PrioSched is a strict priority scheduler, which always tries the high-priority queue first and then the low-priority one, returning the first packet it finds [4], [43]. This ensures that privileged traffic receives service first. Though we explain our implementation here using only two priority classes, the whole architecture can be trivially adapted to accommodate multiple priority classes.
On the target server front, netfilter is used to filter incoming packets. Netfilter is a packet-filtering module inside the Linux kernel; users can define filtering rulesets for the module through iptables. In our implementation, the target website first blocks all access to its privilege port. Whenever a new client obtains privilege, a cgi script running on the web server adds a new filter rule through iptables, explicitly permitting that user's access to the privilege port. Direct use of iptables may degrade the performance of netfilter when there are many privileged clients. Our general approach, however, could still scale well by replacing iptables with a fast filter such as a Bloom filter. Research on this issue is left as future work.
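The per-client rule management described above might look like the following iptables invocations. This is an illustrative sketch only: the paper does not give the actual script, and the chain layout and the example client address (a documentation IP) are our assumptions.

```shell
# Block all access to the privilege port by default (done once at startup).
iptables -A INPUT -p tcp --dport 22433 -j DROP

# When a client (e.g., 192.0.2.17) obtains privilege, the cgi script inserts
# a rule ahead of the DROP rule, whitelisting that client's address.
iptables -I INPUT -p tcp -s 192.0.2.17 --dport 22433 -j ACCEPT

# When the privilege period ends, the whitelisting rule is removed again.
iptables -D INPUT -p tcp -s 192.0.2.17 --dport 22433 -j ACCEPT
```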
To establish a privileged connection, packets from the target web server to a privileged client must bear the fictitious source address and port in that client's privilege URL. In our implementation, we modified the target server's Linux kernel to monitor the privilege port (22433). Whenever a packet is emitted with that source port, the kernel employs the secret key and the MAC to generate a capability token, and embeds this token into the last octet of the source IP field and the source port field. This address translation can also be done in the firewall, and configured to support more than two priority classes.
5.2 Implementation of the Referral Protocol
The referral protocol is performed by two simple scripts running on the referrer and the target websites. The script on the referrer acts as a proxy, which is activated through privilege referral links accessible to the trusted clients of that website. A privilege referral link is a simple replacement of a normal hyperlink. For example, a normal hyperlink to eBay (http://www.ebay.com) can be replaced with a privilege hyperlink on PayPal (http://www.paypal.com/cgi-bin/proxy.pl?http://www.ebay.com), where proxy.pl is the proxy written in Perl. By clicking on that hyperlink, a client triggers the proxy, which in turn invokes a cgi script on the target website through the referrer's privilege URL, conveying the client's source IP address as a parameter.
The cgi script on the target website first checks whether the proxy is entitled to make such a referral by searching its referrer list. If the referrer is on the list, the script inserts a filtering rule through iptables to permit access by the client being referred, generates a privilege URL, and then sends the referrer proxy a new web page containing an http redirection command to the new URL. Here is an example of the redirection command: <meta HTTP-EQUIV="Refresh" CONTENT="1; URL=http://A.B.C.t/index.htm">, where t is a capability token and A.B.C is the IP prefix of the target website.
Receiving the web page with the redirection command, the proxy relays it to the client. Interpreting the page, the client's browser is automatically redirected to the target website through a privileged channel, and communicates with the website directly afterward.
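In outline, the proxy-to-target exchange looks like the following sketch. The names here are hypothetical: the actual referrer script is a Perl proxy and the target runs a cgi program, while refer.cgi, the parameter name client, and the example addresses are our illustrative choices.

```python
from urllib.parse import urlencode

# Hypothetical privilege URL the referrer holds for the target website.
TARGET_REFERRAL_CGI = "http://A.B.C.t/cgi-bin/refer.cgi"

def referral_request(client_ip: str) -> str:
    """URL the proxy invokes on the target, conveying the client's IP."""
    return TARGET_REFERRAL_CGI + "?" + urlencode({"client": client_ip})

def redirection_page(privilege_url: str) -> str:
    """Page the target returns; relayed to the client, whose browser then
    follows the Refresh command onto the privileged channel."""
    return f'<meta HTTP-EQUIV="Refresh" CONTENT="1; URL={privilege_url}">'
```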
In this section, we report our empirical evaluation of WRAPS in an experimental network composed of a set of Linux servers, each with up to a 2.8-GHz CPU and 1 Gbyte of memory. The objectives of this study are: 1) to evaluate the overheads of WRAPS, both on edge routers (Section 6.1) and on referrer websites (Section 6.3), and 2) to test the performance of WRAPS under DoS flooding attacks (Section 6.2). We elaborate our experimental results in the following sections.
6.1 Overhead on Edge Router
The target server's ISP's edge routers play an important role in WRAPS, undertaking the tasks of classifying packets, verifying capability tokens, and translating addresses. An important question is whether the overheads of these tasks, especially computing a MAC for every TCP packet, can be afforded by an edge router. We investigated this problem by comparing the packet-forwarding capabilities of a Click router with and without WRAPS elements.
Fig. 3. WRAPS elements on a Click packet forwarding path.
In this experiment, we connected two computers to a router (a computer installed with the Click software router) through Gigabit NICs. One of them generated a constant, high-rate flow of 64-byte UDP packets, and the other received these packets from the router. We utilized Click's UDP traffic generator [43], [45] as the traffic source; it works in the Linux kernel and is capable of generating more traffic, more evenly, than a user-level program. Our original design does not verify UDP; however, to test the performance of IPVerifier, we had the router check the MAC on every UDP packet in this test, according to the last octet of the destination IP address and the port number.
Our experimental results are presented in Fig. 4. Using a Pentium-4 2.6-GHz computer as the router, we observed a maximal forwarding rate of 350k packets per second (pps) over the standard packet-forwarding path. Surprisingly, this rate did not change after we added the WRAPS elements. This could result from hardware constraints: the 1-Gbyte memory and the PCI bus of the router might become performance bottlenecks before the CPU did. When this happens, a router reaching its performance limits might still have sufficient CPU cycles to check the MAC for every packet. Such a conjecture was confirmed after we moved the software router to a slower computer (Pentium-3 800 MHz). This time, a difference emerged: we observed 290 kpps for the normal forwarding path and 220 kpps for the one with WRAPS elements.
The experimental results show that verification of the MAC affected the performance of edge routers. However, such overheads seem affordable. On a Pentium-3 computer, running SHA-1 on a 5-byte input and an 8-byte key takes about 1.39 microseconds, which is reduced to 0.33 microseconds on a Pentium-4 system. Since IPVerifier applies the MAC to just a few bytes per packet, the rate of verification should keep up with the forwarding rate.
That said, our implementation of SHA-1 is not particu-
larly fast. An optimized AES program written in assembler
code is reported to be able to work at 17 cycles/byte over a
Pentium-3 system [49]. In our setting, this might lead to over
one million pps even with a Pentium-3. Another option is
UMAC [50], a very fast MAC algorithm. A hardware
implementation of this scheme is reported to achieve a
throughput of 79 Gbps [51]. Note that WRAPS requires computing a MAC over only a few bytes, not the whole packet. Therefore, we tend to believe that our scheme could work at link speed given the right choice of MAC and hardware implementation.
6.2 Performance under Flooding Attacks
We evaluated the performance of WRAPS under intensive bandwidth-exhaustion attacks. Our experimental setting includes six computers: a Pentium-4 router was linked to three Pentium-4 attackers and a legitimate client through four Gigabit interfaces, and to a target website through a 100-Mbit interface. We deliberately used Gigabit links to the outside to simulate a large group of ISPs' edge routers which continuously forward attack traffic to a link on the path toward the website. In addition, though our test was on the link between a router and an end host, given that priority queues were used to protect both router-to-router and router-to-host privileged traffic, we believe the same results also apply to the setting where the congested link is one connecting two inside routers.
Under the above network setting, we put WRAPS to the test under UDP and TCP flooding attacks. In the UDP flooding test, we utilized Click's UDP generators to produce attack traffic. The three attackers attached to the router through Gigabit interfaces were capable of generating up to 1.5 Mpps of 64-byte UDP flows, which amounts to 0.75 Gbps. On the other hand, the 100-Mbit channel to the web server can only sustain up to 148,800 pps of 64-byte packets, considering the 64-bit preamble and 96-bit interframe gap. The flooding rate could thus be set to 10 times the victim's bandwidth. Evidently, these attackers can easily saturate the victim's link.
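The per-packet arithmetic behind the 148,800 pps figure works out as follows: each 64-byte frame occupies the wire together with an 8-byte (64-bit) preamble and a 12-byte (96-bit) interframe gap.

```python
FRAME = 64          # bytes, minimum Ethernet frame
PREAMBLE = 8        # bytes (64-bit preamble)
IFG = 12            # bytes (96-bit interframe gap)
LINK = 100_000_000  # bits per second

bits_per_packet = (FRAME + PREAMBLE + IFG) * 8  # 672 bits on the wire
max_pps = LINK // bits_per_packet               # about 148,800 pps
```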
On the legitimate client, a test program continuously attempted to connect to the target website, either to port 80 or to a privileged URL (which was translated to port 22433 by the router), until it succeeded. The overall waiting time for a connection attempt is called the connection time. We computed the average connection time over 200 connection attempts. Our experiment compared the average connection times of unprivileged connections (to port 80) with those of privileged connections (through capability tokens). Fig. 5 describes the experimental results. Note that the scale of the y-axis is logarithmic.
As illustrated in the figure, the average connection times of both normal and privileged channels jump when the attack rate hits 150 kpps, roughly the bandwidth of the target website's link. Above that, the latency for connecting to port 80 keeps increasing with the attack rate. When the attack rate goes above 1 Mpps, these unprivileged connections no longer have any reasonable waiting times; e.g., a monstrous 404 seconds was observed at 1.5 Mpps. Indeed, under such a tremendous attack rate, we found it extremely difficult to get even a single packet through: an attempt to ping the victim server was effectively prevented, with 98 percent of the probe packets lost. Connections through the privileged channel, however, went very smoothly: the average connection delay stays below 8 ms as the attack rate goes from 150 kpps to 1.5 Mpps. Comparing this with the latency when there was no attack, we observed a modest increase (0.8 ms to 8 ms) for the connections under WRAPS protection, versus a huge leap (0.8 ms to 404,000 ms) for unprotected connections.

8. The Click project reported a faster forwarding speed (about 350 kpps) over a similar hardware setting [43], [45]. This could be because they used a simple forwarding path to test the router, without processing IP checksums, options, fragmentation, and ICMP errors.

Fig. 4. (a) Comparison of packet forwarding rates (without versus with WRAPS elements) on a Pentium-4 router. (b) Comparison of packet forwarding rates on a Pentium-3 router.
A TCP-based flooding attack differs from UDP flooding in that TCP packets go through the IPVerifier on the router, potentially adding more cost to forwarding. In our experiment, the TCP-flooding traffic was produced by an attack program that generates packets through socket system calls, as a typical DDoS attack tool does. This application-level generator cannot work as fast as the Click UDP generator does in the kernel: we got a peak rate of 1.14 Mpps, which nevertheless was enough to deplete a 100-Mbit channel. Fig. 5 presents the experimental results. Similar to what we observed under UDP flooding, privileged connections effectively circumvented the flooding flows, with a worst-case delay of about 7.7 ms. On the other hand, the average connection time for unprotected clients was about 323 seconds.
6.3 Overheads on Referrer Website
We evaluated the performance of a referrer website when it is making referrals to a website under a flooding attack. Our experimental setting was built upon the setting for bandwidth-exhaustion attacks, as illustrated in Fig. 6. We added two computers, along with the original client, and connected them through a 100-Mbit switch to the router. One of these three computers acted as the referrer web server and the other two were used as clients. On every client, a script simulated clicks on the referrer website's privilege referral link; it is capable of generating up to 100 concurrent referral requests. One client also continuously made connections to the referrer website to collect connection statistics. The attackers kept producing TCP traffic at 1.14 Mpps to saturate the victim's link.
The experimental results are presented in Fig. 6, which demonstrates the cumulative distributions of connection delays (to the referrer website) as a function of the number of concurrent referral requests. The results were obtained from 10,000 connections for each number of concurrent clicks, ranging from 0 to 200. Through the experiment, we found that making referrals adds a negligible cost to the referrer website. Even when 200 trusted users were simultaneously requesting referrals, the average latency for connecting to the referrer website increased by less than 40 microseconds. Recall that only authenticated clients, or those that have passed CAPTCHA tests, are allowed to click on a privilege referral hyperlink. Therefore, the rate of referral requests will not be extremely high in practice. In this case, it seems that the normal operation of the referrer website will not be affected noticeably by our mechanism.

Fig. 5. (a) UDP flooding. (b) TCP flooding.

Fig. 6. (a) Experimental setting. (b) The cumulative distributions of connection delays in the presence of referrals. The distribution of connection delays describes how fast a referrer web server can accomplish the connection requests from its web clients when it is providing referral services. For example, the figure shows that more than 90 percent of connections were completed within 365 microseconds, even when the server was serving up to 200 concurrent referral requests.
A fundamental question for WRAPS is whether a website under DoS threats is able to find enough referral websites to protect it. In our research, we studied this problem using a real sitegraph, the .gov data collection distributed by CSIRO [52]. This data collection was crawled from the .gov domain in early 2002, and has been widely used in research on web retrieval. At first glance, the statistical properties of the .gov domain might seem biased relative to a snapshot of the whole Internet. Contrary to this intuition, research on other Internet domains (.com and .uk) has actually revealed many similar features [53].
The .gov collection is basically a web graph, which presents the interlinkage structure of web pages. From it, we extracted a sitegraph of 7,719 websites. Over this graph, we studied several properties of WRAPS related to its security, which we describe in the following sections. In the discussion below, we assume that a government website trusts all its neighboring government sites to refer clients to it. Though primarily a simplification for illustrative purposes, this is somewhat reasonable because the trust relations among government agencies would make it difficult for an adversary to establish a site in the .gov domain just for the purpose of launching attacks.
7.1 Number of Neighbors
We first investigated the distribution of the number of a website's neighbors, which is illustrated in Fig. 7. We found that it follows a Zipfian distribution (or, more generally, a power-law distribution). The Zipfian distribution would seem ill-suited to WRAPS because it implies that many websites have only a small number of neighbors. A closer look at the sitegraph, however, reveals that most of these less-protected websites are actually less important. The importance of a web page is measured with pagerank [54], which describes the chance that a random web surfer visits the page. This is directly related to the number of visitors a page gets. Important Internet search engines, such as Google, output the outcomes of a search in the order of pageranks [55].
Computed using the same algorithm over all web pages within a site, siterank [1] roughly describes the importance of a website. A website with a higher siterank has more visitors, and is more likely to be found through search engines. For example, if you search for the keyword bank in Google, you will find important sites, such as Bank of America and CitiBank, appearing in the first two pages of the output.
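The rank computation underlying pagerank and siterank can be sketched with a textbook power iteration. This is only an illustration: the paper computes siterank with the algorithm of [1], which may differ in details such as dangling-node handling.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank. links maps each site to the list of sites
    it links to; every link target must also appear as a key."""
    n = len(links)
    rank = {site: 1.0 / n for site in links}
    for _ in range(iters):
        # teleportation mass plus the shares voted in by linking sites
        nxt = {site: (1 - d) / n for site in links}
        for site, outs in links.items():
            for dest in outs:
                nxt[dest] += d * rank[site] / len(outs)
        rank = nxt
    return rank
```

A site linked to by highly ranked sites (a "vote" weighted by the voter's rank) ends up with a higher rank, which is exactly the trust-transfer effect discussed in Section 7.5.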
We analyzed the relationship between the number of a website's neighbors and its siterank, and discovered a positive correlation (Fig. 7b). This suggests that, in general, more important websites have more neighbors.
When choosing defense mechanisms, a website may trade off its potential loss under a flooding attack against the costs of protection. An important website with a large customer pool usually has more at stake. Interestingly, the topology of the sitegraph actually helps such security management: the more important a website is, the more neighbors it usually has, and the more protection it can expect from WRAPS. A less important website has fewer neighbors, but may also have fewer visitors and thus is less likely to suffer a substantial loss from a flooding attack. Such websites can also improve the protection they receive from WRAPS by attracting referrers through other means. For example, the hosts that publish these websites' advertisements might also agree to operate a privilege referral service for them, as the service is extremely lightweight.
7.2 Interprotection Structure
Many important websites are interconnected by hyperlinks. A website with a high siterank is very likely to have some neighbors that are also highly ranked. Searching these neighbors' neighbors may further discover other important websites. In this way, we can construct a protection tree for an important website inductively, as follows: the website is at the root, and an important site that neighbors an existing tree node can be added as a child of that node. Finally, less important neighbors of existing tree nodes can be placed as the leaves of the tree, each a child of a tree node to which it links. If both the root and one of its neighbors i have implemented the protection mechanism, then i can refer clients to the root on the basis of referrals it receives from its children. In this way, all the nodes and leaves (a large number of small websites) on that tree can assist in referring legitimate clients to the root.

9. Pagerank does not always reflect a website's importance. Some big organizations may have low ranks while some unknown sites may have high ranks. However, reputable organizations do in many cases have good ranks. As an example, we checked the pageranks of the largest 10 US banks (http://www.infoplease.com/ipa/A0763206.html) at http://www.prsitecheck.com/index.php?getrank=url, and found that eight of them are highly ranked (above 6).

10. Note that not all the neighbors of a website will become its referrers: it may select just those that have proper security protection and thus are less likely to fall prey to an adversary. The number of neighbors matters for this purpose because it is related to the number of such referrers the website may find.

Fig. 7. (a) Distribution of number of neighbors. (b) Siteranks versus number of neighbors. (We calculated a website's siterank using the algorithm proposed in [1].)
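The inductive construction above is essentially a breadth-first traversal of the sitegraph starting from the root. The sketch below is our simplification: it ignores the ordering of important sites before leaf sites and simply attaches each site to the first tree node it is reached from.

```python
from collections import deque

def protection_tree(root, neighbors):
    """Build a protection tree by BFS. neighbors maps each site to the
    sites linked with it; the result maps each covered site to its parent
    in the tree (the root's parent is None)."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        site = queue.popleft()
        for nb in neighbors.get(site, []):
            if nb not in parent:  # joins as a child of its first discoverer
                parent[nb] = site
                queue.append(nb)
    return parent
```

The coverage numbers in Fig. 8 then correspond to the fraction of the sitegraph reachable in such a traversal from a few highly ranked roots.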
In our analysis, we took a set of websites with the highest siteranks to study their interlinkage properties. We found that all these important sites were connected together. That is, starting from any important site, all the rest of the important sites were on that site's protection tree. The size of such a tree is enormous, covering almost all of the data collection. In Fig. 8, we illustrate the relation between the number of the most important sites (x-axis) and the coverage of the protection tree in the whole sitegraph (y-axis). From the figure, we can see that the 56 most important sites are linked to by 85 percent of the total 7,719 websites, and the top 125 sites are linked to by 89 percent of the whole domain. This observation suggests that deploying the protection mechanism on a very small set of the most important websites enables a huge number of less important websites to help protect them.
The existence of such an interlinkage structure not only makes WRAPS more robust, offering better protection to websites, but also helps address the incentive problem in deployment. In WRAPS, the referral protocol is very lightweight for a referrer, while the protection mechanism is more costly. It is unreasonable to ask an unimportant website to install the protection mechanism just to acquire more referrer websites to protect an important site. Under the interprotection structure, however, individual websites deploy the protection mechanism just to protect themselves; the web topology enables them to protect each other.
7.3 Trusted Neighbors
Perhaps unlike the websites in the .gov domain, a commercial website will not trust most of its neighbors. That said, our research indicates that an important site may also have many outbound links to sites that link to it: on average, a top-10 website in the CSIRO data set connects to about 334 such neighbors. These links are widely perceived as a trust transfer from the important site to another site, as discovered by studies in operations research [56], [57]. Since other studies show that the topology of other domains, such as .com, bears a strong resemblance to that of the .gov domain [53], we conclude that important commercial websites are likely to have many trusted neighbors.
7.4 Referral Depth
With a large number of neighbors, an important website can be very close to many other websites in the domain. This was confirmed in our empirical analysis: we found that the average distance between an important website and another website is very short (Fig. 9a).
The figure shows that the average distance from any website in the domain to an important website is between 1.5 and 2. In other words, starting from any website, a legitimate client could reach its target website in one or two referrals. We further studied the distribution of such referral depths. Fig. 9b gives an example using the most important website (the one with the highest siterank). We can observe from the figure that 50.3 percent of the websites are only one hop away from the site and another 48.4 percent are two hops away. In total, almost 99 percent of the websites are within two hops of the target site.
A small referral depth reduces the costs of referrals and allows clients to find a referral to the target website easily. This property enhances the practicality of WRAPS (Table 1).
Fig. 8. Coverage of the protection tree.
11. WRAPS can also be deployed over existing social networks that
also have such interlinking structures, such as webrings (http://
Fig. 9. (a) Average distance to the important websites. (b) Distribution of
referral depths.
7.5 Rewarding Links
Another avenue for an important website to increase the number of its referrers is to use its high siterank as an asset to attract small websites. Trust can be established in this case through some external means, for example, a contract. The reward the website offers to its referrers could be as small as a reciprocal link. A website's siterank uses the interlinkage structure as an indicator of the site's value: a link from one site to another is interpreted as a vote for the latter's importance, and this vote is weighted by the importance of the voter, which is indicated by its siterank. Therefore, a link from an important website will greatly improve the siterank of an unimportant site.
We observed this improvement in CSIRO's .gov data set. As an example, we added a rewarding link from the second most important site to six unimportant websites. Table 1 presents the change of these sites' ranks among all 7,719 sites.
Such a rewarding link will make an unknown website look more trustworthy and help it establish its reputation, with trust transferred from the important website [56], [57]. This could encourage many small websites to participate as referrers for an important site. To facilitate the search for these referrers, a website can also list all of them on a web page and let search engines, such as Google, index and cache that page. Legitimate clients, therefore, can easily discover these referrers even during a DoS flooding attack.
As described, WRAPS supports only clients that use fixed (or infrequently changed) IP addresses. An extension of WRAPS that supports dynamic NAT users is to generate the client's capability from its IP prefix, instead of its whole IP address. This approach, however, treats all the users behind that IP prefix as a single client, and so shares the resources for one client IP prefix among all of them. Bandwidth utilized by these users could be better managed by collaborating with the providers of such services. For example, we may allocate to those providers a set of capability tokens associated with bandwidth quotas and let them control the usage of those tokens among their clients.
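The prefix-based variant could reuse the token construction, MAC'ing the network prefix with the host octet zeroed. This is a sketch under our own assumptions: the /24 granularity, the placeholder key, and the parameter byte are not specified in the text.

```python
import hashlib
import hmac

KEY = b"\x00" * 8  # placeholder 64-bit secret key

def prefix_token(client_ip: bytes, params: int = 0) -> bytes:
    """Capability over the client's /24 prefix: every host behind the
    prefix maps to the same token, and hence the same resource share."""
    masked = client_ip[:3] + b"\x00"  # zero the host octet
    digest = hmac.new(KEY, masked + bytes([params]), hashlib.sha1).digest()
    return digest[:3]
```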
A user must access the target site using its privilege URL if she wants privileged service. That is, the domain name of the server cannot be resolved via DNS; moreover, the user must save (e.g., bookmark) her privilege URL from the server when it is updated at the end of a privilege period. In these ways, WRAPS is not transparent to users, and would require client-side modifications to make it transparent.
WRAPS could also break SSL/TLS service because privilege URLs are encoded with IP addresses instead of domain names; by contrast, the target site's certificate will typically certify a key for a site name, not its address. This, however, is not a problem for applications that perform a reverse lookup on the IP address to find a domain name before verifying the SSL/TLS certificate. Another way to solve this problem is to confine the capability token to the port-number field, with the only cost being some degradation in defense against floods with randomly spoofed addresses. For example, PayPal can redirect a client to https://www.ebay.com:<capability> to allow the client to establish a TLS session with eBay.
Discovery of referrer websites is not transparent to
clients in WRAPS. One way to mitigate this problem is to let
a clients ISP be its referrer. For example, the websites of all
academic institutions could agree to offer referral services
to each other. Whenever an academic client sends a request
to a server within an institutions domain, that request will
be automatically redirected by the clients ISP (its institu-
tion) to the local referral website. Moreover, we believe that
a technique similar to Dynamic DNS could be used to allow
a target website to dynamically map its domain name to its
referrer sites' IP addresses when it is undergoing a DoS
attack. We plan to further investigate these approaches in
future research.
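The ISP-as-referrer idea above can be sketched as a simple routing decision at the client's institution. The prefix, hostnames, and referral URL below are hypothetical examples, not part of WRAPS itself.

```python
import ipaddress

# The institution's own address space (assumed).
LOCAL_PREFIX = ipaddress.ip_network("10.1.0.0/16")
# Servers of academic institutions participating in the referral federation.
PARTICIPATING = {"www.example-university.edu"}
# The institution's local referral website (hypothetical).
REFERRAL_SITE = "https://referrals.local-isp.example/"

def route_request(client_ip: str, target_host: str) -> str:
    """Decide where a client's request should go: requests from local
    clients to a participating server are redirected through the local
    referral site, which can issue them a privilege URL."""
    if (ipaddress.ip_address(client_ip) in LOCAL_PREFIX
            and target_host in PARTICIPATING):
        return REFERRAL_SITE + "?target=" + target_host
    # Everyone else reaches the server directly, as unprivileged traffic.
    return "https://" + target_host + "/"

print(route_request("10.1.2.3", "www.example-university.edu"))
```

The point of the sketch is that referrer discovery becomes automatic for local clients, while outside clients are unaffected.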
The effectiveness of WRAPS as a DoS defense at least
partially rests on the confidentiality of privileged URLs, and
so any method of leaking these to an attacker could diminish
its effectiveness. One risk is the HTTP Referer field,
specifically if this field carries a privileged URL for one site
in a page request to another, attacker-controlled site. For
example, if a WRAPS-protected site permits clients to post
HTML content, then when this content is retrieved and
rendered by a privileged client browser, the privilege URL
could be divulged to sites from which the browser retrieves
linked content, in the Referer field of those retrievals. As
such, it would be prudent for a WRAPS-protected site to
serve user-contributed content as plain text. More generally,
though, cross-site scripting vulnerabilities or any other
method for an attacker to learn the privileged URL by
which a user accesses a WRAPS-protected site would be
similarly damaging.
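The plain-text mitigation for user-contributed content can be sketched as a small response-building step. The handler below is a hypothetical example; header names are the standard HTTP ones, but how they are attached depends on the site's server framework.

```python
def serve_user_content(raw: bytes) -> tuple:
    """Return (headers, body) for user-contributed content. Serving it as
    text/plain keeps the browser from rendering embedded links, so it
    never issues sub-requests that would carry the privilege URL in a
    Referer header to an attacker-controlled site."""
    headers = {
        "Content-Type": "text/plain; charset=utf-8",
        # Also forbid MIME sniffing, which could re-interpret the body
        # as HTML despite the declared type.
        "X-Content-Type-Options": "nosniff",
    }
    return headers, raw

# User-posted markup is delivered verbatim, but never rendered as HTML.
headers, body = serve_user_content(b"<img src='http://evil.example/x'>")
```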
WRAPS requires modifying edge routers to add mechanisms
for capability verification and address translation. This
potentially affects its deployment. However, compared with
other capability-based techniques (e.g., [5] and [53]), our
approach does not require changes to core routers and
clients, and therefore could be easier to deploy than those
techniques. It is also possible to avoid changing edge
routers by attaching to them an external device capable of
performing these tasks at a high speed.
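In outline, the edge router's (or attached device's) fast path performs the two tasks named above: verify the capability with a keyed MAC, then translate the destination to the server's real address. The key, field layout, and 32-bit token size below are illustrative assumptions, not the prototype's exact format.

```python
import hmac
import hashlib

KEY = b"router-secret"          # assumed key held by the edge router
REAL_SERVER = "192.0.2.10"      # the protected server's real address

def expected_token(client_ip: str) -> int:
    """Recompute the 32-bit capability the router expects from this client."""
    d = hmac.new(KEY, client_ip.encode(), hashlib.sha256).digest()
    return int.from_bytes(d[:4], "big")

def forward(client_ip: str, token: int):
    """Verify the capability carried in a privileged packet. On success,
    return the translated destination; otherwise the packet is dropped
    or demoted to the unprivileged queue (returned as None here)."""
    presented = token.to_bytes(4, "big")
    expected = expected_token(client_ip).to_bytes(4, "big")
    if hmac.compare_digest(presented, expected):
        return REAL_SERVER
    return None

tok = expected_token("203.0.113.5")
```

Because verification is a single MAC computation keyed per router, it keeps no per-flow state, which is what makes an external high-speed device a plausible alternative to modifying the router itself.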
DDoS flood attacks continue to harass today's Internet
websites. Our research shows that such threats can be
mitigated by exploiting the enormous number of interlinkage
relationships among websites themselves. In this paper,
we propose WRAPS, a web referral infrastructure for
privileged service that leverages this idea. WRAPS is
constructed on the existing web sitegraph, elevating
existing hyperlinks into privilege referral links. By clicking
on a referral link, a trusted client can get preferential access
to a website under a flooding attack.
We presented the design and implementation of
WRAPS and empirically evaluated its performance. Our
study shows that WRAPS enables legitimate clients to
evade very intensive flooding attacks, connecting to a
website smoothly even when normal connections become
infeasible. The
overheads of WRAPS are affordable to routers and almost
negligible to referrer websites. We utilized a real web
sitegraph to analyze the security properties of WRAPS, and
found that the sitegraph had many features that would
benefit WRAPS. We also discussed a simple approach to
encourage many small websites to help protect an important
website and to facilitate the search for referrers during
DoS attacks.
The authors thank Nianli Ma for helping implement part of
their prototype and conduct part of the experiments,
particularly those related to web topology.
[1] J. Wu and K. Aberer, Using Siterank for P2P Web Retrieval,
Technical Report IC/2004/31, Swiss Fed. Inst. Technology,
Mar. 2004.
[2] X. Wang and M. Reiter, WRAPS: Denial-of-Service Defense
through Web Referrals, Proc. 25th IEEE Symp. Reliable Distributed
Systems (SRDS), 2006.
[3] L. von Ahn, M. Blum, N.J. Hopper, and J. Langford,
CAPTCHA: Using Hard AI Problems for Security, Advances
in Cryptology (EUROCRYPT 03), Springer-Verlag, 2003.
[4] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. Kaashoek, The
Click Modular Router, ACM Trans. Computer Systems, vol. 18,
no. 3, Aug. 2000.
[5] A. Yaar, A. Perrig, and D. Song, An Endhost Capability
Mechanism to Mitigate DDoS Flooding Attacks, Proc. IEEE Symp.
Security and Privacy (S&P 04), May 2004.
[6] T. Anderson, T. Roscoe, and D. Wetherall, Preventing Internet
Denial-of-Service with Capabilities, Proc. Second Workshop Hot
Topics in Networks (HotNets 03), Nov. 2003.
[7] R. Mahajan, S. Bellovin, S. Floyd, J. Ioannidis, V. Paxson, and
S. Shenker, Controlling High Bandwidth Aggregates in the
Network, Computer Comm. Rev., vol. 32, no. 3, pp. 62-73, July 2002.
[8] J. Ioannidis and S. Bellovin, Implementing Pushback: Router-
Based Defense against DDoS Attacks, Proc. Symp. Network and
Distributed System Security (NDSS), 2002.
[9] S. Floyd and K. Fall, Promoting the Use of End-to-End
Congestion Control in the Internet, IEEE/ACM Trans. Networking,
Aug. 1999.
[10] R. Mahajan, S. Floyd, and D. Wetherall, Controlling High-
Bandwidth Flows at the Congested Router, Proc. Ninth IEEE Intl
Conf. Network Protocols (ICNP 01), Nov. 2001.
[11] P. Ferguson and D. Senie, RFC 2267: Network Ingress Filtering:
Defeating Denial of Service Attacks Which Employ IP Source
Address Spoofing, ftp://ftp.internic.net/rfc/rfc2267.txt, Jan. 1998.
[12] J. Li, J. Mirkovic, and M. Wang, Save: Source Address Validity
Enforcement Protocol, Proc. IEEE INFOCOM, 2002.
[13] R. Stone, An IP Overlay Network for Tracking DoS Floods, Proc.
USENIX Security Symp., 2000.
[14] C. Jin, H. Wang, and K. Shin, Hop-Count Filtering: An Effective
Defense against Spoofed Traffic, Proc. 10th ACM Conf. Computer
and Comm. Security (CCS), 2003.
[15] A. Yaar, A. Perrig, and D. Song, Pi: A Path Identification
Mechanism to Defend Against DDoS Attacks, Proc. IEEE Symp.
Security and Privacy (S&P 03), http://www.ece.cmu.edu/~adrian/
projects/pi.ps, May 2003.
[16] H. Burch and B. Cheswick, Tracing Anonymous Packets to Their
Approximate Source, Proc. 14th USENIX System Administration
Conf., Dec. 1999.
[17] S. Bellovin, M. Leech, and T. Taylor, The ICMP Traceback
Messages, Internet Draft draft-ietf-itrace-01.txt, ftp://ftp.ietf.org/
internet-drafts/draft-ietf-itrace-01.txt, Dec. 1999.
[18] S. Savage, D. Wetherall, A. Karlin, and T. Anderson, Network
Support for IP Traceback, Proc. ACM SIGCOMM 00, Aug. 2000.
[19] D. Song and A. Perrig, Advanced and Authenticated Marking
Schemes for IP Traceback, Proc. IEEE INFOCOM 01, Apr. 2001.
[20] D. Dean, M. Franklin, and A. Stubblefield, An Algebraic
Approach to IP Traceback, Proc. Network and Distributed System
Security Symp. (NDSS 01), Feb. 2001.
[21] M. Adler, Tradeoffs in Probabilistic Packet Marking for IP
Traceback, Proc. 34th ACM Symp. Theory of Computing (STOC), 2002.
[22] A. Snoeren, C. Partridge, L. Sanchez, C. Jones, F. Tchakountio,
S. Kent, and W. Strayer, Hash-Based IP Traceback, Proc.
ACM SIGCOMM 01, Aug. 2001.
[23] A. Juels and J. Brainard, Client Puzzles: A Cryptographic Defense
against Connection Depletion Attacks, Proc. Symp. Network and
Distributed System Security (NDSS 99), S. Kent, ed., pp. 151-165, 1999.
[24] X. Wang and M. Reiter, Defending against Denial-of-Service
Attacks with Puzzle Auctions, Proc. IEEE Symp. Security and
Privacy (S&P 03), May 2003.
[25] X. Wang and M. Reiter, Mitigating Bandwidth-Exhaustion
Attacks Using Congestion Puzzles, Proc. 11th ACM Conf.
Computer and Comm. Security (CCS 04), Nov. 2004.
[26] A. Keromytis, V. Misra, and D. Rubenstein, SOS: Secure Overlay
Services, Proc. ACM SIGCOMM 02, Aug. 2002.
[27] D. Andersen, Mayday: Distributed Filtering for Internet
Services, Proc. Fourth USENIX Symp. Internet Technologies and
Systems (USITS), 2003.
[28] W. Morein, A. Stavrou, D. Cook, A. Keromytis, V. Misra, and
D. Rubenstein, Using Graphic Turing Tests to Counter
Automated DDoS Attacks against Web Servers, Proc. 10th
ACM Conf. Computer and Comm. Security (CCS), 2003.
[29] D. Adkins, K. Lakshminarayanan, A. Perrig, and I. Stoica,
Taming IP Packet Flooding Attacks, Proc. Second Workshop Hot
Topics in Networks (HotNets 03), Nov. 2003.
[30] M. Waldvogel and R. Rinaldi, Efficient Topology-Aware
Overlay Network, Proc. First Workshop Hot Topics in Networks
(HotNets 02), Oct. 2002.
[31] J. Han, D. Watson, and F. Jahanian, Topology Aware Overlay
Networks, Proc. IEEE INFOCOM 05, Mar. 2005.
[32] A. Stavrou and A. Keromytis, Countering DoS Attacks with
Stateless Multipath Overlays, Proc. 12th ACM Conf. Computer and
Comm. Security (CCS), 2005.
[33] X. Yang, D. Wetherall, and T. Anderson, A DoS-Limiting Network
Architecture, Proc. ACM SIGCOMM 05, pp. 241-252, 2005.
[34] V. Gligor, Guaranteeing Access in Spite of Service-Flooding
Attack, Proc. Security Protocols Workshop (SPW 04),
R. Hirschfeld, ed., Springer-Verlag, 2004.
[35] Defeating Captcha, http://en.wikipedia.org/wiki/Captcha#
Defeating_Captchas, 2004.
[36] J. Xu and W. Lee, Sustaining Availability of Web Services under
Severe Denial of Service Attacks, IEEE Trans. Computers, special
issue on reliable distributed systems, vol. 52, no. 2, pp. 195-208,
Feb. 2003.
[37] A.W. Jackson, W. Milliken, C.A. Santivanez, M. Condell, and
W.T. Strayer, A Topological Analysis of Monitor Placement,
Proc. Sixth Intl Symp. Network Computing and Applications
(NCA), 2007.
[38] I. Stoica, S. Shenker, and H. Zhang, Core-Stateless Fair Queueing:
Achieving Approximately Fair Bandwidth Allocations in High
Speed Networks, Proc. ACM SIGCOMM, 1998.
[39] R. Sailer, X. Zhang, T. Jaeger, and L. van Doorn, Design and
Implementation of a TCG-Based Integrity Measurement
Architecture, Proc. 13th USENIX Security Symp., pp. 223-238,
Aug. 2004.
[40] B. Bloom, Space/Time Tradeoffs in Hash Coding with Allowable
Errors, Comm. ACM, vol. 13, no. 7, pp. 422-426, 1970.
[41] Idle Scanning and Related IPID Games, http://insecure.org/
nmap/idlescan.html, 2008.
[42] G. Tsudik, Message Authentication with One-Way Hash
Functions, Proc. IEEE INFOCOM, 1992.
[43] E. Kohler, The Click Modular Router, PhD thesis, MIT,
Nov. 2000.
[44] The Netfilter/Iptables Project, http://www.netfilter.org, 2008.
[45] The Click Modular Router Project, http://pdos.csail.mit.edu/click,
[46] F. Baker, RFC 1812: Requirements for IP Version 4 Routers,
ftp://ftp.internic.net/rfc/rfc1812.txt, June 1995.
[47] J. Postel, RFC 791: Internet Protocol, ftp://ftp.internic.net/rfc/
rfc791.txt, Sept. 1981.
[48] J. Postel, RFC 792: Internet Control Message Protocol, ftp://
ftp.internic.net/rfc/rfc792.txt, Sept. 1981.
[49] B. Gladman, AES and Combined Encryption/Authentication
Modes, fp.gladman.plus.com/AES/index.htm, 2008.
[50] P. Rogaway, UMAC: Fast and Provably Secure Message Authentication,
http://www.cs.ucdavis.edu/~rogaway/umac/, 2008.
[51] B. Yang, R. Karri, and D.A. McGrew, An 80 Gbps FPGA
Implementation of a Universal Hash Function Based Message Authen-
tication Code, Third Place Winner, 2004 DAC/ISSCC Student
Design Contest, 2004.
[52] CSIRO, Web Research Collections (TREC Web and Terabyte Track),
http://es.csiro.au/TRECWeb/, 2008.
[53] K. Bharat, B. Chang, M. Henzinger, and M. Ruhl, Who Links to
Whom: Mining Linkage between Web Sites, Proc. Intl Conf. Data
Mining (ICDM 01), pp. 51-58, 2001.
[54] L. Page, S. Brin, R. Motwani, and T. Winograd, The Pagerank
Citation Ranking: Bringing Order to the Web, Stanford Digital
Library Technologies Project, technical report, 1998.
[55] Our Search: Google Technology, http://www.google.com/
technology, 2008.
[56] K.J. Stewart, Trust Transfer on the World Wide Web, Organiza-
tion Science, vol. 14, no. 1, 2003.
[57] K.J. Stewart and Y. Zhang, Effects of Hypertext Links on Trust
Transfer, Proc. Fifth Intl Conf. Electronic Commerce (ICEC 03),
ACM Press, pp. 235-239, 2003.
XiaoFeng Wang received the PhD degree in
computer engineering from Carnegie Mellon
University in 2004. He is an assistant professor
in the School of Informatics, Indiana University,
Bloomington. His research interests include all
areas of computer and communication security.
Particularly, he is carrying out active research
on system and network security (including
automatic program analysis, malware detection
and containment, countermeasures to denial-of-
service attacks), privacy-preserving techniques and their application to
critical information systems (such as health information systems), and
incentive engineering in information security. His publications regularly
appear in the mainstream venues in system and network security. He
also serves on various conference committees in the area. He is a
member of the IEEE.
Michael K. Reiter received the BS degree in
mathematical sciences from the University of
North Carolina at Chapel Hill (UNC) in 1989 and
the MS and PhD degrees in computer science
from Cornell University in 1991 and 1993,
respectively. He is the Lawrence M. Slifkin
Distinguished Professor in the Department of
Computer Science, UNC. He joined AT&T Bell
Labs in 1993 and became a founding member
of AT&T LabsResearch when NCR and
Lucent Technologies (including Bell Labs) were split away from AT&T
in 1996. He then returned to Bell Labs in 1998 as the director of Secure
Systems Research. In 2001, he joined Carnegie Mellon University as a
professor of electrical and computer engineering and computer
science, where he was also the founding technical director of CyLab.
He joined the faculty at UNC in 2007. His research interests include all
areas of computer and communications security and distributed
computing. He regularly publishes and serves on conference organizing
committees in these fields, and has served as program chair for the
flagship computer security conferences of the IEEE, the ACM, and the
Internet Society. He currently serves as the editor-in-chief of ACM
Transactions on Information and System Security and on the board of
visitors for the Software Engineering Institute. He previously served on
the editorial boards of the IEEE Transactions on Software Engineering,
the IEEE Transactions on Dependable and Secure Computing, and the
International Journal of Information Security, and as a chair of the IEEE
Technical Committee on Security and Privacy. He is a senior member
of the IEEE.