Morton Swimmer
A trojan (short for Trojan horse) is a program that is presumed by the user to
be bona fide, but in which a malicious and undocumented payload has been
intentionally placed.
The standard definition of a trojan is:
Definition 1 (trojan). A Trojan horse is defined as a piece of malicious
software that, in addition to its primary effect, has a second, non-obvious
malicious payload.
The definition hinges on the payload as there are no other measurable
characteristics.
Backdoor trojan
Spyware
Droppers
A further type of trojan exists purely to drop malware. This can be
useful for various reasons. For one, some malware cannot exist naturally as a
file and must be inserted into memory by the dropper so that it can run, as
was the case with the CodeRed worm. Another reason to use a dropper is to
heavily obfuscate the contained malware payload to avoid detection. Droppers
are always local-acting.
Exploits
We use the term exploit to mean those programs that inject code into a running
program. The job of the exploit is to establish communication with the target
program and bring it to the point where the code can be inserted, usually
via a buffer-overflow vulnerability. The insertion requires padding the code
2 See http://www.realvnc.com
3 See http://www.bo2k.com
4 End User License Agreement
1.1.2 Viruses
Viruses are not the oldest malware threat, but they have lived up to the fears
of the pioneers of anti-virus research in the mid 1980s. They have become a
major security issue for all who must maintain computer systems and net-
works (which includes a surprising number of ordinary users). Furthermore,
while well-thought-out attacks used to be rare, malware has recently become
more sophisticated and sinister, leading one to suspect that virus writing
is no longer the realm of amateurish kids with no real agenda. Instead, it has
become a money-making endeavor: compromised machines are used as
spam bots, for collecting private and confidential information, or for other purposes.
Right from the genesis of the virus problem, there was already a robust
formal definition of the virus property that had been established by Cohen [2].
However, an acceptable human readable definition of a computer virus took
a while to establish. Cohen informally defined viruses as:
Definition 6 (computer virus). A computer virus is a program that can
infect other programs by modifying them to include a possibly evolved copy of
itself.
Cohen also defines the infection property as:
Definition 7 (infection property). With the infection property, a virus
can spread throughout a computer system or network using the authorizations
of every user using it to infect their programs. Every program that gets infected
may also act as a virus and thus the infection spreads.
However, over the course of years and experience with actual viruses from
the wild, a newer (informal) definition was developed:
1 Malicious Software in Ubiquitous Computing
The insituacy of a virus is also core to the definition: it refers to
how the target of a virus is affected, and the concept can also be applied to
other malware.5 A virus may be parasitic or exositic (Def. 5 and 12), as well as
non-parasitic (Def. 11).
The core characteristics of a virus are therefore transitive and either par-
asitic or exositic. Of course, we are talking about a stereotypical virus and
there have been some borderline cases not included in this. Other properties
that viruses have exhibited in the past are polymorphism and stealth (Def. 13
and 14).
Worms
Viruses and worms are very similar in certain ways. The similarity to viruses
comes from a worm’s transitivity. The core and secondary characteristics of
worms are essentially the same as for viruses. The main differences are in
locality and insituacy. Worms are always remote-acting and can also be non-
parasitic, i.e., a worm can create an entirely new object that has no dependence
on an existing object, whereas parasitic infection is extremely rare with
worms unless the worm operates as a virus within the local machine.
Historically, the platform on which malware proliferated the best, PCs run-
ning MS-DOS, at first had the worst memory protection possible: none. The
“pioneering” work done by the malware writers on the MS-DOS platform can-
not be easily replicated on current platforms, but there are certainly holes in
current designs that are being exploited as they become known.
At the very least, we need to protect the operating system processes from
the user space, but also protecting user spaces from each other is important.
Usually, these protections are built into the CPU and the operating system
uses them.
Buffer overflows (BoF) occur when the size of input data exceeds the size of
the buffer the program has reserved for it (see Fig. 1.1). If the buffer has been
allocated on the stack, the buffer is overrun and other data on the stack will be
overwritten. Used maliciously, as in the case of an exploit, the buffer is filled
until the data overruns the return address of the function. When the function
returns, execution jumps to the overwritten value in place of the original
return address. By carefully crafting the insertion buffer, the exploiter can
cause arbitrary code to be executed.
An important approach is to prevent code execution on the stack, where
normally only transient data should reside. Recently, AMD and Intel
processors for PCs have acquired execution protection (the no-execute bit), but it
requires operating system support, which is not yet universal. Prior to that,
projects such as PaX6 and OpenWall7 implemented the same functionality in software.
The problems with this approach revolve around the fact that code is
legitimately (and surprisingly) executed on the stack in some applications.
GNU’s C compiler generates so-called trampoline code on the stack to
implement nested functions. This is not a frequently used feature,
but it occurs often enough to make stack protection problematic. The
next problem is with codecs,8 which are often implemented by generating
an optimized piece of code in a buffer and then executing it. However, if
this sort of coding can be avoided, execution restrictions can help, even if not
all types of BoF attacks9 can be prevented.
An alternative approach to stack execution protection is to only protect
the call return address that must be overwritten for a BoF attack to work.
6 http://pageexec.virtualave.net/
7 http://www.openwall.com/
8 Coding/decoding of multimedia bitstreams.
9 The class of return-into-libc attacks is one prominent example.
  Before overflow:                      After overflow:

  esp+532  *argv[]   ebp+12            esp+532  (*argv[])   ebp+12
  esp+528  argc      ebp+8             esp+528  (argc)      ebp+8
  esp+524  return    ebp+4             esp+524  (return)    ebp+4
  esp+520  SFP       ebp+0             esp+520  (SFP)       ebp+0
  esp+516            ebp-4             esp+516  candidate   ebp-4
  esp+512            ebp-8             esp+512  addresses   ebp-8
    ...    buffer      ...               ...       ...        ...
  esp+288            ebp-232           esp+288  shellcode   ebp-232
    ...                ...               ...       ...        ...
  esp+244            ebp-276           esp+244  NOPs        ebp-276
    ...                ...               ...       ...        ...
  esp+0              ebp-520           esp+0                ebp-520
Fig. 1.1. Stack contents before and after a stack buffer is overflowed by an attack
string comprising no-ops, the shellcode, and finally candidate addresses to facilitate
the jump to the NOP section. The diagram refers to an Intel x86 CPU; esp
and ebp are the stack and base pointers, respectively, and ebp+4 points
to the return address in typical use.
The most common approach is to encrypt the return address in a way that is
difficult for the attacker to anticipate, but not overly computationally
intensive for the operating system. Before the function returns, the return address
is decrypted. If the memory where the return address is stored has been
overwritten by a BoF attack and the perpetrator has not correctly anticipated the
encryption scheme, the ‘decrypted’ return address will not contain a valid
address, and an illegal instruction or memory access exception occurs
when the CPU jumps there (e.g., see [3, 4]).
Potentially, bounds-checking all allocated buffers could prevent the
buffer overflow in the first place, but without fast hardware support, a
software-only solution will be too slow to be useful.
In dealing with malware that acts remotely, our preference is to prevent
it from entering the system in the first place. One very common method
is to place a firewall, an email filter or some other defense at the edge of an
intranet or at the network interface of a machine.
Firewalls are filters that act as a conduit for all traffic in and out of the
organization. They are very good at directionally limiting traffic for certain
types of services or from ranges of IP addresses. A very common setup is,
in broad terms, to block all external traffic into the intranet except to a few
services on certain machines in the so-called DMZ.10 Set up like this, a firewall
will be able to block worms coming in from the outside, provided the DMZ
machines themselves are not vulnerable.
So-called Intrusion Prevention Systems (IPS) are conceptually similar to
firewalls except that they can also filter based on content. They can be used to
strip out code in requests coming in from the outside to prevent malware-based
attacks. Blocking worms using an IPS is increasingly done even at boundaries
inside an intranet for performance reasons: the added worm traffic consumes
too much network bandwidth and hinders the clean-up process.
Much malware infiltrates an organization via news and email. This threat
will probably never completely go away, and the opportunities to prevent
malware spread there are evident. Email worms usually spread via executable
attachments, as the email body is by default text-only. It should be noted that
some email systems use a rich text format that may allow code in the text
as well. In more standard systems, the email or news is already in MIME or
RFC 822 format, and the malware resides as an attachment that is either
automatically executed where this is possible or is clicked on by the user.
Note that nothing in the Internet standards requires executable attachments
to be executed automatically; this is done at the discretion of the vendors,
and it is a practice that needs to be discouraged.
Configuration control
With boot control, it is hoped that the operating system can be booted into a
known secure state and verified remotely. The effect on malware is to prevent
successful stealth efforts, provided the important parts of the system configuration
are tightly controlled. Currently, it is not self-evident that hardware-assisted
security will provide the protection against malware that is hoped for.
Another form of access control is the code signing schemes that exist in
a few variations for Java applets and Windows ActiveX controls. In such a scheme,
compiled programs, or a package of programs and data files, are signed by some
trusted authority. Provided the user is in possession of the signer’s public key,
they can verify the authenticity of the program or bundle, leaving them
to decide whether the signer can be trusted. The user is faced with decisions like
(1) whether the third party can be trusted and (2) what to do if the code
is not signed at all. The latter problem is troubling, as such a scheme must
mandate only using signed code to be effective. Any exception may undermine
the entire scheme. Once malware has been able to subvert such a mechanism,
the scheme is useless. For this reason, it is most frequently used to verify the
validity of code that is being imported into a system, but not for installed
code.
The problem of trust in the third party is probably the most troubling, as it
is precisely what such a scheme is meant to counter. We want to establish trust
in the code; however, there was at least one case where a malicious party
registered its public key with the trusted third party under a name very
similar to that of a better-known company. As the nuances in the company name
are lost on most users and the trusted third party did not check for such
misrepresentations, many users were duped.12
Finally, code signing is only effective in an uncompromised system. Once
malware has gained control of the system, it may compromise the code veri-
fication process.
Anti-virus
From the very first anti-viruses to the present ones, all vendors must do the
same thing: after receiving a virus sample, they must verify it and then integrate
detection for it into their product. While the detection methods are no
longer simple pattern matching, the paradigms are still the same. Recently,
anti-viruses have also expanded their scope to include other malware, leading
to problems of verifying that a sample is malware in some borderline cases.
Anti-viruses can take various forms depending on how they are deployed.
The classical scanner is called an on-demand scanner, as it is invoked on demand
to scan partial or complete file systems. Another common form is the on-access
scanner, which scans objects as they are used. Scanners can also be deployed at
12 Verisign wrote about this problem at http://www.verisign.com/developer/notice/authenticode/
Apart from the explosion in the number of devices, the nature of the devices
will be much more diverse than current systems, which are very much
dominated by Intel CPU hardware and the Microsoft Windows operating system.
There is no denying that malware thrives on a mono-culture. Geer et
al. make the argument that the dominance of a single operating system is
a significant security risk and therefore a threat to a society that relies on
computer services [11]. Certainly, the economies of scale offered by the current
mono-culture benefit the malware writers by increasing the value of each
vulnerability they find on that platform. Surely, the UbiComp future can only
be better?
Unfortunately, there is no simple solution to the mono-culture problem.
Trivially, we need more variety in systems. Realistically, we need a variety
much larger than we can possibly achieve to come close to the rich diversity
of nature where every instance of a species is truly an individual. Certainly,
UbiComp will introduce greater variety, but it is unrealistic to believe that
the devices will not share a common operating system over a great diversity
of devices. Economics alone will dictate this.
In conclusion, UbiComp will introduce a richer diversity of devices, but
this is unlikely to be enough to serve as a protection mechanism alone.
Current computer systems vary from being connected for minutes at a time to
being connected continually. Many UbiComp devices will only be able to connect
when they are activated, for a short period of time, by some event. Either a hub
polls such devices on a regular basis, providing them power and the opportunity
to respond, making the devices dependent on this ‘hub’, or they are activated
by some external or internal event (e.g., a floor tile is stepped on, generating
power with a piezo-electric device) and broadcast their status.
Therefore, the topology in UbiComp will be rich in ad-hoc hierarchies of
dependent systems, and most nodes in the network will not be directly reachable.
This, and the huge address space, will change how malware
(self-)deployment occurs. Currently, the strategy of many worms is to target
random addresses in the entire address space (e.g., the CodeRed II worm). It
is highly unlikely that this strategy will achieve the critical mass needed for
an epidemic in such a network [12], so more context-aware strategies will be needed.
Manual deployment of trojans will probably not be feasible at all because
the devices may be hard to reach. However, there is no reason to believe that
the malware writers will not adapt and write specialized installation programs.
What a less homogeneous topology will give us is better opportunities
to detect malware by its spread patterns. Topological detection, e.g., as
proposed by Cheung et al. [13], will become more relevant in this environment.
The need for ‘hubs’ also gives us logical points of defense where anti-virus
and IDS software can be deployed in the network, although conversely these
will be popular attack targets.
Many UbiComp devices are mobile by virtue of a user having physical possession
of them and moving from location to location. Apart from being a nightmare
for routing purposes, if routing becomes necessary for the devices to communicate,
this mobility poses a great opportunity for malware to spread.
1.4 Conclusions
The malware problem looms large for system administrators who must maintain
security in our current systems, but the mélange of tools has kept the
problem at bay. UbiComp is a game changer, though, and could disrupt this
fragile balance. On the one hand, there are opportunities for better defenses in
UbiComp, but at the same time, the problem will be many orders of magnitude
larger and more complex. Since the UbiComp world will not be upon us overnight,
but will be a long process with many long-lived components, it is vital that
security is designed into every part of the standards and the devices right
from the start.
13 http://www.zeroconf.org/
References
1. Klaus Brunnstein. From antivirus to antimalware software and beyond:
Another approach to the protection of customers from dysfunctional system
behaviour. In 22nd National Information Systems Security Conference, 1999.
2. Fred Cohen. Computer Viruses. PhD thesis, University of Southern California,
1985.
3. Mike Frantzen and Mike Shuey. StackGhost: Hardware facilitated stack protec-
tion. In Proceedings of the 2001 USENIX Security Symposium, August 2001.
4. Tzi-cker Chiueh and Fu-Hau Hsu. RAD: A compile-time solution to buffer over-
flow attacks. In Proceedings of the 21st International Conference on Distributed
Computing Systems (ICDCS-01), pages 409–420, Los Alamitos, CA, USA, April
2001. IEEE Computer Society.
5. Wietse Venema. TCP WRAPPER: Network monitoring, access control and
booby traps. In Proceedings of the 3rd UNIX Security Symposium, pages 85–92,
Berkeley, CA, USA, September 1992. USENIX Association.
6. Dean Povey. Enforcing well-formed and partially-formed transactions for UNIX.
In Proceedings of the 8th USENIX Security Symposium. USENIX Association,
August 1999.
7. Paul A. Karger. Limiting the damage potential of discretionary Trojan horses.
In Proceedings of the 1987 IEEE Symposium on Security and Privacy, pages
32–37, April 1987.
8. D. Bell and L. LaPadula. The Bell-LaPadula model. Journal of Computer
Security, 4:239–263, 1996.
9. B. Le Charlier, A. Mounji, and M. Swimmer. Dynamic detection and classifica-
tion of computer viruses using general behaviour patterns. In Proceedings of the
Fifth International Virus Bulletin Conference, Boston, MA, USA, 1995. Virus
Bulletin, Ltd.
10. Melanie R. Rieback, Bruno Crispo, and Andrew S. Tanenbaum. Is your cat in-
fected with a computer virus? In Fourth Annual IEEE International Conference
on Pervasive Computing and Communications, 2006.
11. Dan Geer, Rebecca Bace, Peter Gutmann, Perry Metzger, Charles P. Pfleeger,
John S. Quarterman, and Bruce Schneier. Cyberinsecurity: The cost of monopoly.
Technical report, CCIA, September 2003.
12. Jeffrey O. Kephart and Steve R. White. Directed-graph epidemiological models
of computer viruses. In Proceedings of the 1991 IEEE Computer Society Sympo-
sium on Research in Security and Privacy, pages 343–359, Oakland, CA, USA,
May 1991.
13. Steven Cheung, Rick Crawford, Mark Dilger, Jeremy Frank, Jim Hoagland, Karl
Levitt, Jeff Rowe, Stuart Staniford-Chen, Raymond Yip, and Dan Zerkle. The
design of GrIDS: A graph-based intrusion detection system. Technical report
CSE-99-2, Department of Computer Science, University of California at Davis,
Davis, CA, January 1999.
14. E. Guttman. Service Location Protocol modifications for IPv6. Internet Request
for Comment RFC 3111, Internet Society, May 2001.