Copyright Information
© 2010 The Tower at Georgia Tech. Office of Student Media, 353 Ferst Drive, Atlanta, GA 30332-0290.
Acknowledgements
Special Thanks
The Tower would not have been possible without the assistance of the following people:
Dr. Manu Platt, Biomedical Engineering, GT/Emory
Dr. Lakshmi Sankar, Aerospace Engineering
Dr. Franca Trubiano, College of Architecture
If you have questions, please e-mail review@gttower.org. For more information, including detailed submission guidelines and samples, visit gttower.org.
Staff
Want to be involved with The Tower behind the scenes? Become a member of the staff! The Tower is always accepting applications for new staff members. Positions in business, production, review, and web development are available. Visit gttower.org or email editor@gttower.org for more information on available staff positions.
Table of Contents

Perspectives
Value Sensitive Programming Language Design
Nicholas Marquez

Articles
Network Forensics Analysis Using Piecewise Polynomials
Sean Marcus Sanders

Value Sensitive Programming Language Design
Nicholas Marquez
Advisor:
Charles L. Isbell
School of Computer Science
Georgia Institute of Technology
Spring 2010: The Tower
Introduction

A programming language is a user interface. In designing a system's user interface, it is not controversial to assert that thoughtful consideration of the system's users is paramount. Though there is a large body of research from the Human-Computer Interaction (HCI) community studying just how best to consider a system's users in the design of its interface, there is little history of applying any of these methodologies from HCI to the design of general-purpose programming languages. Ken Arnold has argued that, since programmers are human, programming language design should employ techniques from HCI (Arnold 2005). While there has been some work in applying HCI to the design of languages for non-programmers, for example, for children's programming environments (Pane et al. 2002), general-purpose programming languages have not suffered much from a lack of HCI methodology in their design because programming languages have been designed by programmers, for programmers. In other words, programming language design has not had much need for disciplined HCI methodology because programming languages have been designed by programming language users themselves. But what happens when programmers design languages for non-programmers? How does the language designer know which design decisions to take? We claim that these questions can and should be answered with the help of a disciplined application of design methodologies developed in the HCI community.

We are designing a language for non-programmers who use computational models in the conduct of their non-programming work, in particular social scientists and game developers who write intelligent agent-based programs. Agent-based programming has, as one of its primary abstractions, "agents" who interact with each other and their environment asynchronously, maintain their own state, and are generally analogous to individual beings within an environment. In designing this language, we believe that working closely with our intended users is crucial to the development of tools that will meet their needs and be adopted. To guide our design interactions with our users we are applying the Value Sensitive Design (VSD) methodology from HCI (Friedman et al. 2006; Le Dantec et al. 2009). In this paper we give a short description of VSD and discuss how it may be applied to the design of our programming language. This work is currently at an early stage, and our understanding and application of VSD is evolving. Nevertheless, we believe that the application of HCI methodologies in general, and VSD in particular, will be extremely valuable in the development of languages and software tools that are intended for non-programmers, that is, professionals for whom programming is an important activity but not the primary focus of their work.

In the next section we provide a brief description of Value Sensitive Design, then we propose a way of applying VSD to programming language design and conclude with a discussion of how we are applying it in our own language design project.

Value Sensitive Design

In this section we briefly describe VSD as detailed in Friedman et al. (2006). We begin with their definition of VSD:

Value Sensitive Design is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner.

In this context, a value is something that a person considers important in life. Values cover a broad spectrum from the lofty to the mundane, encompassing things like accountability, awareness, privacy, and aesthetics –
anything a user considers important. While VSD uses a broader meaning of value than that used in economics, it is important to rank values so that conflicts can be resolved when competing values suggest different design choices.

VSD employs an iterative, interleaved tripartite methodology that includes conceptual, empirical, and technical investigations. In the following sections we describe each of these types of investigations.

Conceptual Investigations

We think of conceptual investigations as analogous to domain modeling. In conceptual investigations we specify the components of particular values so that they can be analyzed precisely. We specify what a value means in terms useful to a programming language designer. Conceptual investigations may be done before significant interaction with the target audience takes place. As is characteristic of VSD, however, conceptualizations are revisited and augmented as the design process proceeds in an iterative and integrative fashion.

An important additional part of conceptual investigation is stakeholder identification. Direct stakeholders are straightforward – they are the people who will be writing code in your language using the tools you provide. However, it is important to consider indirect stakeholders as well. For example, working programs may need to be delivered by your direct stakeholders to third parties – these third parties constitute indirect stakeholders. The characteristics of indirect stakeholders will implicate values that must be supported in the design of your language. If the indirect stakeholders are technically unsophisticated, for example, then the language must support the delivery of code that is easy to install and run.

Empirical Investigations

Empirical investigations include direct observations of the human users in the context in which the technology to be developed will be situated. In keeping with the iterative and integrative nature of VSD, empirical investigations will refine and add to the conceptualizations specified during conceptual investigations.

Because empirical investigations involve the observation and analysis of human activity, a broad range of techniques from the social sciences may be applied. Of all the aspects of VSD, empirical investigation is perhaps the most foreign to the typical technically focused programming language designer. However, as computational tools and methods reach deeper into realms not previously considered, we believe empirical investigations are crucial to making these new applications successful.

Technical Investigations

Technical investigations interleave with conceptual and empirical investigations in two important ways. First, technical investigations discover the ways in which users' existing technology supports or hinders the values of the users. While these investigations are similar to empirical investigations, they are focused on technological artifacts rather than humans. The second important mode of technical investigations is proactive in nature: determining how systems may be designed to support the values identified in conceptual investigations.

Applying VSD to Programming Language Design

In this section we discuss the ways in which we are applying VSD to the design of a programming language. First we discuss the language itself and the target audience of our language.

AFABL: A Friendly Adaptive Behavior Language

AFABL (which is the evolution of Adaptive Behavior Language) integrates reinforcement learning into the programming language itself to enable a paradigm that
tency means that when users encounter a new language construct for the first time, they should be able to apply their knowledge of analogous constructs they already know. External consistency means that AFABL should use programming constructs that users already know from other languages and require users to learn as few new language constructs as possible.

• Power. A language is sufficiently powerful if it allows its programmers to reasonably and easily write all the programs they want to write in the language. If a language makes it hard to write certain types of programs, then those programs will usually not be written, thus limiting the scope of use of the language. Naturally, power trades off with simplicity, but simplicity at the expense of essential power is unacceptable to our target audience. In the design implications section below we discuss strategies for dealing with the power versus simplicity issue.

• Participation. Our user communities are eager to contribute to the design of AFABL and to its documentation and the development of best practices. We welcome this participation and believe that it will positively impact adoption, both with the users with whom we are already working and new users that will be influenced by our early adopters. VSD directly supports and encourages this participation in the design process.

• Growth. The language we develop and the theoretical models of intelligent and believable agents that we employ today may not be the last words. It is important that AFABL be able to accommodate new models and applications.

• Modeling Support. A modeling tool imposes a structure on the way an agent modeler thinks about agents. AFABL should do so in a helpful way, if possible, but certainly not hinder particular ways of thinking about agents.

Empirical Investigations of Agent Modelers

Solving a problem requires an understanding of the problem. The problem in our case is the experience of agent modelers in using the computational tools at their disposal. To understand the problems agent modelers face and their desiderata for computational tools, we are joining their teams and using their existing tools alongside them. In doing so we hope to gain an appreciation for the goals of their work, the expertise they bring to the task, and the difficulties they have in using existing tools to accomplish their goals. We hope to gain a level of empathy that will help us develop a language and tools that will meet their needs very well.

Technical Investigations of Agent Modelers

What do they already use? How do their existing tools support or hinder their values? What technology choices do we have at our disposal to support their values? These are the kinds of questions we address in technical investigations. In our case, there is a rich tapestry of software tools already in use by our users. These tools include virtual worlds — simulation platforms and game engines — and editing tools for the programs they currently write. Some of these tools are essential to their work and some may be replaced with tools we develop. One overriding value that stems from our users' existing tool base is interoperability. Any language or tool we develop must support interoperability with their essential tools.

Implications of Values on Programming Language Design

We are already familiar with many values supported by the general-purpose programming languages we use. C supports values like efficiency and low-level access. Lisp supports values like extensibility and expressiveness. Python supports simplicity. In this section we discuss how some of the values we identified above may impact the design of our language.
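To make the reinforcement-learning-in-the-language idea concrete, the sketch below shows, in ordinary Python rather than AFABL, what an adaptive behavior might look like when action selection is learned from reward signals instead of hand-coded. Everything here is an illustrative assumption: the class name, the reward interface, and the toy "enemy_near" state are invented for this example and are not AFABL's actual syntax or semantics.

```python
import random

class AdaptiveBehavior:
    """Hypothetical sketch: a behavior whose action choice is learned
    by a simple tabular update rule rather than hand-coded logic."""

    def __init__(self, actions, alpha=0.5, epsilon=0.1):
        self.actions = list(actions)   # actions available to the agent
        self.alpha = alpha             # learning rate
        self.epsilon = epsilon         # exploration probability
        self.q = {}                    # (state, action) -> estimated value

    def choose(self, state):
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def reward(self, state, action, r):
        # Move the estimate for (state, action) toward the observed reward.
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (r - old)

random.seed(0)  # for reproducibility of this toy run
behavior = AdaptiveBehavior(["approach", "flee"])
for _ in range(100):
    action = behavior.choose("enemy_near")
    # A hypothetical environment that rewards fleeing when an enemy is near.
    behavior.reward("enemy_near", action, 1.0 if action == "flee" else -1.0)
```

After this loop the behavior has learned, from reward alone, to prefer "flee" in the "enemy_near" state. The point of a language like AFABL is that the modeler would write only the states, actions, and rewards; the learning machinery above would be supplied by the language itself.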
guage into the hands of users early in its development. That way users can experiment with the language and provide feedback throughout its development. Put another way, AFABL will be developed with agile software development practices.

• User-accessible documentation system. Many languages already provide programmers with the means to automatically generate documentation from source code. Many language communities also provide user-accessible documentation systems, such as wikis and web forums, whereby users can share their knowledge and contribute directly to the documentation base of the language. We will employ similar mechanisms for AFABL.

Conclusion

In this paper we have taken the position that design methodologies from the HCI research community can be of great benefit in the development of programming languages. Among the design processes we are employing, we have singled out Value Sensitive Design and described how it can be used in the design of programming languages and tools for a non-traditional population of programmers, in our case agent modelers such as social scientists and game designers.

Acknowledgements

I wish to thank David Roberts for suggesting the use of Value Sensitive Design, and Doug Bodner and Mark Riedl for allowing us to participate in their projects and their help in designing AFABL.

References

Pane, J. F., Myers, B. A., & Miller, L. B. (2002). Using HCI techniques to design a more usable programming system. In Symposium on Empirical Studies of Programmers (ESP02), Proceedings of the 2002 IEEE Symposia on Human Centric Computing Languages and Environments (HCC 2002), Arlington, VA, September 2002.
Synthetic Biology: Approaches and Applications of Engineered Biology
Robert Louis Fee
School of Chemistry and Biochemistry
Georgia Institute of Technology
Advisor:
Friedrich C. Simmel
School of Physics
Technical University of Munich
The Rise of Synthetic Biology

Several remarkable hurdles in the life sciences have been cleared during the last half of the 20th century, from the discovery of the structure of DNA in 1953, to the deciphering of the genetic code, the development of recombinant DNA techniques, and the mapping of the human genome. Scientists have routinely tinkered with genes for the last 30 years, even inserting a cold-water fish gene into wheat to improve weather resistance; thus, synthetic biology is by no means a new science. Synthetic biology is a means to harness the biosynthetic machinery of organisms on the level of an entire genome to make organisms do things in ways nature has never done before.

Synthetic biology, despite its long history, is still in the early stages of development. The first international conference devoted to the field was held at M.I.T. in June 2004. The leaders sought to bring together "researchers who are working to design and build biological parts, devices, and integrated biological systems; develop technologies that enable such work; and place this scientific and engineering research within its current and future social context" (Synthetic Biology 1.0, 2004). The field is growing quickly, as evidenced by the rapidly increasing number of genetic discoveries, the exploding number of research teams exploring the field, and the funding from government and industrial sources.

Akin to the descriptive-to-synthetic transformation of chemistry in the 1900s, biological synthesis forces scientists to pursue a "man-on-the-moon" goal that demands they discard erroneous theories and compels them to solve problems not encountered in observation. Data contradicting a theory can sometimes be excluded for the sake of argument, but doing the same while building a lunar shuttle would be disastrous. Synthetic biology comes at an important time; by creating analogous "man-on-the-moon" engineering goals in the form of synthetic bioorganisms, it is similarly driving scientists towards a deeper level of understanding of biology.

Applications of Engineered Organisms

It is expected that advances in synthetic biology will create important advances in applications too diverse and numerous to imagine. Applications of bioengineered microorganisms include detecting toxins in air and water, breaking down pollutants and dangerous chemicals, producing pharmaceuticals, repairing defective genes, targeting tumors, and more. In 2009, genomics pioneer Dr. Craig Venter secured a $600 million grant from ExxonMobil to develop hydrocarbon-producing microorganisms as an alternative to crude oil (Borrell, 2009).

Scientists are engineering microbes to perform complex multi-step syntheses of natural products. Jay Keasling, a professor at the University of California, Berkeley, recently demonstrated genetically engineered yeast cells (Saccharomyces cerevisiae) that manufacture the immediate precursor of artemisinin, an antimalarial drug widely used in developing countries (Ro et al., 2006). Previously, this compound was chemically extracted from the sweet wormwood herb. Since the extraction is expensive and the wormwood herb is prone to drought, the availability of the drug is reduced in poorer countries. Once the engineered yeast cells were fine-tuned to produce high amounts of the artemisinin precursor, the compound was made quickly and cheaply. This same method could be applied to the mass production of other drugs currently limited by natural sources, such as the anti-HIV drug prostratin and the anti-cancer drug taxol (Tucker & Zilinskas, 2006).

The most far-sighted effort in synthetic biology is the drive towards standardized biological parts and circuits. Just as other engineering disciplines rely on parts that are well-described and universally used — like transistors
and resistors — biology needs a toolbox of standardized genetic parts with characterized performance. The Registry of Standard Biological Parts comprises many short pieces of DNA that encode multiple functional genetic elements called "BioBricks" (Registry of Standard Biological Parts). In 2008, the Registry contained over 2000 basic parts comprising sensors, input/output devices, regulatory operators, and composite parts of varying complexity (Greenwald, 2005). The M.I.T. group made the registry free and public (http://parts.mit.edu/) and has invited researchers to contribute to the growing library.

Figure 1. The Registry of Standard Biological Parts. This registry offers free access to basic biological functions that are used to create new biological systems. Pictured is a standard data sheet on a gene regulating transcription, with normal performance and compatibility measurements, plus an extra biological concern: system performance during evolution and cell reproduction. The registry is part of a conscious effort to standardize gene parts in the hope of creating interchangeable components with well-characterized functions when implanted in cells. The project is open source; anybody can freely use and add information to the Registry.

Some genetic parts code for a promoter gene that begins the transcription of DNA into mRNA, a repressor that codes for a protein that blocks the transcription of another gene, a reporter gene that encodes a readout signal, a terminator sequence that halts RNA transcription, and a ribosome binding site that begins protein synthesis. The goal is to develop a discipline-wide standard and source for creating, testing, and combining BioBricks into increasingly complicated functions while reducing unintended interactions.

To date, BioBricks have been assembled into a few simple genetic circuits (McMillen & Collins, 2004). One creates a film of bacteria that is sensitive to light so it can capture images (Levskaya et al., 2005). Another operates as a type of battery, producing a weak electric current. BioBricks have been combined into logic gate devices that execute Boolean operations, such as AND, NOT, OR, NAND, and NOR. An AND operator creates an output signal when it gets a biochemical signal from both inputs; an OR operator generates an output if it gets a signal from either input; and a NOT operator changes a weak signal into a strong one, and vice versa. This would allow cells to be small programmable machines whose operations can be controlled through light or various chemical signals (Atkinson et al., 2003).

Despite the enormous progress seen in the last five years and some highly publicized and heavily funded feats, the systematic and widespread design of biological systems remains a formidable task.

Current Challenges

Standardization

Standards underlie engineering disciplines: measurements, gasoline formulation, machining parts, and so on. Certain biotechnology standards have taken hold in cases such as protein crystallography and enzyme nomenclature, but engineered biology lacks a universal standard for most classes of functions and system characterization. One research group's genetic toggle switch may work in a certain strain of Escherichia coli in a certain type of broth, while another's oscillatory function may work in a different strain when cells are grown in supplemented minimal media (Endy, 2005). It is unclear whether the two biological functions can be combined despite the different operating parameters. The Registry of Standard Biological Parts and new Biofab facilities have recently emerged to begin addressing this issue, and a growing consensus is emerging on the best way to reliably build and describe the function of new genetic components.

Abstraction

Drawing again from other engineering disciplines, and specifically from the semiconductor industry, synthetic biology must manage the enormous complexity of natural biological systems by abstraction hierarchies. After all, writing "code" with DNA letters is comparable to creating operating systems by inputting 1's and 0's. Levels could be defined as DNA (genetic material), Parts (basic functions, such as a terminating sequence for an action), Devices (combinations of parts), and Systems (combinations of devices). Scientists should be able to work independently at each hierarchy level, so that
device-level workers would not need to know anything about phosphoramidite chemistry, genetic oscillators, and the like (Canton, 2005).

Engineered Simplicity and Evolution

The rapid progress made by mechanical engineering in the last century was made possible by creating easily understandable machines. Engineered simplicity is helpful not only for repairs but for future upgrades and redesigns. While a modern automobile may seem complex, its level of complexity pales in comparison to a living cell, which has far more interconnected pathways and interactions. Cells evolved in response to a multitude of evolutionary pressures, and their mechanisms were developed to be efficient, not necessarily easy to understand (Alon, 2003). A related problem is that other engineered systems don't evolve. Organisms such as E. coli reproduce and acquire genetic mutations within hours. While this offers possibilities to the biological engineer (for instance, human-directed evolution for fine-tuning organism behavior), it also increases the complexity of designing and predicting the function of these new genetic systems (Haseltine, 2007).

Risks Associated with Biological Engineering

Accidental Release

Researchers first raised concerns at the Asilomar Conference in California during the summer of 1975 and concluded that the genetic experiments of the day carried minimal risk. The past 30 years of experience with genetically manipulated crops have demonstrated that engineered organisms are less fit than their wild counterparts, and they either die or eject their new genes without constant assistance from humans. However, researchers concluded that the abilities to replicate and evolve required special precautions. It was recommended that all researchers work with bacterial strains that are specially designed to be metabolically deficient so they cannot survive in the wild. Still, some have suggested that an incomplete understanding of, and emergent properties arising from, unforeseen interactions between new genes could be problematic. Such dangers have given rise to fears of a dystopian takeover by super-rugged plants that overwhelm local ecosystems.

Bioterrorism

Research in synthetic biology may generate "dual-use" findings that could enable bioterrorists to develop new biological warfare tools that are easier to obtain and far more lethal than today's military bioweapons. The most commonly cited examples are the resurrection synthesis of the 1918 pandemic influenza strain by CDC researchers (Tumpey et al., 2005) and the possibility of recreating smallpox from easily ordered DNA (Venter, 2005). There is a growing consensus that not all sequences should be made publicly available, but the fact remains that such powerful recombinant DNA technologies could be used for harm.

Attempts to limit access to DNA synthesis technology would be counterproductive, and a sensible approach might include some selective regulation while allowing research to continue. Now, as SARS, avian influenza, and other infectious diseases emerge, these recombinant DNA techniques enhance our ability to manage such threats compared to what was possible just 30 years ago. The revolution in synthetic biology is nothing less than a push on all fronts of biology, whether that impacts environmental cleanup, chemical synthesis using bacteria, or human health.

Conclusion

At present, synthetic biology's myriad implications can be glimpsed only dimly. The field clearly has the potential to bring about epochal changes in medicine, agri-
Figure 2. Abstraction Hierarchy. Abstraction levels are important for managing complexity and are used extensively in engineering disciplines. As biological parts and functions become increasingly complex, writing 'code' with individual nucleotides is rapidly becoming more difficult. Currently, researchers spend considerable time learning the intricacies of every step of the process, and stratification would allow for specialization and faster development. Ideally, individuals could work at individual levels: one could focus on part design without worrying about how genetic oscillators work, while others could string together parts to construct whole systems for possible biosensor applications. Image originally made by Drew Endy.
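The DNA/Parts/Devices/Systems hierarchy described above can be pictured as nested data types, where each level hides the one below it. The following Python sketch is purely illustrative: the class names, fields, and the short DNA strings are invented for this example and are not part of any BioBricks tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Part:
    """A basic genetic function, e.g. a promoter or terminator."""
    name: str
    sequence: str  # the underlying DNA level

@dataclass
class Device:
    """A combination of parts implementing one function."""
    name: str
    parts: List[Part]

    def sequence(self) -> str:
        # The device's DNA is the concatenation of its parts' DNA;
        # a device-level designer never has to write it by hand.
        return "".join(p.sequence for p in self.parts)

@dataclass
class System:
    """A combination of devices."""
    name: str
    devices: List[Device] = field(default_factory=list)

# A system designer composes devices without touching raw DNA.
# The sequences here are toy placeholders, not real BioBrick parts.
reporter = Device("reporter", [Part("promoter", "TTGACA"),
                               Part("gfp", "ATGAGT")])
sensor_system = System("toxin_sensor", [reporter])
print(reporter.sequence())  # → TTGACAATGAGT
```

The design point mirrors the text: someone working at the System level manipulates only `Device` objects, while the DNA level remains recoverable when it is needed.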
References

Synthetic Biology 1.0: The First International Meeting on Synthetic Biology. (2004). Massachusetts Institute of Technology.

Borrell, B. (2009, July 14). Clean dreams or pond scum? ExxonMobil and Craig Venter team up in quest for algae-based biofuels. Scientific American. Retrieved from http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=clean-dreams-or-pond-scum-exxonmobi-2009-07-14

Ro, D.-K., Paradise, E. M., Ouellet, M., Fisher, K. J., Newman, K. L., Ndungu, J. M., . . . Keasling, J. D. (2006). Production of the antimalarial drug precursor artemisinic acid in engineered yeast. Nature, 440(7086), 940-943. doi:10.1038/nature04640

Tucker, J. B., & Zilinskas, R. A. (2006, Spring). The promise and perils of synthetic biology. The New Atlantis, 12, 25-45.

Registry of Standard Biological Parts. Retrieved from http://parts.mit.edu/

Morton, O. (2005, January). How a BioBrick works. Wired, 13(01). Retrieved from http://www.wired.com/wired/archive/13.01/mit.html?pg=5

Hasty, J., McMillen, D., & Collins, J. J. (2002). Engineered gene circuits. Nature, 420(6912), 224-230. doi:10.1038/nature01257

Levskaya, A., Chevalier, A. A., Tabor, J. J., Simpson, Z. B., Lavery, L. A., Levy, M., . . . Voigt, C. A. (2005). Synthetic biology: Engineering Escherichia coli to see light. Nature, 438(7067), 441-442. doi:10.1038/nature04405

Atkinson, M. R., Savageau, M. A., Myers, J. T., & Ninfa, A. J. (2003). Development of genetic circuitry exhibiting toggle switch or oscillatory behavior in Escherichia coli. Cell, 113(5), 597-607.

Endy, D. (2005). Foundations for engineering biology. Nature, 438(7067), 449-453. doi:10.1038/nature04342

Canton, B. (2005). Engineering the interface between cellular chassis and integrated biological systems (Doctoral dissertation). Massachusetts Institute of Technology.

Alon, U. (2003). Biological networks: The tinkerer as an engineer. Science, 301(5641), 1866-1867. doi:10.1126/science.1089072

Haseltine, E. L., & Arnold, F. H. (2007). Synthetic gene circuits: Design with directed evolution. Annual Review of Biophysics and Biomolecular Structure, 36(1), 1-19. doi:10.1146/annurev.biophys.36.040306.132600

Tumpey, T. M., Basler, C. F., Aguilar, P. V., Zeng, H., Solorzano, A., Swayne, D. E., . . . Garcia-Sastre, A. (2005). Characterization of the reconstructed 1918 Spanish influenza pandemic virus. Science, 310(5745), 77-80. doi:10.1126/science.1119392

Venter, J. C. (2005). Gene synthesis technology. Paper presented at the State of the Science, National Science Advisory Board on Biosecurity. Retrieved from http://www.webconferences.com/nihnsabb/july_1_2005.htmll
Network Forensics Analysis Using Piecewise Polynomials
Sean Marcus Sanders
School of Electrical and Computer Engineering
Georgia Institute of Technology
Advisor:
Henry L. Owen
School of Electrical and Computer Engineering
Georgia Institute of Technology
Introduction abnormal activity, which does not necessarily imply ma-
licious traffic. Anomaly detection is more difficult to
Problem implement compared to signature detection because it
Network forensics deals with the capture, recording, must flag traffic as abnormal and discern the intent of
…and analysis of network events to determine the source of security attacks and other network-related problems (Corey, 2002). One must differentiate malicious traffic from normal traffic based on the patterns in the data transfers. Network communication is ubiquitous, and the information transferred over these networks is vulnerable to attackers who may corrupt systems, steal valuable information, and alter content. Network forensics is a critical area of research because, in the digital age, information security is vital. With sensitive information such as social security numbers, credit card information, and government records stored on a network, the potential threat of identity theft, credit fraud, and national security breaches increases. During July of 2009, North Korea was the main suspect behind a campaign of cyber attacks that paralyzed the websites of US and South Korean government agencies, banks, and businesses (Parry, 2009). As many as 10 million Americans a year are victims of identity theft, and it takes anywhere from 3 to 5,840 hours to repair the damage done by this crime (Sorkin, 2009). In order to effectively prosecute network attackers, investigators must first identify the attack and then gather evidence on it.

The process of identifying an attack on a network is known as intrusion detection. The two most popular methods of intrusion detection are signature and anomaly detection (Mahoney, 2008). Signature detection is a technique that compares an archive of known attacks on a network with current network traffic to discern whether or not there is malicious traffic. This technique is reliable on known attacks but is at a great disadvantage on novel attacks. Although this disadvantage exists, signature detection is well understood and widely applied. Anomaly detection, on the other hand, is a technique that identifies network attacks through anomalies in the traffic. Abnormal traffic does not necessarily imply malicious traffic.

Electronic devices such as notebooks and cellular phones communicate by transferring data across the Internet using packets. A packet is an information block that the Internet uses to transfer data. In most cases, the data being transferred across the Internet must be divided into hundreds, even thousands, of packets to be completely transferred. Similar to letters in a postal system, packets have parameters for delivery such as a source address and a destination address. Packets include other parameters such as the amount of data being sent in a packet and a checking parameter to ensure that the data sent was not corrupted. The Internet is modeled as a discrete collection of individual data points because the Internet uses individual packets to transfer data. Discrete processes are difficult to model and analyze as opposed to continuous processes because there is not a definite link between two similar events. For example, the concept of a derivative in calculus can only give a logical result if the data is continuous. In many cases, experimental results are given as discrete values. Scientists, engineers, and mathematicians sometimes use the least squares approximation to give a continuous model of the data given. Continuous models that represent discrete data are often preferred because they can be used for different types of analysis such as interpolation and extrapolation.

Many forensic investigators use graphs and statistical methods, such as clustering, to model network traffic (Thonnard, 2008). These graphs and statistics help classify complex networks into patterns. These patterns are typically stored and represented in a discrete fashion because networks transfer data in a discrete manner.
Article: Sanders
These patterns are used in combination with signature and anomaly detection techniques to identify network attacks (Shah, 2006). In many cases these network patterns are archived and kept for extended periods of time. This storage of packets is needed to compare past network traffic with current network traffic in order to effectively classify network events. Despite this necessity, the storage of packet captures is not desired because packet captures use a significant amount of memory storage, a limited and costly resource. After a variable amount of time, the archived network data is deleted to free memory for future network patterns to be archived (Haugdahl, 2007). Detailed records of network patterns can be stored for longer periods of time by increasing the amount of free memory or decreasing the amount of archived traffic.

A continuous polynomial representation of a network is preferred to a discrete representation because discrete representations are limited in the types of analysis and statistics that can be performed. Polynomial approximations of data have limitations as well, such as failing to represent exact behavior, which can be vital depending on the system being modeled. In order to effectively differentiate traffic, a continuous polynomial approximation must be robust enough to reveal sufficient detail about network traffic. Polynomial representations of data should require less memory storage than discrete representations. For instance, the polynomial y = x² could represent a million data points yet take up little memory. This observation is important because, in the area of network forensics, memory storage space is a critical factor.

Related Work
Shah et al. (2008) applied dynamic modeling techniques to detect intrusions using anomaly detection. This particular form of modeling was only used for identifying intrusions, not for analyzing them or conducting a forensic investigation. Ilow et al. (2000) and Wang et al. (2007) both used modeling techniques to try to predict network traffic. Wang et al. took a polynomial approach that utilized Newton's Forward Interpolation method to predict and model the behavior of network traffic. This technique used interpolation polynomials of arbitrary order to approximate the dynamic behavior of previous network traffic. Wang et al.'s technique is useful for modeling general network behavior, but using the polynomial approach for intrusion analysis is another issue. Wang et al.'s technique proved that general network behavior can be predicted and modeled using polynomials, but did not prove whether individual network events can be distinguished and categorized through the use of polynomials.

Proposed Solution
Network data is discrete, scattered, and difficult to approximate; however, approximation and modeling techniques are necessary to define networks and to perform important statistics on the network data. Such statistics include the average amount of data each packet carries, the average rate at which packets arrive at a computer, and how many packets are lost before delivery. These values are used to adequately classify network traffic as normal or malicious. When a system is approximated as a polynomial, it is faster to perform basic mathematical operations and statistics such as derivatives, integrals, standard deviation, and variance. The ease of computation of a parameter allows for a more efficient analysis of the data. Networks send an enormous amount of data each day, and precious time is required to process this data.

While the polynomial approximation is fairly accurate, forming a long, complex approximated polynomial is not practical for the purposes of network forensics, since a network will seldom have identical behavior in each session. Assuming each of the five segments of points shown in Figure 1 represents a network event (i.e., web sites visited), investigators can approximate and classify …
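The y = x² storage observation can be made concrete with a tiny sketch (our own, with illustrative numbers): a discrete representation stores one value per sample, while the polynomial representation stores only its coefficients and can regenerate any sample on demand.

```python
# Illustrative sketch (not from the paper): a polynomial summarizes
# arbitrarily many sampled points with a fixed handful of coefficients.

NUM_POINTS = 1_000_000
coeffs = [0.0, 0.0, 1.0]  # y = x**2 stored as [a0, a1, a2]

def reconstruct(coeffs, x):
    """Regenerate the sample at x from the stored coefficients."""
    return sum(a * x ** i for i, a in enumerate(coeffs))

# A discrete representation stores one value per sample...
discrete_values_stored = NUM_POINTS
# ...while the polynomial representation stores only its coefficients.
polynomial_values_stored = len(coeffs)
```

Three stored numbers stand in for a million samples, which is why the polynomial form matters when archival space is the limiting resource.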
…to modeling clusters of events. The modeling of event clusters is not desired because it will increase the difficulty of differentiating network traffic based on a single event. Such a scenario will result in a malicious event being clustered with a normal event, which could lead to a failure to identify an attack. A piecewise polynomial approximation should effectively classify every network event that has transpired using a unique piecewise approximation. The piecewise polynomial approximation of the data shown in Figure 1 is shown in Figure 3.

Figure 3. Piecewise polynomial plot of the data represented in Figure 1.

It is clear that while both polynomial approximations in Figure 2 and Figure 3 can model the data represented in Figure 1, the piecewise polynomial (Figure 3) is more accurate and robust than a single polynomial. A single polynomial should not be used to model more than one network event, because it will not be able to represent the individual network events of which it is composed. This example is meant to emphasize that if a sequence of 100 network events were defined using one single polynomial, it would be difficult to identify which network events behaved in a certain way. A piecewise polynomial model addresses this issue by modeling each network event as an individual polynomial. If the order of the network events (segments) were changed, the individual polynomials would simply occur at different time intervals, but each segment would remain the same. In other words, in a piecewise polynomial approximation each segment is represented by a distinct polynomial.

The basic concept is that while the network will not behave the same all the time, it will behave the same in certain pieces. If network traffic can be quantified using piecewise polynomials, investigators can apply signature and anomaly detection techniques to identify and investigate events from a forensics perspective. Piecewise polynomial approximations will be effective because they should approximate the behavior pattern of a network with enough resolution to differentiate network traffic.

The primary goal is to test whether or not a piecewise polynomial approach can approximate network data with enough precision to distinguish network traffic. If there are no distinct differences in piecewise-polynomial-approximated network traffic, then this approach will not be valid for this application. Conversely, if a piecewise polynomial approximation can effectively differentiate network traffic, then it can be applied to intrusion analysis, because intrusion analysis is primarily focused on classifying traffic. This application is beneficial because polynomial-represented data should occupy less memory storage than discrete data, and polynomial data have fewer limitations on the types of analysis that can be performed.

Methodology
Tools and Algorithms
Wireshark was used to capture network traffic in packet …
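The segment-per-event idea described above can be sketched as follows (our own minimal illustration, fitting a line per segment rather than the arbitrary orders used in the study; the data and breakpoints are invented):

```python
# Sketch (our own construction): fit one small polynomial per network
# event (segment) instead of one global curve over all events.

def fit_line(points):
    """Closed-form least-squares line y = a + b*x for a list of (x, y)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def fit_piecewise(points, breakpoints):
    """Fit an independent line to each segment delimited by breakpoints.
    Returns a list of (x_start, x_end, (a, b)) pieces."""
    pieces = []
    for lo, hi in zip(breakpoints, breakpoints[1:]):
        seg = [(x, y) for x, y in points if lo <= x < hi]
        pieces.append((lo, hi, fit_line(seg)))
    return pieces

def evaluate(pieces, x):
    """Evaluate the piecewise model at x using the piece that covers x."""
    for lo, hi, (a, b) in pieces:
        if lo <= x < hi:
            return a + b * x
    raise ValueError("x outside the modeled range")

# Two synthetic "events": flat idle traffic, then a linear ramp.
data = [(x, 5.0) for x in range(0, 10)] + [(x, 2.0 * x) for x in range(10, 20)]
pieces = fit_piecewise(data, [0, 10, 20])
```

Because each event gets its own coefficients, reordering events only shifts which interval a piece covers; the per-event signature itself is unchanged, which is the property the signature/anomaly comparison relies on.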
…points used to define the polynomial. The number of data points must be at least one more than the desired order to yield an accurate polynomial approximation. In most cases, the higher the order of the polynomial, the more accurate the approximation is. On the other hand, a polynomial of too high an order may yield unrealistic results. Thus, finding a polynomial order that yields both accurate and realistic results is important.

Experiments
Closed/Controlled Network Behavior
The first step to determine whether a polynomial can accurately approximate and differentiate network behavior is to analyze the behavior of a closed/controlled network. As opposed to open networks, closed networks are not connected to the Internet. The designed closed network was composed of two Macbooks, with four virtual machines operating on the separate Macbooks. Figure 4 gives a visual representation of the designed closed network.

A virtual machine is a software implementation of a machine that executes programs like a physical machine. Virtual machines operate on a separate partition of a computer and utilize their own operating system. Due to the hardware limitations of physical machines, virtual machines and physical machines do not execute commands simultaneously. From a networking perspective the execution of commands is not a problem because, once connected, networks utilize protocols to send and sometimes regulate the flow of network traffic. In other words, the network does not know that there is a virtual machine operating on a physical machine and thus supports multiple simultaneous network connections.

Figure 4. Visual representation of the designed closed network with virtual machines (VMs circled).

Packet captures were performed using Wireshark on the Macbook operating with three virtual machines on the ethernet interface. A variety of packet captures were made to compare and contrast network behavior using web pages. If the resulting piecewise polynomials could effectively compare and contrast network traffic based on various behaviors, then the polynomial approximation would be considered a success. The descriptions of these packet capture files are listed below.

• Idleclosed.pcap— a .pcap file that captures the random noise present when the network is idle.
• Icmpclose.pcap— a .pcap file composed primarily of ping commands from one Macbook to the other. Ping commands are used to test whether a particular computer is reachable across a network. This test is performed by sending packets of the same length to a computer and waiting to receive a reply from that computer.
• Httpclose.pcap— a .pcap file that includes a brief ping command sent from one Macbook to the other, but is dominated by HTTP traffic (basic website traffic). This file also includes a period of idle behavior where the network is at rest.
• Packet Capture A— a .pcap file that contains the network data for visiting a specific site hosted on one Macbook.
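A hypothetical sketch of the first processing step for such captures (our own code; it does not parse real .pcap files, and the packet tuples are invented): collapsing captured packets into a per-interval byte-count series, the discrete signal to which the polynomials are then fit.

```python
# Hypothetical sketch (ours, not the paper's tooling): reduce captured
# packets to a per-interval byte-count time series.

def bytes_per_interval(packets, interval=1.0):
    """packets: iterable of (timestamp_seconds, length_bytes) tuples.
    Returns a list of total bytes observed in each consecutive interval."""
    if not packets:
        return []
    nbins = int(max(t for t, _ in packets) // interval) + 1
    bins = [0] * nbins
    for t, length in packets:
        bins[int(t // interval)] += length
    return bins

# Invented capture: idle-sized noise, then a burst of larger packets.
capture = [(0.2, 60), (0.9, 60), (1.1, 1500), (1.4, 1500), (1.8, 1500)]
series = bytes_per_interval(capture)
```

Each of the .pcap files described above would yield one such series, and each behavioral segment of the series (idle, ping, HTTP) is what a piecewise polynomial then models.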
…relationship of the first segments of data is that they are constant around the same value, while the second segments of the data are both decreasing, concave down, and share similar values.

Figure 6. Single polynomial comparison plots of similar out-of-order traffic.

Significance of Order
Internet.pcap was plotted using zero, second, and fifth orders to discern the effect order has on the approximation of a polynomial. Figure 8 shows that the higher the order of the polynomial…
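The effect of order can be sketched numerically (our own stdlib-only illustration, not the paper's analysis of Internet.pcap): on curved data, the best order-0 fit (a constant) leaves a larger residual error than the best order-1 fit, mirroring the trend that higher orders track the data more closely.

```python
# Illustration (ours): residual error shrinks as polynomial order grows.

def sse_constant(ys):
    """Residual sum of squares of the best order-0 fit (the mean)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def sse_line(xs, ys):
    """Residual sum of squares of the best order-1 least-squares line."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

xs = list(range(8))
ys = [x ** 2 for x in xs]  # curved data that a constant cannot follow
```

The same logic explains the other half of the trade-off: pushing the order far beyond what the data supports starts fitting noise, which is why a balance of accuracy and realism is needed.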
The open network single polynomial approximation was unable to differentiate and link network events, as shown in Figure 6. The plot given in Figure 6 shows two similar curves of different-ordered network traffic. Although this result is not desired, it was expected that a single polynomial approximation would not be able to classify out-of-order traffic effectively. Conversely, Figure 7 shows that a piecewise polynomial approximation was able to distinguish each section of the network traffic that was captured. These results show that a piecewise polynomial approximation can be used to classify and differentiate network traffic.

Memory storage is also of primary concern when modeling network data. The Internet packet capture shows that the discrete representation of the data utilized 72 KB of memory storage, while the polynomial representation utilized 12 KB. This result shows that polynomial representations utilize roughly six times less memory storage than discrete representations. This size difference indicates that storing network traffic as polynomials instead of as a collection of individual points significantly saves memory. This outcome is important in network forensics because network events can be archived for a longer amount of time than before. This extra storage allows for more extensive and detailed investigations.

Conclusion
Networks can be approximated using piecewise polynomials with enough detail to aid forensic investigators. The precision of the approximation depends directly on the order of the polynomial used to approximate the data. In general, the higher the order, the more details are revealed. Networks behave differently, and therefore every network analyzed needs its own set of polynomials to approximate its respective network events. The use of piecewise polynomials is also beneficial because polynomials use roughly six times less memory than individual data points.

Future Work
Piecewise polynomials will be applied to the area of network forensics for intrusion analysis. This analysis will require the collection of known data that are classified as either malicious or normal. Also, more information about packets will have to be quantified to further classify and distinguish network traffic, because approximating packet length and protocols is not sufficient to perform a thorough analysis. The malicious data will be modeled as piecewise polynomials and used for signature detection. The normal network traffic will also be modeled as piecewise polynomials and used for anomaly detection.

Future research also includes identifying what certain traffic patterns represent, such as web browsing traffic, video streaming traffic, or file downloading traffic. This classification of network events will enhance a forensics investigator's ability to quickly determine what events have transpired on a network.

Acknowledgements
This research was conducted with the guidance of Kevin Fairbanks and Henry Owen and supported in part by a Georgia Tech President's Undergraduate Research Award as a part of the Undergraduate Research Opportunities Program. This research was also supported in part by the Georgia Tech Department of Electrical and Computer Engineering's Opportunity Research Scholars Program.
Characterization of the biomechanics of the GPIbα-vWF tether bond using von Willebrand Disease-causing mutations R687E and wt vWF A1A2A3

Venkata Sitarama Damaraju
School of Biomedical Engineering
Georgia Institute of Technology

Advisor:
Larry V. McIntire
School of Biomedical Engineering
Georgia Institute of Technology

Spring 2010: The Tower
Introduction
Circulating platelets have an important role in healing vascular injuries by tethering, rolling, and adhering to the vascular surface in response to a vascular injury. Under normal physiological conditions, platelets respond to a series of signaling events that cause bound platelets to aggregate and spread across the exposed surface to form a hemostatic plug (Andrews, 1997). These responses are mediated by receptor-ligand interactions between the platelet and the molecules exposed on the surface. GPIbα is the platelet receptor that mediates this initial response to vascular injuries. In arteries this response is initiated when the platelet receptor GPIbα tethers to von Willebrand factor, a blood glycoprotein, on exposed subendothelium (the surface between the endothelium and the artery membrane). When GPIbα initially tethers to von Willebrand factor (vWF), platelets first roll and then firmly adhere to the surface through the GPIbα and GPIIb-IIIa integrins present on the platelet. GPIbα and GPIIb-IIIa are the first two platelet integrins to interact with the vWF molecule (Kroll, 1996). Aggregation of bound platelets with additional platelets from the plasma forms a hemostatic plug that seals the injury site (Ruggeri, 1997).

Mutations in either of these binding partners can result in changes in the initial step of the vascular healing process. Diseases associated with these mutations are called von Willebrand diseases (VWD), which can either decrease (loss of function) or enhance (gain of function) the binding activity between the GPIbα and vWF molecules. VWD results in a platelet dysfunction that can cause nose bleeding, skin bruises and hematomas, prolonged bleeding from trivial wounds, oral cavity bleeding, and excessive menstrual bleeding. Though rare, severe deficiencies in vWF can have symptoms characteristic of hemophilia, such as bleeding into joints or soft tissues including muscle and brain (Sadler, 1998).

vWF is a multimer of many monomers, each containing eleven (11) domains (Figure 1) (Berndt, 2000). In this experiment, the biomechanics of two of the 11 domains, in particular gain-of-function (GOF) R687E vWF-A1 and wild-type (wt) vWF-A1A2A3, were studied. The biomechanics of the GPIbα-vWF tether bond of these molecules was studied using videomicroscopy in parallel plate flow chamber experiments. One of the two surfaces of the flow chamber was a 35-mm tissue culture dish coated with the vWF ligand (Figure 2). Fluid containing either platelets or Chinese Hamster Ovary cells was perfused at varying shear stresses across this ligand-coated surface, and the interactions were recorded using high-speed videomicroscopy. Analysis of these interactions with cell tracking software allowed insight into the bond lifetime of the cells and helped suggest the type of bond present (Yago, 2004).

Figure 1. The vWF molecule. It is a multimer of many monomers, with each containing 11 domains. Image adapted from Sadler.

Article: Damaraju

Figure 2. Parallel plate flow chamber setup. The floor (bottom) plate in the setup was a 35-mm tissue culture dish coated with vWF ligand. Fluid containing either CHO cells or platelets was perfused at varying shear stresses (0.5 dynes/cm² to 512 dynes/cm²) across the ligand-coated surface.

By studying the biomechanics of individual vWF domains, we can gain a better understanding of the whole vWF molecule and, more importantly, VWD. With this enhanced understanding of vWF, better and more accurate treatments for VWD can be designed in the future. This knowledge can also be used in studying and preventing life-threatening thrombosis and embolism.

Materials and methods
All materials were obtained from the McIntire laboratory stock room. Proper sterile techniques and precautions were used for each of the following procedures.

Cells Used
Either Chinese Hamster Ovary (CHO) cells or fresh platelets were used to study the vWF ligand interactions. Fresh platelets were isolated from blood donors an hour before the experiment. For CHO cells, two specific lineages, αβ9 and β9, were used. CHO αβ9 cells contain a specific integrin that interacts with the vWF ligand, whereas β9 cells do not. Hence, β9 cells served as a control group when CHO cells were used instead of platelets.

Preparation of growth media
Two types of growth media were prepared for the two types of CHO cells: αβ9 and β9. Both media formulations consisted of alpha-Minimum Essential Medium (α-MEM) solution (with 2 mM L-glutamine and NaHCO3), 10% Fetal Bovine Serum (FBS) solution, penicillin solution (50X), streptomycin solution (50X), G-418 (Geneticin) solution (50 mg/mL), and methotrexate powder. The only difference between the two media types was the addition of hygromycin B solution (50 mg/mL) in the αβ9 media.

Passaging cells
Proper sterile techniques and precautions were used while passaging CHO αβ9 and CHO β9 cells. CHO cells were cultured in 75 cm² flasks and incubated at 37° Celsius and 5% CO2 using the growth media prepared. These cells were passaged every 2-3 days in order to maintain 80-90% confluency at all times.

Hepes-Tyrode buffer formulations
Hepes-Tyrode buffer (also referred to as 0% Ficoll) was pre…
Figure 4. Mean rolling velocity of platelets on gain-of-function (GOF) R687E A1. The x-axis represents the logarithmic shear stress (dynes/cm²) while the y-axis represents the mean rolling velocity (µm/s). Error bars represent the standard error of the mean (SEM). The increasing velocity suggests a slip bond behavior of GOF R687E.

Figure 5. Mean rolling velocity of platelets on gain-of-function (GOF) R687E A1. The x-axis represents the logarithmic shear stress (dynes/cm²) while the y-axis represents the mean rolling velocity (µm/s). Error bars represent the standard error of the mean (SEM). The increasing velocity suggests a slip bond behavior of GOF R687E.

Figures 6a and 6b. Mean rolling velocity of platelets on wt A1A2A3 vWF. The y-axis in both 6a and 6b represents the mean rolling velocity (µm/s), whereas the x-axis in 6a represents the logarithmic shear stress (dynes/cm²) and in 6b the logarithmic shear rate (s⁻¹). Error bars represent the standard error of the mean (SEM). The cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand.
The results from MetaMorph Offline were processed through MATLAB to compute the mean rolling velocities for each shear stress used (0.5 dynes/cm² to 512 dynes/cm²). In order to learn about the GPIbα-vWF tether bond interaction, mean rolling velocities were plotted versus shear stress for each individual experiment.

Plotting the results for platelets interacting on the gain-of-function (GOF) mutant R687E vWF-A1 molecule revealed a trend of increasing mean rolling velocities with increasing shear stress (Figure 4). The x-axis represents the logarithmic shear stress (dynes/cm²) while the y-axis represents the mean rolling velocity (µm/s). The error bars are the standard error of the mean (SEM), which is calculated by dividing the standard deviation by the square root of the number of samples (stdev/√N).

Intuitively, with increasing shear stress the bond lifetime decreases for each individual bond (the one-to-one molecular interaction between GPIbα and the vWF ligand), consequently causing the mean rolling velocity to increase at higher shear stresses. This increase in mean rolling velocity is characteristic of a slip bond interaction, because the molecules tend to "slip off" the ligand more readily at higher shear stress than at lower shear stress.

A separate experiment performed with platelets from a different donor on GOF R687E vWF showed a similar slip bond interaction (Figure 5). Although fewer data points were collected in this experiment, it showed a similar increase in mean rolling velocity with increasing shear stress. A statistical analysis of these two data sets revealed a Pearson correlation factor of 0.98 and a p-value greater than 0.05 for a paired t-test. Therefore, the reproducibility of this trend affirmed the slip bond characteristic of the GOF R687E vWF-A1 molecule.

Outliers at high shear stress are attributed to the fact that bond lifetime significantly decreases at higher shear stress. As a result, fewer platelets interact at those shears, and thus fewer data points were collected at higher shear stress compared to lower shear stresses. This is reflected in the large SEM bars for data points at the higher shear stress end. Similarly, mean rolling velocities at the lowest shear stress are also variable because of
the difficulty in distinguishing interacting platelets from non-interacting platelets. For both experiments, platelets were suspended in Hepes-Tyrode buffer.

In contrast, platelets interacting on the wild-type (wt) A1A2A3 vWF molecule (Figures 6a and 6b) showed a different trend compared to GOF R687E vWF. The y-axis in both Figures 6a and 6b represents the mean rolling velocity (µm/s), whereas the x-axis in 6a represents the logarithmic shear stress (dynes/cm²) and in 6b the logarithmic shear rate (s⁻¹). The error bars represent the SEM. As illustrated by the graphs, the mean rolling velocity initially decreased, then increased and decreased, only to increase again with increasing shear stress (and shear rate). This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand. A decrease in mean rolling velocity correlates with an increase in the bond lifetime of an individual bond, indicating a catch bond because the platelet is "caught" by the ligand. Likewise, an increasing mean rolling velocity implies a decrease in the bond lifetime of the individual bond, as in a slip bond interaction. Platelets on wt A1A2A3 vWF exhibited two complete cycles of catch-slip bond interaction over the range of shear stress measured (0.5 dynes/cm² to 512 dynes/cm²). For this particular experiment, platelets were suspended in Hepes-Tyrode buffer (0% Ficoll) and a 6% Ficoll solution. Suspending platelets in a more viscous solution was used to verify whether the catch-slip bond was force dependent or transport dependent.

A similar catch-slip bond interaction was illustrated with Chinese Hamster Ovary (CHO) cells interacting on wt A1A2A3 vWF (Figure 7). Although fewer data points were collected for this experiment, it still demonstrated two cycles of decreasing and then increasing mean rolling velocity with increasing shear stress. No statistical analysis was performed between these two results because CHO cells contain isolated GPIbα receptors, whereas platelets have many molecules on their surface. Thus, the mean rolling velocities differ between the two and are not directly comparable.

Discussion
Fresh platelets and wild-type (wt) Chinese Hamster Ovary (CHO) cells were used on gain-of-function (GOF) R687E vWF or wt A1A2A3 vWF in order to study some aspects of the GPIbα-vWF tether bond. Parallel plate flow chamber experiments were the same for each vWF molecule; the only difference was whether the fluid passing through contained platelets or wt CHO cells. All rolling interactions were observed at 250 frames per second.

It was previously found that wild type-wild type (wt GPIbα on wt vWF) interactions differ from wt-GOF (wt GPIbα on GOF vWF) interactions. An additional experiment (Appendix A, Figure A1) shows platelets on wt vWF-A1. This graph shows a transition of bonding behavior from a region of decreasing rolling velocity to one of increasing rolling velocity as the shear stress increases. This trend is indicative of a catch-slip bond transition because the rolling velocity decreases (catch behavior) and then increases (slip behavior) with increased shear stress.

However, results from platelets on GOF R687E vWF (Figures 4 and 5) showed an increase in rolling velocities with increased shear stress, indicating only a slip bond behavior. This suggests that a catch bond governs low-force binding behavior between wt GPIbα and wt vWF-A1, whereas a slip bond governs binding of GOF R687E at high shear stresses. One possible reason for this could be the differential force response of the bond lifetime.

Results of platelets rolling on wt A1A2A3 vWF (Figures 6a and 6b) showed two complete cycles of bonding …
Figure A1. Results of platelets on wt-A1 vWF. The y-axis in both the left and right plots represents the mean rolling velocity (µm/s), whereas the x-axis in the left plot represents the logarithmic shear stress (dynes/cm²) and in the right plot the logarithmic shear rate (s⁻¹). These plots show a catch-slip bond interaction, as the rolling velocity decreases and then increases with increasing shear stress (and shear rate).
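The two statistics used in the analysis above can be sketched in a few lines (a stdlib-only illustration of the stated formulas with invented sample values; the actual processing was done in MATLAB): the SEM as stdev/√N, and the Pearson correlation between two experiments' velocity series.

```python
# Illustrative implementations (ours) of the statistics described in the text.
import math

def sem(values):
    """Standard error of the mean: sample stdev divided by sqrt(N)."""
    n = len(values)
    mean = sum(values) / n
    stdev = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return stdev / math.sqrt(n)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented mean-rolling-velocity samples for one shear stress level.
velocities = [2.0, 4.0, 6.0, 8.0]
```

A Pearson value near 1, as reported for the two R687E experiments, indicates that the two velocity-versus-shear-stress curves rise and fall together.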
…region where the data points for the two different solutions (with 0% Ficoll and 6% Ficoll) align or overlap when plotted together. Since the shear stress data aligns better than the shear rate data, it indicates that force, which regulates shear stress, is probably what governs this catch-slip bond interaction.

A similar catch-slip bond interaction was observed between wt CHO cells and wt A1A2A3 vWF (Figure 7). The bond behavior transitions from a region of decreasing rolling velocity to a region of increasing rolling velocity. CHO cells have isolated GPIbα receptors, which allows for the isolation of the GPIbα receptor's contribution to the rolling velocity parameter, since platelets have many molecules on their surface. Thus, this trend could be attributed to the GPIbα receptor's interactions with the vWF molecules and the A1A2A3 structure.

Overall, the bond behaviors of the two vWF domains, GOF R687E and wt A1A2A3, were successfully characterized. Although the bonding trends of the vWF ligand appear very obvious, more testing will help further substantiate these claims. Based on the results, the next step would be assessing how these bond types adversely affect platelet aggregation in the presence of a vascular injury. Determining the adverse effects of the different bond types in each vWF domain will further help in understanding VWD and its causes, and potentially lead to a treatment.

Conclusion
Some valuable information on the tether bonding between GPIbα and vWF, specifically GOF R687E vWF and wt A1A2A3 vWF, was acquired from the four sets of experiments performed. Results from two experiments revealed a pure slip bond behavior for platelets rolling on GOF R687E (Figures 4-5). Statistical analysis also showed a strong correlation and a p-value greater than 0.05 between the two experiments involving GOF R687E vWF, confirming the reproducibility of the slip bond behavior. This slip bond behavior is attributed to the differential force response of the bond lifetime between GPIbα and the GOF vWF ligand with increasing shear stress.

In addition, studying wt A1A2A3 vWF on platelets and wt CHO cells revealed two complete cycles of catch-slip bond behavior (Figures 6-7). Based on previous knowledge, this catch-slip bond behavior can be identified with the presence of the wt A1 domain in the A1A2A3 ligand. However, having two cycles of catch-slip bond behavior may be due to the structural complexity of the A1A2A3 vWF ligand.

In future studies, more experiments need to be performed with wt A1A2A3 vWF on platelets and CHO cells in order to confirm the reproducibility of the results achieved. More data are needed to support the claim that having two cycles of catch-slip bonding can be attributed to the structural complexity of the A1A2A3 vWF ligand. Similarly, more experiments involving GOF R687E vWF on platelets and CHO cells will further substantiate the slip bond behavior of GOF vWF. By studying the biomechanics and bond behavior of each domain of the vWF molecule, we can gain a better understanding of vWF and VWD.
Moral Hazard and the Soft Budget Constraint: A game-theoretic look at the primal cause of the sub-prime mortgage crisis
Akshay Kotak
School of Economics and School of Industrial & Systems Engineering
Georgia Institute of Technology
This paper addresses one of the major causes of the sub-prime mortgage crisis prevalent
in large American mortgage houses by the end of 2006. The moral hazard scenario and
consequent malpractices are addressed with respect to the soft budget constraint. This
analysis is done by first looking at the Dewatripont and Maskin model (1995), and
then suitably modifying it to model the scenario at a typical mortgage lender. This sim-
plistic model provides useful insight into how heightened bailout expectations, caused
by precedent actions by the Federal Reserve, fueled risky behavior at banks who thought
themselves to be “too-large-to-fail.”
Advisor:
Emilson C. Silva
School of Economics
Georgia Institute of Technology
Spring 2010: The Tower
Introduction
Over the last two decades there has been considerable interest in the study of financial crises and instability, owing largely to the prevalence of financial crises in the recent past. As Alan Greenspan observed, after the collapse of the Soviet Bloc at the end of the Cold War, market capitalism spread rapidly through the developing world, largely displacing the discredited doctrine of central planning (Greenspan 2007). This abrupt transition led to explosive growth that was at times too hot to handle and inadequately controlled, causing several crises in the Third World, most notably in East Asia in 1997 and Russia in 1998. Additionally, there have been periods of economic tumult in the developed world, including the near collapse of Japan in the 1990's, the bailout of Long Term Capital Management by the Federal Reserve in 1998, and most recently, the subprime mortgage crisis of 2007-08.

As Dimitrios Tsomocos highlights in his paper on financial instability, "[t]he difficulty in analyzing financial instability lies in the fact that most of the crises manifest themselves in a unique manner and almost always require different policies for their tackling" (Tsomocos 2003). Most explanations, however, are modeled on a game-theoretic framework involving a moral hazard scenario brought about by asymmetric information. This choice of framework has been popular because of its ability to predict equilibrium behavior (under reasonable assumptions) for a given scenario and explain qualitatively and mathematically why and when deviations from this behavior occur.

This paper aims to perform a similar introductory analysis of one of the underlying causes of the current global economic crisis — subprime mortgage lending activity in the US from 2001-07 — in light of the soft budget constraint (SBC). The soft budget constraint syndrome, identified by János Kornai in his study of the economic behavior of centrally-planned economies (1986), has been used to explain several phenomena and crises in the capitalist world. While initially used to explain shortage in socialist economies, the SBC has since been used to provide explanations for the Mexican crisis of 1994, the collapse of the banking sector of East Asian economies in the 1990's, and the collapse of the Long Term Credit Bank of Japan.

The soft budget constraint syndrome is said to arise when a seemingly unprofitable enterprise is bailed out by the government or its creditors. This injection of capital in dire situations 'softens' the budget constraint for the enterprise – the amount of capital it has to work with is no longer a hard, fixed amount. There is a host of literature, primarily developed from a model designed by Mathias Dewatripont and Eric Maskin, which focuses on the moral hazard issues brought about when a government or central bank acts as the lender of last resort to financial institutions (Kornai et al. 2003).

Background
The subprime mortgage crisis of 2007 was marked by a sharp rise in United States home foreclosures at the end of 2006 and became a global financial crisis during 2007 and 2008. The crisis began with the bursting of the speculative bubble in the US housing market and high default rates on subprime adjustable rate mortgages made to higher-risk borrowers with lower income or lesser credit history than prime borrowers.

Several causes for the proliferation of this crisis to all sectors of the economy have been delineated, including excessive speculative investment in the US real estate market, the overly risky bets investment houses placed on mortgage backed securities and credit swaps, inaccurate credit ratings and valuation of these securities, and the inability of the Securities and Exchange Commission to monitor and audit the level of debt and risk borne by large financial institutions. It would be fair to
say, however, that one of the most fundamental causes of the entire debacle was the lending practices prevalent in mortgage houses in the US by the end of 2006 and the free hand given to these lenders to continue their practices. While securitization produced complex derivatives from these mortgages that were incorrectly valued and risk-appraised, it was ultimately the misguided decisions made by mortgage lenders that caused default rates to rise when the housing bubble burst, eroding the value of the underlying assets and setting off a chain reaction in the financial sector.

With housing prices on the rise since 2001, borrowers were encouraged to assume adjustable-rate mortgages (ARM) or hybrid mortgages, believing they would be able to refinance at more favorable terms later. However, once housing prices started to drop moderately in 2006-2007 in many parts of the U.S., refinancing became more difficult. Defaults and foreclosures increased dramatically as ARM interest rates reset higher. During 2007, nearly 1.3 million U.S. housing properties were subject to foreclosure activity, up 75% versus 2006 (US Foreclosure Activity 2007).

Primary mortgage lenders had passed a lot of the default risk of subprime loans to third party investors through securitization, issuing mortgage-backed securities (MBS) and collateralized debt obligations (CDO). Therefore, as the housing market soured, the effects of higher defaults and foreclosures began to tell significantly on financial markets and especially on major banks and other financial institutions, both domestically and abroad. These banks and funds have reported losses of more than U.S. $500 billion as of August 2008 (Onaran 2008). This heavy setback to the financial sector ultimately led to a stock market decline. This double downturn in the housing and stock markets fuelled recession fears in the US, with spillover effects in other economies, and prompted the Federal Reserve to cut down short term interest rates significantly, from 5.25% in August '07 to 3.0% in February '08 and subsequently down all the way to 0.25% in December '08 (Historical Changes, 2008).

As the single largest mortgage financing institution in the US, Countrywide Financial felt the heat of the subprime crisis more than a lot of the other affected financial institutions. Faced with the double whammy of a housing market crash and the stiff credit crunch, the company found itself in a downward spiral, with a rise in readjusted mortgage rates increasing the number of foreclosures, which eroded profits.

In the case of Countrywide Financial and other large finance corporations that considered themselves "too-large-to-fail," the expectation of downside risk coverage was raised to a level that promoted substantial risk-taking. This expectation was based on precedent actions by the Federal Reserve in bailing out distressed large firms – dubbed the Greenspan (and now, the Bernanke) put. Thomas Walker (2008), in his article in The Wall Street Journal, aptly says,

There is tremendous irony, and common sense, in the realization that multiple successful rescues of the financial system by the Fed over several decades will eventually create a risk-taking culture that even the Fed will no longer be able to single-handedly save, at least not without serious inflationary consequences or help from foreigners to avoid a dollar collapse. Eventually the culture will overwhelm the ability of the authorities to make it all better.

Ethan Penner of Nomura Capital provides a succinct and veracious definition of the moral hazard dilemma in saying that, "Consequences not suffered from bad decisions lead to lessons not learned, which leads to bigger failings down the road" (Penner 2008).
downturn, large private banks, the central bank, or the government would be forced to bail them out to avoid a financial meltdown. This insurance against downside risk stimulates the moral hazard scenario and gives incentive to these financial institutions to make much riskier bets with higher potential return.

Methodology
The game-theoretic model used in this study has two key players – the borrowing entity ("borrower") and the lending entity ("the bank"). Additionally, the study looks at the effects of the presence of a lender of last resort. Borrowers, assumed to be identical, can choose from two types of loans offered by the bank – a fixed rate loan with principal Lf and an adjustable rate loan with principal La. Customer utility (U(x)), in the typical concave functional form – increasing with decreasing marginal returns (i.e. U'>0, U''<0) – is simplified in this model to be the natural logarithm function.

The fixed rate loan has an interest rate rf. The adjustable rate loan is assumed to have an initial low fixed interest rate r0a which is readjusted after a period λ. The remainder of the adjustable rate loan is paid off at the rate determined at the end of period λ. If market conditions are good at this time, the interest rate is adjusted to r1g, and if they are bad, the rate is adjusted to r1b. Market conditions are represented in the model by an exogenous variable θ, which is the probability of the market conditions being good, i.e. of the interest rate being reset to r1g.

The bank and customer convene before a loan is offered to discuss the terms of the ARM. Based on the bank's expectations about the economy (i.e. θ) and of the values of r1b and r1g, the bank and the customer decide on a fixed initial rate r0a and a period λ for which the loan is kept fixed. The computation for λ also involves a parameter, δ, which reflects the increase in default rate for bad market conditions. This revenue shrinkage factor (δ) can be thought of as an indicator of the bank's downside risk coverage. In the current framework, it is affected by two key factors:

1. Collateral requirements: Higher collateral would imply more downside risk coverage (i.e. higher δ) but would also reduce the quantity of loans demanded, since fewer people would be able to pay the required collateral for the same loan. The bank would therefore weigh the benefit (potential revenue) of additional loans against the cost (increased risk) to choose the ideal collateral requirement for the ARM. This cost-benefit analysis is however outside the scope of this study, and δ is therefore assumed to be exogenous.

2. Bail-out expectations: Increased expectations of a bail-out (i.e. a cash injection in case of bad market conditions) would also raise the value of δ, but without shrinkage in loan demand.

The game is played between borrowers and the bank, with equilibrium being reached by the bank setting λ such that borrowers are indifferent to either of the two loans, and the borrowers opting for a mixed strategy. The indifferent borrower chooses a fixed rate loan with a probability α such that the expected payoff from either loan is the same for the bank.

This study analyzes the equilibrium of this game under two scenarios – with and without the presence of a lender of last resort. A lender of last resort who is expected to bail the lender out with a cash injection increases the (perceived) value of δ even though the level of protection offered to the bank through collateral remains the same. So, in this case, the revenue shrinkage for the second collection period is reduced (Figure 2).

The optimal loan amount for a fixed rate loan (L*f) maximizes net utility for the borrower. Net utility is
the difference between the utility gained from the loan amount and the total interest paid over the lifetime of the loan. The borrower therefore solves

max over Lf of { U(Lf) − rf·Lf }

i.e.

max over Lf of { ln(Lf) − rf·Lf }

which yields,

(1)  L*f = 1/rf

With an adjustable rate loan, the interest payment for the average borrower would be

La·(λ·r0a + (1−λ)·(θ·r1g + (1−θ)·δ·r1b))

Therefore, the optimal loan amount for an adjustable rate loan is

(2)  L*a = 1 / (λ·r0a + (1−λ)·(θ·r1g + (1−θ)·δ·r1b))

In order to ensure that a mixed strategy is employed at equilibrium, i.e. to have 0<α<1, the bank sets λ such that borrowers are indifferent to fixed and adjustable rate loans:

U(Lf) − rf·Lf = U(La) − (λ·r0a + (1−λ)·(θ·r1g + (1−θ)·δ·r1b))·La

Substituting values from (1) & (2), we obtain,

−ln(rf) − 1 = −ln(λ·r0a + (1−λ)·(θ·r1g + (1−θ)·δ·r1b)) − 1

i.e.

rf = λ·r0a + (1−λ)·(θ·r1g + (1−θ)·δ·r1b)
giving us,

(3)  λ* = (rf − (θ·r1g + (1−θ)·δ·r1b)) / (r0a − (θ·r1g + (1−θ)·δ·r1b))

Analysis
In deriving equation (3) above, we also find that, at equilibrium, the net interest rate charged for a fixed loan and an adjustable loan are the same, i.e.

rf = λ·r0a + (1−λ)·(θ·r1g + (1−θ)·δ·r1b)

Since all interest rates and parameters are positive, and since r0a is assumed to be less than rf, the above can only hold true if

(4)  r0a < rf < θ·r1g + (1−θ)·δ·r1b

Also, it must hold that,

(5)  r0a < rf < r1g < r1b

This is derived from equation (4) and from the fact that, as market conditions worsen, liquidity becomes harder to obtain and therefore the cost of debt increases.

Equilibrium behavior that is of interest is the nature of the change in λ with changes in the exogenous parameters – θ and δ. The rate of change of λ with respect to θ is

(6)  ∂λ/∂θ = (rf − r0a)·(r1g − δ·r1b) / (r0a − δ·r1b − θ·r1g + δ·θ·r1b)²

Given condition (4), the sign on the above expression is dependent on the sign of (r1g − δ·r1b). Therefore, if

δ < r1g / r1b

then the right hand side of equation (6) would be positive, implying that an increase in the probability of good market conditions would cause an increase in the amount of time that the loan is kept at the low fixed rate r0a. This makes intuitive sense because if

δ < r1g / r1b

then the bank is not adequately covered against downside risk, so even though the probability of good market conditions increases, the bank keeps the loan at the fixed low rate longer and decreases the length of the period of uncertain collection, which is subject to downside risk.

One concern that arises is why the bank takes any risk in the first place by offering an adjustable rate loan even though the payoff for this is the same as that for the less risky fixed rate loan. The reasoning here would be that adjustable rate loans earn higher commissions, which compensates to some level for this risk. Additionally, ARMs are preferred by more customers, and they therefore add intangible value in terms of higher volumes, which may lead to lower costs, better customer satisfaction and a broader clientele. Also, since the function for λ is a rational function in θ (see equation 3), we can see that the values of r0a, r1b, and r1g need to fall within a certain range to ensure that an ARM is feasible, i.e. λ lies between 0 & 1.

Conversely, if

δ > r1g / r1b
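The equilibrium quantities in equations (1)-(3) are easy to sanity-check numerically. The Python sketch below uses purely illustrative interest-rate values (not taken from the paper) and verifies that, at λ*, the blended ARM rate equals the fixed rate, so the borrower is indifferent and both optimal loan sizes coincide:

```python
# Sanity check of equations (1)-(3) with hypothetical parameter values.

def expected_adjusted_rate(theta, delta, r1g, r1b):
    """Expected second-period rate: theta*r1g + (1-theta)*delta*r1b."""
    return theta * r1g + (1 - theta) * delta * r1b

def optimal_fixed_loan(rf):
    """Equation (1): L*f = 1/rf when U = ln."""
    return 1.0 / rf

def optimal_arm_loan(lam, r0a, theta, delta, r1g, r1b):
    """Equation (2): L*a = 1/(lam*r0a + (1-lam)*rbar)."""
    rbar = expected_adjusted_rate(theta, delta, r1g, r1b)
    return 1.0 / (lam * r0a + (1 - lam) * rbar)

def equilibrium_lambda(rf, r0a, theta, delta, r1g, r1b):
    """Equation (3): lambda* = (rf - rbar) / (r0a - rbar)."""
    rbar = expected_adjusted_rate(theta, delta, r1g, r1b)
    return (rf - rbar) / (r0a - rbar)

# Illustrative rates chosen to satisfy condition (4): r0a < rf < rbar.
r0a, rf, r1g, r1b = 0.03, 0.05, 0.055, 0.08
theta, delta = 0.7, 0.9

rbar = expected_adjusted_rate(theta, delta, r1g, r1b)
lam = equilibrium_lambda(rf, r0a, theta, delta, r1g, r1b)
# At lambda*, the blended ARM rate equals rf (the indifference condition).
blended = lam * r0a + (1 - lam) * rbar
```

With these numbers λ* is about 0.34, inside (0, 1) as required for the mixed strategy, and L*a equals L*f, matching the substitution step in the derivation.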
mathematically, that an increase in the expectation of a
bailout by a lender of last resort tends to encourage risky
behavior in such mortgage offering agencies in multiple
ways. That being said, there is plenty of scope for fur-
ther elaboration and sophistication of the model. The
market structure currently under investigation is both
simplistic and insular, but a more elaborate structure of
markets and corresponding interactions could be de-
signed. For instance, a good example of possible market
stratification is illustrated in Tsomocos (2003). Also, the
current loan structure is a two period model with loans
changing rates at the end of period one to a new fixed
rate for period two. A more complex, multi-period loan
structure could be investigated with the adjustable rate
set as a random variable and a Markov chain approach
used to study the equilibrium behavior in this scenario.
In their investigation of "federalism," Qian and Roland (1998) observe that giving fiscal authority to local governments instead of the central government works to limit the effects of the soft budget syndrome. They propose a three-tiered structure with local governments working between the central government and state and non-state enterprises. The competition among local governments to attract enterprises forces funds to be diverted into infrastructure development, increasing the opportunity cost of a bailout and thereby hardening the budget constraint for enterprises. A similar scenario could be envisioned where the Federal Reserve distributes the decision-making authority (and funds) to bail out corporations among the twelve regional Federal Reserve Banks; it would be of interest to study the subsequent change in the behavior of the lending banks.
Compact Car Regenerative Drive Systems: Electrical or Hydraulic

QUINN LAI
School of Mechanical Engineering
Georgia Institute of Technology
The objective of this research is to address the power density issue of electric hybrids and the energy density issue of hydraulic hybrids by designing a drive system. The drive system utilizes new enabling technologies such as the INNAS Floating Cup pump/pump-motors and the Toshiba Super Charge Ion Batteries (SCiB). The proposed architecture initially included a hydraulic-electric system, where the high braking power is absorbed by the hydraulic system while energy is slowly transferred from both the Internal Combustion Engine (ICE) drive train and the hydraulic drive train to the electric accumulator for storage. Simulations were performed to demonstrate the control method for the hydraulic system with in-hub pump motors. Upon preliminary analysis it is concluded that the electric system alone is sufficient. The final design is an electric system that consists of four in-hub motors. Analysis is performed on the system and MATLAB Simulink is used to simulate the full system. It is concluded that the electric system has no need for a frictional braking system if the Toshiba SCiBs are used. The regenerative braking system will be able to provide an energy saving from 25% to 30% under the simulated conditions.
Advisor:
Wayne J. Book
School of Mechanical Engineering
Georgia Institute of Technology
INTRODUCTION
With around 247 million on-road vehicles traveling around 3 trillion miles (Highway Statistics, 2009) every year, the efficiency of on-road vehicles is of major concern. As a result, hybrid drive trains, which dramatically increase the urban driving efficiency of vehicles, have been developed and implemented in vehicles. Existing on-the-road hybrids have their secondary regenerative systems (electric motors and batteries) installed on their primary drive trains (ICE drive train) to provide the regenerative braking capability. Recently, efforts have been put into designing drive train systems that have either hydraulic or electric components as integral parts of the systems. For example, in the Chevy Volt, a series electric hybrid system, the ICE is used to charge the electric accumulator, which in turn drives the electric motor (Introducing Chevrolet, 2009).

Electric hybrid drive trains have been implemented in passenger vehicles while hydraulic hybrids have been implemented in commercial vehicles. Since electric hybrid systems can operate quietly, enhancing passenger comfort, this system is implemented in passenger vehicles. However, current battery technologies on the market prevent high power charging and thus prevent the electric system from replacing frictional brakes. As a result, a significant amount of braking energy is lost to the surroundings through heat. Hydraulic hybrids, in contrast, have the ability to capture most of the braking power. However, due to the characteristics of hydraulic components, the hydraulic systems suffer from the accumulator's low energy density; the Noise, Vibration and Harshness (NVH) also significantly affect the driving experience.

In an attempt to address the charging power density challenge faced by electrical hybrids and the energy density challenge faced by hydraulic hybrids, different drive systems were designed. Braking Power Analysis of the report presents simple braking power analysis as the foundation of further calculations in other parts. The initial approach to solve the problem was to incorporate an electrical system in an existing hydraulic hybrid system. Hydraulic Hybrid Drive System presents the Hydraulic Hybrid system engineering level analysis, and Hydraulic Accumulator Analysis investigates the hydraulic accumulator. It was confirmed that the hydraulic accumulator does not have a sufficient energy density for braking energy capture, and therefore electrical accumulators were introduced to capture the excess energy. Battery Analysis investigates the electrical accumulators. Upon the completion of the analysis, it was concluded that the electrical system alone is sufficient. As a result, an in-hub motor driven electric drive system (Figure 7) was chosen, and the analysis and simulations are presented in Electrical Systems of the paper.

braking power analysis
Braking power analysis is performed to serve as a foundation for accumulator analysis in Hydraulic Accumulator Analysis and Battery Analysis. The analysis is conducted with an assumption of negligible rolling friction, air drag and other losses. The driving analysis is performed on a mid-size passenger sedan, such as a Honda FCX Clarity. The Honda FCX Clarity fuel cell car was selected because the weight of the components in the car closely resembles the weight of the suggested drive system. The assumed vehicle mass is 1625 kg (Honda at the Geneva, 2009). The ECE-15 Driving Schedule is shown in Figure 1.

A 6 second 35 mph to 0 mph deceleration is assumed. The assumed braking slope resembles a rapid urban braking that is more rapid than the ECE-15 Urban Driving Schedule braking. A rapid 60 mph to 0 mph deceleration is also assumed. Under normal driving conditions, a passenger vehicle will take about 200 ft to decelerate (Driver's Manual, 2009). The deceleration time involved can be obtained using Equation (1) and Equation (2)
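Equations (1) and (2) are not reproduced in this excerpt, but the braking-power figures reported in Table 1 can be cross-checked with a simple constant-deceleration model, in which peak braking power occurs at the start of braking and equals P = m·a·vi. This is a back-of-envelope sketch under that assumption, not necessarily the paper's exact method; it reproduces the 66.0 kW figure for the 35 mph stop, while the 60 mph stop comes out near 106 kW, somewhat above the tabulated 99.8 kW, so the paper's calculation presumably embeds further assumptions:

```python
# Back-of-envelope braking power, assuming constant deceleration from
# v_i to rest in time t; peak power P = m * a * v_i occurs at the
# start of braking, where speed is highest.

MPH_TO_MPS = 0.44704
MASS_KG = 1625.0  # assumed vehicle mass (Honda FCX Clarity)

def peak_braking_power_kw(v_mph, t_s, mass_kg=MASS_KG):
    v = v_mph * MPH_TO_MPS   # initial speed, m/s
    a = v / t_s              # constant deceleration, m/s^2
    return mass_kg * a * v / 1000.0

p_urban = peak_braking_power_kw(35.0, 6.0)     # ~66 kW, matching Table 1
p_highway = peak_braking_power_kw(60.0, 11.0)  # ~106 kW under this model
```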
Figure 1. ECE-15 Driving Schedule; x-axis: time (s); y-axis: vehicle velocity (mph).
the secondary power plant allows engine-off operation, and the Infinitely Variable Transmission (IVT) allows the engine to rotate at optimum RPM for efficiency. The INNAS HyDrid utilizes the INNAS Hydraulic Transformers (IHT) (Achten, 2002) in a Common Pressure Rail (CPR) (Vael & Achten, 2000). The IHT is claimed to have unmatched efficiency due to the Floating Cup Principle that it utilizes. The starting torque efficiency, according to Achten, is up to 90% (Achten, 2002) or above. The control method of the HyDrid is not published; therefore, a possible control method is presented to demonstrate how the IHT functions as an IVT, converting the varying pressure from the accumulator into the desired pressure for the in-wheel constant displacement motor/pumps. When accelerating, either the pressure accumulator or the ICE will provide the required pressure in the CPR, which will in turn be transmitted by the IHT to drive the in-wheel constant displacement motor/pump. The IHT is assumed to be a variable pump coupled with a variable pump/motor. A possible method of controlling the acceleration is to vary the stroke of the variable pump in the IHT while keeping the pump motor stroke and the ICE RPM constant. During braking, the pump (CPR side) stroke is kept constant while the pump motor stroke is varied to charge up the accumulator. The control method is presented in Figure 3.

A simulation is performed to demonstrate the control method. The system assumes that the vehicle has a 4 cylinder gasoline engine, a 0.85 volumetric efficiency for the variable pump/motor, and a 0.92 volumetric efficiency for the constant displacement pumps and inactive pressure accumulators. Ideal pipe lines are also assumed, and no force is involved in the varying of the pump stroke. The simulation shows how, by varying the IHT pump stroke, the vehicle speed can closely follow a desired trajectory with minimal ICE rpm variation. The ICE rpm and the pump stroke variation are shown in Figures 4 and 5

vi (mph)   vf (mph)   t (s)   Braking Power (kW)
35         0          6       66.0
60         0          11      99.8

Table 1. Deceleration details for the assumed vehicle.
respectively. The resulting vehicle velocity is shown in Figure 6.

The simulated vehicle velocity closely matches the desired velocity trajectory, which is the ECE-15 driving schedule (Figure 1). The simulated velocity trajectory is idealized because of the idealized assumptions made in creating the simulation model. The pressure values provided by the simulation are also observed to be faulty. This simulation's values cannot be used for quantitative purposes. However, it is sufficient for demonstrating the relationship between the stroke of the pump in the IHT and the vehicle velocity.

hydraulic accumulator analysis
The hydraulic accumulator has sufficient power density but a low energy density. An attempt was made to quantify the energy storage capacity of a typical size hydraulic accumulator for a hydraulic hybrid vehicle so that the proposed additional battery pack can be correctly sized. A 38 L EATON hydraulic accumulator (Product Literature, 2009) is assumed (used in CCEFP Test Bed 3: Highway Vehicles). The parameters used for energy calculations are tabulated in Table 2.

Volume (m3)                                 0.038
Precharge Pressure (MPa)                    10.7
Precharge Nitrogen Volume (m3)              0.038
Maximum Nitrogen Pressure (MPa)             20.6
Nitrogen Volume at Maximum Pressure (m3)    0.0176

Table 2. EATON 38 L hydraulic accumulator.

Figure 4. Engine RPM for HyDrid simulation; x-axis: time (s); y-axis: ICE rpm.
Figure 5. Pump stroke variation for HyDrid simulation; x-axis: time (s); y-axis: stroke (m).
The assumed relationship between pressure and volume is shown in Equation (5)

(5)  p·V^n = constant

where p is the pressure of the nitrogen in the accumulator, V is the volume of nitrogen in the accumulator, and n is an empirical constant. Using this relationship, the total energy involved in completely pressurizing or depressurizing the accumulator is shown in Equation (6)

(6)  W = (pf·Vf − pi·Vi) / (1 − n)

where pi is the initial pressure, pf is the final pressure, Vi is the initial volume, Vf is the final volume, and W is the energy involved. Using Equation (5) and Equation (6) we can calculate the total energy storage of the EATON 38 L pressure accumulator, which is 293.6 kJ. Using Equation (3) and assuming a vehicle with the weight of 1625 kg (from Braking Power Analysis), a 38 L accumulator is sufficient for the acceleration from 0 mph to 42.5 mph. It is assumed that no energy is lost due to friction, drag, and inertia changes. As the main purpose of the hydraulic system in a Hydraulic Hybrid is to capture urban braking and to accelerate the car to a velocity where the ICE can be started, 293.6 kJ is sufficient. However, if the vehicle is braking from a speed higher than 42.5 mph, or the duration of braking is long, the hydraulic system will not be able to capture the braking energy. Therefore an electrical system is introduced to capture the excess energy.

Figure 6. Vehicle velocity variation for HyDrid simulation; x-axis: time (s); y-axis: velocity (mph).
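The 293.6 kJ figure can be reproduced to within rounding directly from the Table 2 states: back out the polytropic exponent n from the two pressure-volume pairs, then apply Equation (6). A short Python sketch of that calculation, including the kinetic-energy-equivalent speed for the assumed 1625 kg vehicle:

```python
import math

# Accumulator energy from Equations (5) and (6), using the Table 2 states.
p_i, V_i = 10.7e6, 0.038    # precharge pressure (Pa) and nitrogen volume (m^3)
p_f, V_f = 20.6e6, 0.0176   # maximum pressure (Pa) and corresponding volume

# Equation (5): p * V^n = constant  =>  n = ln(p_f/p_i) / ln(V_i/V_f)
n = math.log(p_f / p_i) / math.log(V_i / V_f)

# Equation (6): W = (p_f*V_f - p_i*V_i) / (1 - n); magnitude = stored energy.
W_kj = abs((p_f * V_f - p_i * V_i) / (1.0 - n)) / 1000.0  # ~294 kJ

# Speed a 1625 kg vehicle could reach on this energy alone (no losses).
v_mph = math.sqrt(2.0 * W_kj * 1000.0 / 1625.0) / 0.44704  # ~42.5 mph
```

The result lands within a couple of kJ of the quoted 293.6 kJ; the small difference presumably comes from rounding in the tabulated volumes.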
Figure 7. Selected Electric Car architecture.

for the regenerative braking and driving cycles allowed before the capacity of the SCiB drops below 90%. Using the same assumptions we can also find that 137.6 kg of SCiB is sufficient for the maximum charging power involved in the 11 second 60 mph to 0 mph highway accident braking. The weight of the battery pack required is slightly heavier than the 70 kg battery pack in a Toyota Prius electric hybrid vehicle.

electrical systems
As shown in the Battery Analysis calculations, the Toshiba SCiBs have a power density that is more than sufficient for regenerative braking. As a result, neither the hydraulic system nor the frictional braking system is necessary in an electric vehicle equipped with the Toshiba SCiBs. A mechanical emergency brake should be installed to prevent accidents in case of regenerative braking system failure.

The simplest possible design is a plug-in electric or a fuel cell vehicle that has 4 in-hub motors. The simplified system is shown in Figure 7. As shown in Figure 7, the 4 in-hub wheel motors are directly connected to the wheels. With mechanical components such as the ICE, differentials, and the transmission removed, the vehicle weight can be reduced, and the efficiency of the whole drive train can be increased by at least a factor of 3 (Clean Urban Transport, 2009). Some efficiency values (Achten, 2009; Valøen & Shoesmith, 2009) are provided in Table 5 for comparison. The 4 in-hub motor design also allows the vehicle to enjoy a very small turning radius and other advantages of 4WD vehicles, such as increased traction performance and precision handling.

To validate the design, a simulation is done for the suggested system. Because of the mechanical components removed, a lighter car is selected for simulation. The selected vehicle is a Honda Civic, with a vehicle mass of 1246 kg (Complete Specifications, 2009) and a CdA value of 0.682 m2. The air drag of the vehicle can be calculated using Equation (9) (Larminie & Lowry, 2003)

(9)  Fad = (1/2)·ρ·Cd·A·v2

where Fad is the drag force, ρ is the air density, Cd is the drag coefficient, A is the cross sectional area of the vehicle facing the front, and v is the velocity of the vehicle. The rolling friction of the vehicle can be obtained using Equation (10) (Larminie & Lowry, 2003)

(10)  Frr = μrr·m·g

where μrr is the rolling resistance coefficient, m is the vehicle mass, and g is the gravitational acceleration.

                           Approx. Efficiency
ICE                        20%
Transmission (automatic)   85%
Transmission (manual)      92% to 97%
Differential               90%
Motor                      90%
Battery recharge           80% to 90%

Table 5. Efficiency values of integral components of ICE drive train and electric drive train.
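Equations (9) and (10) can be sketched directly. In the Python sketch below, the air density and rolling resistance coefficient are assumed typical values, not figures from the paper; only CdA and the vehicle mass come from the text:

```python
# Air drag (Equation 9) and rolling friction (Equation 10) for the
# simulated Honda Civic. RHO and MU_RR are assumed typical values.

RHO = 1.225      # kg/m^3, sea-level air density (assumption)
MU_RR = 0.015    # rolling resistance coefficient (assumed typical tire value)
G = 9.81         # m/s^2
MPH_TO_MPS = 0.44704

def air_drag_n(cda_m2, v_mps):
    # Equation (9): F_ad = 0.5 * rho * (Cd*A) * v^2, with CdA as one factor.
    return 0.5 * RHO * cda_m2 * v_mps ** 2

def rolling_friction_n(mass_kg):
    # Equation (10): F_rr = mu_rr * m * g, independent of speed.
    return MU_RR * mass_kg * G

CDA_M2 = 0.682    # m^2, from the paper
MASS_KG = 1246.0  # kg, Honda Civic, from the paper

f_drag = air_drag_n(CDA_M2, 60.0 * MPH_TO_MPS)  # ~300 N at 60 mph
f_roll = rolling_friction_n(MASS_KG)            # ~183 N at any speed
```

Because the drag term shrinks quadratically with speed, both losses are small at urban speeds, which is consistent with neglecting them in the earlier braking power analysis.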
Figure 8. Vehicle Velocity Variation for Electric Drive System; x-axis: time (s); y-axis: velocity (mph).
Figure 9. Power plot for Electric Drive System simulation; x-axis: time (s); y-axis: power (W).
Figure 10. Energy required to complete the ECE-15 driving cycle without regenerative braking; x-axis: time (s); y-axis: energy (J).
Figure 11. Energy required to complete the ECE-15 driving cycle with regenerative braking; x-axis: time (s); y-axis: energy (J).
energy storage. In one of the intermediate designs, elec-
trical accumulators were introduced into the system to
capture excess energy that cannot be captured by the
hydraulic system. The Sony LFP and the Toshiba SCiB
were considered. The Toshiba SCiB was chosen as a re-
sult of its superior charging power density performance.
Upon further analysis, it was concluded that the batter-
ies have a sufficient charging power density to capture
braking power. It was then suggested that the electric
system can fully replace the hydraulic components, the
ICE drive train, and the frictional braking system. With the convoluted hybrid system, which consists of many inefficient components, replaced by a simple electric-only drive train, the vehicle drive train efficiency can be increased. An electrical system was simulated. The simulated models showed energy savings of around 25% to 30% with regenerative braking. The final drive system design
consists of an electric/fuel cell vehicle with four in-hub
motors.
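The simulated 25~30% savings can be sanity-checked with a back-of-the-envelope energy balance for a single accelerate-and-brake segment. The sketch below is not the authors' simulation: the vehicle mass, rolling-resistance coefficient, recovery efficiency, and cycle distance are all hypothetical round numbers.

```python
# Back-of-the-envelope check (not the authors' simulation) of the energy
# saved by regenerative braking over one accelerate-cruise-brake segment.
# All numbers below are illustrative assumptions.

MASS = 1200.0      # kg, assumed vehicle mass
G = 9.81           # m/s^2, gravitational acceleration
C_RR = 0.015       # assumed rolling-resistance coefficient
REGEN_EFF = 0.75   # assumed fraction of braking energy recovered
V_PEAK = 13.9      # m/s, ~50 km/h (the ECE 15 top speed)
DISTANCE = 1000.0  # m, roughly one ECE 15 elementary cycle

kinetic = 0.5 * MASS * V_PEAK ** 2    # J, energy to reach V_PEAK
rolling = C_RR * MASS * G * DISTANCE  # J, lost to rolling resistance

# Without regeneration, the kinetic energy is dissipated as heat when
# braking; with regeneration, a fraction REGEN_EFF of it is recovered.
without_regen = kinetic + rolling
with_regen = kinetic * (1 - REGEN_EFF) + rolling

savings = 1 - with_regen / without_regen
print(f"Energy savings from regenerative braking: {savings:.0%}")
```

With these assumed numbers the savings come out near 30%, in line with the simulated 25~30% reported above.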
References

Achten, P.A.J. (2002). Dedicated design of the hydraulic transformer. Paper presented at the Proc. IFK.3, IFAS Aachen.

Achten, P.A.J. (2007). Changing the Paradigm. Paper presented at the Proc. of the Tenth Scandinavian Int. Conf. on Fluid Power, SICFP'07, Tampere, Finland.

Berdichevsky, G., Kelty, K., Straubel, J.B., & Toomre, E. (2006). The Tesla Roadster Battery System. In Tesla Motors (Ed.).

Complete Specifications. Civic Sedan. Retrieved December 5, 2009, from http://automobiles.honda.com/civic-sedan/specifications.aspx

Driver's Manual. (2009). Government of Georgia. Retrieved from http://www.dds.ga.gov/docs/forms/FullDriversManual.pdf

Electric Vehicles. (2009). Retrieved from http://ec.europa.eu/transport/urban/vehicles/road/electric_en.htm

HyDrid. Retrieved December 5, 2009, from http://www.innas.com/HyDrid.html

Larminie, J., & Lowry, J. (2003). Electric Vehicle Technology Explained. Chichester: John Wiley & Sons Ltd.

Performance Specifications. (2010). Tesla Roadster. Retrieved December 5, 2009, from http://www.teslamotors.com/performance/perf_specs.php

Vael, G.E.M., Achten, P.A.J., & Fu, Z. (2000). The Innas Hydraulic Transformer, the Key to the Hydrostatic Common Pressure Rail. Paper presented at the International Off-Highway & Powerplant Congress & Exposition, Milwaukee, WI, USA.

Valøen, L.O., & Shoesmith, M.I. (2007). The effect of PHEV and HEV duty cycles on battery and battery pack performance. Paper presented at the Plug-in Hybrid Electric Vehicle 2007 Conference, Winnipeg, Manitoba. Retrieved from http://www.pluginhighway.ca/PHEV2007/proceedings/PluginHwy_PHEV2007_PaperReviewed_Valoen.pdf
Switchable Solvents: A Combination of Reaction & Separations

GEORGINA W. SCHAEFER
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology

Advisor:
Charles A. Eckert
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology

Spring 2010: The Tower
INTRODUCTION

A common problem for chemical synthesis is the reaction of an inorganic salt with an organic substrate, which is an important reaction in the production of many industrial chemicals and pharmaceutical products. Typically, a phase transfer catalyst (PTC), such as a quaternary ammonium salt, is used and must subsequently be separated from the product after the reaction has proceeded. However, the separation of a PTC from the product is very difficult. In fact, solvents such as dimethyl sulfoxide (DMSO) or ionic liquids, liquid salts at or near room temperature, that are capable of dissolving both the organic and inorganic components of the reaction still inhibit simple separation of the product from the catalyst (Heldebrant et al., 2005).

Now imagine a smart solvent that can reversibly change its properties on command through a built-in "switch". Our goal in designing such a solvent is to minimize the economic and environmental impact of such industrial processes while creating a solvent that remains highly polar. These solvents are able to dissolve both the organic and inorganic components of the reaction while highly polar and then change properties for easier separation and effective product isolation after the reaction is complete.

Switchable solvent systems are capable of doing just that. These systems involve a non-ionic liquid, an alcohol and amine base, which can be converted to an ionic liquid upon exposure to a "switch". The switch chosen to induce this change in solvent properties is carbon dioxide: CO2 reacts with the alcohol-amine mixture to form an ammonium carbonate. Furthermore, it is cheap, readily available, benign, and easily removed by heating and purging with nitrogen or argon. Switchable solvent systems therefore should facilitate chemical syntheses involving reactions of inorganic salts and organic substrates by eliminating the need to add and remove different solvents after each synthetic step in order to achieve different solvent properties (Heldebrant et al., 2005; Phan et al., 2007).

Industrial chemical production usually requires multiple reaction and separation steps, each of which usually requires the addition and subsequent removal of a different solvent. For example, the synthesis of Vitamin B12 is achieved in 45 steps. The application of switchable solvent systems to industrial production processes of major chemicals and pharmaceuticals would significantly lower the associated pollution and cost of these processes by eliminating the need to add and remove multiple solvents for each reaction step (Heldebrant et al., 2005; Phan et al., 2007).

Project description

Switchable solvents convert between a non-ionic liquid, which has varying polarity, and an ionic liquid whose properties include higher polarity and higher viscosity. As discussed in previous research, the ideal properties of the solvent as a reaction medium include a usable liquid range, chemical stability, and the ability to dissolve both organic species and inorganic salts. In terms of the solvent's role in separations, the solvent should be decomposable at moderate conditions with a reasonable reaction rate, the decomposition products should have very high or very low vapor pressures, and recombination to form the solvent should be relatively easy. Our principal aims in designing a switchable solvent system to optimize reactions and separations were to eliminate multiple reaction steps, reverse solvent properties to facilitate better separations, and minimize the cost and environmental impact by optimizing catalyst and solvent recycle (Heldebrant et al., 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).

Ionic liquids have gained popularity in their technological applications as electrolytes in batteries, photoelectrochemical cells, and many other wet electrochemical devices. They are particularly attractive solvents because of dramatic changes in properties such as polarity, which may be elicited through a "switch". On the other hand, changes in conditions such as temperature and pressure usually elicit only negligible to moderate changes in a conventional solvent's properties, making the use of multiple solvents for a single process necessary. In addition, ionic liquids have low vapor pressures, essentially eliminating the risk of inhalation. In particular, guanidinium-based ionic liquids have low melting points and good thermal stability, properties which make these high-nitrogen materials attractive alternatives for energetic materials (Xiao, Twamley, & Shreeve, 2004; Gao, Arritt, Twamley, & Shreeve, 2005).

Our research focused on the application of switchable solvents to the Heck reaction in order to optimize the reaction and separation. Specifically, we studied the reaction of bromobenzene and styrene in the presence of a palladium catalyst (PdCl2(TPP)2) and base to form E-stilbene, an important pharmaceutical intermediate in the production of many anti-inflammatories (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004).

As illustrated in Figure 3, the non-ionic liquid can be "switched" to an ionic liquid by exposure to carbon dioxide and reversed back to a non-ionic liquid by exposure to argon or nitrogen. The reaction of bromobenzene and styrene in the presence of the palladium catalyst is run in the highly polar ionic liquid, which is able to dissolve both the organic and inorganic components of this system. The ionic liquid is a particularly effective medium for this reaction in that it is able to immobilize the palladium catalyst while preserving the overall product yield. In addition, ionic liquids are nonvolatile, nonflammable, and thermally stable, making them an attractive replacement for volatile, toxic organic solvents (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).

Figure 1. The Heck reaction of bromobenzene and styrene in the presence of palladium catalyst and ionic liquid.

Figure 2. Synthesis of the palladium catalyst used in the Heck reaction (Figure 1).

Figure 3. Ionic liquid formation; note the reversibility of this reaction under argon.

Figure 4. Switchable solvent system comprised of 1,8-diazabicyclo-[5.4.0]-undec-7-ene (DBU) and hexanol.
Figure 5. Process Diagram for the Heck reaction performed in a switchable solvent system.
Article: Schaefer
…10 mL window autoclave. First the catalyst solution was added and the solvent vacuumed out. Next the ionic liquid, bromobenzene, and styrene were added, and the system was pressurized. The autoclave was then left stirring and heating for three days until the reaction was completed. After three days, the autoclave was allowed to cool down and depressurize. Once the system was back to room temperature and atmospheric pressure, the homogeneous ionic liquid/product/catalyst solution was extracted from the autoclave with heptane under carbon dioxide in order to sustain the formation of the ionic liquid. The product-in-heptane phase was then removed and the remaining ionic liquid/catalyst phase was exposed to argon and heat. After exposure to argon and heating, the byproduct salt precipitated out of the catalyst/reversed non-ionic solvent solution. The catalyst and solution were then removed from the salt byproduct and recycled. Any remaining product left in the autoclave from the extraction with heptane was then extracted with dichloromethane (DCM) for later mass balance calculations.

Results and conclusions

The Heck reaction was optimized by running at various temperature and pressure conditions. In order to assess the success of each system, the conversion percentages were compared.

Based on Figure 4, the optimal conditions for product formation and catalyst + solvent recovery were a temperature of 115˚C and a pressure of 30 bar. The reactions run at these conditions show very acceptable and repeatable results, such as an 83% and an 84% yield, the highest observed percent yields. However, the high variability of the product yield results at these conditions can be explained by the poor extraction methods for this system. Extracting with large amounts of heptane (greater than 50 mL) led to higher product yields (greater than 60%), whereas extracting with less heptane (10-50 mL) led to lower product yields at these conditions due to product loss in the system. Therefore, there was a tradeoff between the amount of heptane used in the extraction to recover the product, which adds to the …
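The percent yields quoted in the results follow from the standard moles-of-product over moles-of-limiting-reagent arithmetic. A minimal sketch, using hypothetical masses (the article reports only the final percentages, not the underlying weights):

```python
# Percent-yield arithmetic for the Heck coupling of bromobenzene and styrene
# to E-stilbene (1:1 stoichiometry). The run masses below are hypothetical;
# the article reports only the final percentages.

M_BROMOBENZENE = 157.01  # g/mol, C6H5Br
M_STILBENE = 180.25      # g/mol, E-stilbene (C14H12)

def percent_yield(mass_limiting_g, m_limiting, mass_product_g, m_product):
    """Yield relative to the limiting reagent, assuming 1:1 stoichiometry."""
    theoretical_mol = mass_limiting_g / m_limiting
    actual_mol = mass_product_g / m_product
    return 100.0 * actual_mol / theoretical_mol

# Hypothetical run: 1.5701 g bromobenzene (10 mmol) giving 1.4961 g E-stilbene.
y = percent_yield(1.5701, M_BROMOBENZENE, 1.4961, M_STILBENE)
print(f"Percent yield: {y:.0f}%")
```

With these illustrative masses the calculation lands at roughly the 83% yield reported for the optimized runs.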
Figure 6. Palladium (PdCl2(TPP)2) catalyzed Heck reaction of bromobenzene and styrene in a switchable ionic liquid system. The catalyst and ionic liquid solution used in the reaction run at T=115˚C and P=30 bar, which had 55% yield, was recycled from the reaction run at T=115˚C and P=30 bar, which had 83% yield, demonstrating that the catalyst in the ionic liquid remained active in the reaction and that recycle was effective.
References

Heldebrant, D.J., Jessop, P.G., Thomas, C.A., Eckert, C.A., & Liotta, C.L. (2005). The reaction of 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU) with carbon dioxide. Journal of Organic Chemistry, 70, 5335-5338.
Submission guidelines

…ence section, or on the title page. Papers will be tracked by special software that will keep author information separate from the paper itself; be written in standard U.S. English; utilize standard scientific nomenclature; define new terms, abbreviations, acronyms, and symbols at their first occurrence; acknowledge any funding, collaborators, and mentors; not use footnotes — if footnotes are absolutely necessary to the integrity of the paper, please contact the AESR at review@gttower.org; reference all tables, figures, and references within the text of the document; adhere to the Georgia Institute of Technology Honor Code regarding plagiarism and proper referencing of sources; and keep direct quotations to an absolute minimum — paraphrase unless a direct quote is absolutely necessary.

Submitting

To submit a paper, authors must register on our Online Journal System (OJS) at http://ejournals.library.gatech.edu/tower. Once the author fills out the required information and registers as an author, he or she will have access to the submission page to begin the multi-step submission process.

For more detailed submission guidelines, as well as current deadlines and news, please visit gttower.org.
Deadlines

Submissions are accepted on a rolling basis throughout the year. The Tower publishes one issue per semester. Due to the review and production process, submissions must be received before the publicized deadline, which can be found at gttower.org, to be considered for a given issue. Submissions received after this deadline will be considered for the following issue. If submission quality would be compromised in the attempt to meet the deadline, authors are encouraged to further develop their work and submit it only once it is fully realized.
Eligibility

Submitters must be enrolled as undergraduate students at the Georgia Institute of Technology to be eligible for publication. Authors have up to three months after graduation to submit papers regarding research completed as an undergraduate. The principal investigator may not be included among the co-authors.