– Louis C.K.
No one can deny that technology has come a long way in a relatively short time. In
1945, the ENIAC, one of the world's first general-purpose electronic computers, was completed. It was
heralded as a "Great Brain" and used by the military to calculate artillery firing tables. But really, it was
just a thirty-ton calculator that could do about 5,000 simple addition or subtraction operations per
second [1]. Is that faster than a
human brain? Yes. But today's computers, including the outdated model this paper is being written on,
can do upwards of 50,000,000 simple calculations per second. Even the hand-held calculators that
grade-school children carry in their backpacks have more computational power than the ENIAC. It is
clear that these advancements, which have taken place in less than an average human's lifespan, are
very advantageous in many aspects of human life. Calculations that took a team of engineers a year to
complete, the ENIAC could do in two hours, and computers now can do in seconds. This is the subject
this paper will analyze: improving system function is critical in allowing for better performance, which
in turn makes people and businesses more productive.
A good place to start the journey through improving system function (the computer will be the
primary system discussed) is the CPU, or Central Processing Unit. The earliest machines, using the
ENIAC as an example again, had to be physically rewired to do different tasks. For example, if you
wanted to compute an artillery firing table and then, after that was complete, do some calculations on
the hydrogen bomb, an engineer, or team of engineers, would physically have to go into the machinery
and rewire certain aspects of the system. Even before the completion of the ENIAC, John von
Neumann saw this as a severe limitation. In his paper "First Draft of a Report on the EDVAC" [2], von
Neumann discusses the design of a "stored-program computer." Rather than the wiring determining
which programs the machine could or could not run, the system would use "high-speed computer
memory," eliminating the need for a team to come out and rewire the system to change programs.
So what makes up a CPU? There are two parts: the arithmetic/logic unit (ALU) and the control
unit (CU). As the name suggests, the ALU carries out arithmetic and logic operations on the operands,
or inputs, in computer instruction words. The control unit is the part of the CPU that tells it what to do.
Operations vary, but in general the control unit performs the tasks of fetching, decoding, managing
execution, and then storing the results produced by the ALU. If one were to think of the CPU as the
"brain" of a computer, the CU is the "brain" of the CPU. However, we can't take that breakdown logic
any further due to issues with infinite regress, which will be discussed in my next paper on the
philosophy of advanced math.
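To make the fetch-decode-execute cycle concrete, here is a minimal Python sketch of a toy accumulator machine. The opcodes, instruction format, and single register are invented for illustration; real CPUs are vastly more complex.

```python
# Toy illustration of the control unit's fetch-decode-execute cycle.
# The opcodes and single-accumulator design are invented for this sketch
# and do not correspond to any real instruction set.

def run(program):
    """Execute a list of (opcode, operand) instructions; return the accumulator."""
    acc = 0          # accumulator register: where the ALU's results live
    pc = 0           # program counter: which instruction to fetch next
    while pc < len(program):
        op, arg = program[pc]      # fetch the instruction, decode its parts
        if op == "LOAD":           # execute: the ALU-style operations
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "SUB":
            acc -= arg
        elif op == "HALT":
            break
        pc += 1                    # advance to the next instruction
    return acc

# 7 + 5 - 2
result = run([("LOAD", 7), ("ADD", 5), ("SUB", 2), ("HALT", 0)])
print(result)  # 10
```

A real control unit does the same loop in hardware, millions of times per second.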
We have discussed the CPU in a little bit of detail now, so let's talk about how we can make it
faster. One of the most common terms when talking about a CPU's speed is "clock rate." Basically, this
is how fast a CPU can perform the basic functions listed earlier. Most common computers run what is
called a synchronous CPU. This means that it has an oscillating clock that regulates the rate at which
instructions are executed and synchronizes all the various computer components. Early clocks, like the
one in the EDVAC, had frequencies in the range of 100 kHz to 4 MHz. These clocks were significantly
limited by the materials available for switching mechanisms (specifically tubes and relays). Nowadays,
for a given CPU, the clock rate is determined at the end of the manufacturing process through actual
testing of each CPU, and most consumer CPUs operate somewhere between roughly 1 and 4 GHz.
Clock rate, however, is not always a good indicator of how fast a CPU works. For
instance, in theory, a 2 MHz processor should work twice as fast as a 1 MHz processor. However, if the
1 MHz processor is more efficient, it might be able to do more work in a clock cycle than the 2 MHz
processor. Clock-for-clock comparisons only hold with all else being equal, and CPU speed depends on
many other aspects of computer design. Some of these aspects include, but are not limited to,
parallelism, CPU cache, and, maybe most of all, whether or not you have more than one processor.
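The interaction between clock rate and per-cycle efficiency reduces to a simple calculation: effective throughput is clock rate times instructions completed per cycle (IPC). The IPC figures below are invented to mirror the 1 MHz versus 2 MHz comparison above.

```python
# Effective throughput = clock rate x instructions per cycle (IPC).
# The IPC values are illustrative, not measurements of any real chip.

def throughput(clock_hz, ipc):
    """Instructions completed per second."""
    return clock_hz * ipc

fast_but_inefficient = throughput(2_000_000, 0.5)  # 2 MHz, half an instruction/cycle
slow_but_efficient = throughput(1_000_000, 1.5)    # 1 MHz, 1.5 instructions/cycle

print(fast_but_inefficient, slow_but_efficient)  # 1000000.0 1500000.0
```

Despite half the clock rate, the more efficient chip finishes 50% more instructions per second.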
The early CPUs were built such that they could only work on one task at a time; these CPUs are
referred to as sub-scalar. This seems logical, but is also inherently inefficient because the CPU must
wait for that one operation to finish before moving on to the next. This problem was fixed by
introducing methods to make CPUs act in parallel. Basically, this is a way to build a CPU such that it
can work on multiple tasks at once. There are two main ways to achieve this result: Instruction Level
Parallelism, which seeks to increase the speed of execution for each task, and Thread Level Parallelism,
which seeks to increase the number of tasks the CPU can execute simultaneously. Neither of these
techniques is inherently better than the other; they are just different ways to increase CPU parallelism.
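As a rough sketch of the thread-level idea, one large task can be split into independent chunks that run on separate worker threads. (In CPython, threads only speed up I/O-bound work because of the global interpreter lock; a process pool plays the same role for CPU-bound work. The structure, not the timing, is the point here.)

```python
# Thread-level parallelism sketch: divide one big job into independent
# pieces and hand each piece to its own worker.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

numbers = list(range(1, 101))
# Four independent 25-element tasks that can execute concurrently:
chunks = [numbers[i:i + 25] for i in range(0, 100, 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 5050
```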
The CPU cache also helps boost computer speed and performance in a significant way.
Basically, the cache is a small amount of memory, maybe 512 KB to 6 MB, that the CPU has immediate
access to. Inside this cache, copies of frequently used data are stored. When the CPU needs to read or
write a main memory location, it will first check to see if a copy of that information is
in the cache. If there is a cache copy, the CPU operates on it immediately, but if not, the CPU has to go
to the actual storage location, which takes longer depending on the speed of that memory (main
memory is far slower than cache, and a hard disk is slower still).
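The check-the-cache-first logic can be sketched as follows. The capacity, eviction rule, and data are invented for illustration, and real hardware details like cache lines and associativity are ignored.

```python
# Minimal sketch of cache lookup: try the small fast cache first; on a
# miss, fall back to slower backing storage and keep a copy for next time.

class CachedMemory:
    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # stands in for slow main memory
        self.cache = {}                # small, fast: address -> value
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.cache:      # cache hit: fast path
            self.hits += 1
            return self.cache[address]
        self.misses += 1               # cache miss: slow path
        value = self.backing[address]
        if len(self.cache) >= self.capacity:
            # evict the oldest entry to make room (dicts keep insertion order)
            self.cache.pop(next(iter(self.cache)))
        self.cache[address] = value
        return value

mem = CachedMemory({addr: addr * 10 for addr in range(100)})
for addr in [1, 2, 1, 1, 2, 3]:        # repeated reads hit the cache
    mem.read(addr)
print(mem.hits, mem.misses)  # 3 3
```

Half of these reads never touch the slow backing store, which is exactly the speedup a real cache provides.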
Finally, the most striking way to increase overall CPU speed and efficiency is to simply add
another processor. With today's electronics constantly getting smaller and smaller, it becomes possible
to put multiple processors on the same chip. In principle, this means that one should be able to double
the amount of work a single system can do. However, as was the case with clock rates, the amount of
benefit received depends on the software algorithms and their implementation. So, on average, a dual-
core processor (2 CPUs) will increase productivity by about 50%, not 100% [3]. In other words, a dual-
core processor will generally work about 1.5 times faster than a single core. But, technology always
advancing the way it does, quad-cores and even hex-cores are becoming more and more popular.
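The "about 50%, not 100%" figure is a consequence of what is known as Amdahl's law: speedup is limited by whatever fraction of a program must remain serial. The 0.67 parallel fraction below is chosen only to reproduce roughly the 1.5x dual-core figure cited above; real workloads vary widely.

```python
# Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / cores).
# The parallel fraction here is illustrative, picked to match the ~1.5x
# dual-core figure discussed in the text.

def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

print(round(amdahl_speedup(0.67, 2), 2))     # 1.5  -> two cores, ~1.5x
print(round(amdahl_speedup(0.67, 1000), 2))  # 3.02 -> even 1000 cores cap out near 3x
```

This is why adding cores gives diminishing returns: the serial portion of the software sets a hard ceiling no matter how many processors are added.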
The CPU isn't the only device that can enhance the speed of a system. Another big performance
enhancer is memory. As was stated earlier, the first computers were basically really big calculators that
could do rather rudimentary operations. Today you can buy an external hard drive that can hold
upwards of 2 terabytes of data. In comparison, as recently as the early 1980s, a personal computer with
32 kilobytes of memory was a big deal. As anyone familiar with computers knows, most song files are
far larger than 32 KB. Greater storage capacity makes it possible to run more programs, retain more
data, and do more operations; however, none of that matters if the system can't do it quickly.
Therefore, let's examine the ways computer memory has been enhanced.
We have already looked at how the CPU's cache memory has improved system performance but
another type of memory, Random Access Memory, is also important for speeding up a system. RAM is
a type of data storage used in computers. The word "random" refers to the fact that any piece of
stored data can be accessed directly, in roughly the same amount of time, in a
consistent manner. Obviously, the more RAM a system has, the more data it can keep ready for the
CPU to work on quickly.
RAM usually comes in modules, or sticks, that are easily swapped into and out of a system.
This is perhaps one of the biggest benefits to RAM; it is easy to upgrade without affecting other
hardware inside a system. The only real downside to RAM is the fact that it is volatile, meaning that if
power is lost, the information stored in RAM will be lost. There are, however, new developments being
made to produce a non-volatile type of RAM, called MRAM, which will not lose any data when
powered down [4].
Another improvement, wide-path memory access, increases the effective rate of memory
access. Basically, this allows the CPU to read or write two to sixteen bytes simultaneously
instead of just one. This improvement is easily implemented by widening the bus data path and using a
larger data register. For a CPU using a parallel processing system, this makes operations run smoothly.
Finally, memory interleaving is another method for increasing the effective rate of memory
access. This means that the CPU can access multiple data storage locations at one time. This method is
particularly useful when multiple devices require access to the same memory. Thus, several different
components may make memory requests at the same time which increases efficiency and speed.
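A common interleaving scheme assigns consecutive addresses to different banks, for example by taking the address modulo the number of banks, so back-to-back accesses usually land in separate banks and can proceed in parallel. A minimal sketch, with an invented bank count:

```python
# Memory interleaving sketch: spread consecutive addresses across
# independent banks so sequential accesses can overlap. The bank count
# and addresses are illustrative.

NUM_BANKS = 4

def bank_for(address):
    """Which bank holds this address under simple modulo interleaving."""
    return address % NUM_BANKS

# Sequential addresses rotate through all four banks:
print([bank_for(a) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

Because neighboring addresses live in different banks, a burst of sequential reads keeps all four banks busy at once instead of queueing on one.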
So far we have discussed the many ways that a system can be enhanced and sped up. The
techniques mentioned are, of course, not the only ones available, but let us turn now to how and why
speed is important for real world productivity. We can examine the business sector, scientific research,
and many other aspects of real life to see why speed and performance are important.
The business sector is a constantly changing and adapting aspect of human life. At the airport,
any person can pick the businessmen out of the crowd. They're usually the people running around with
a Bluetooth headset in their ear, talking on a BlackBerry, while checking email on their
laptops. All of these devices are very high-speed and useful for multiple aspects of the business world.
A stock broker, for example, has to process the available information quickly in order to make the right
decision on whether to buy or sell.
Another business and real-world issue solved by computers and technology is communication.
Email is probably the most common form of communication today. It seems like the only way to get
anything done anymore is to “send me an email” even when the person you need to talk to is two rooms
down the hall. Email is just that convenient! Easily millions of emails are sent per day, ranging in topic
from business matters, like quarterly reports, to the most mundane and insignificant messages telling you
that someone “likes your facebook status.” All of this traffic has to be sorted and sent by some rather
advanced machines. It takes just seconds for an email to get from Korea to the United States
because of the computer enhancements that have been made in the past decade or so.
Scientific research is another area in which computer systems are hugely important. No matter
what science we talk about, medicine, astronomy, engineering, biology, or chemistry, computers are the
tools that all of these scientists use to get their work done and make our lives better. Computers are
constantly running programs to decode things like the human genome, making calculations about distant
stars and planets, or running simulations on what could happen if some new virus were unleashed on the
public. All of these things would not be possible without high-powered and extremely efficient
machines.
It used to be that scientific papers and journals were only available to a select few people,
published, and then kept in one location. If someone wanted to read that paper, he or she would have to
put in a request to get it mailed to them, which could take weeks or even months. Now, with the advent
of the internet and other networking technologies, what someone has to say on a subject in China can
be read the next day by any interested party anywhere in the world.
This access to information and being able to retrieve it at any time is a huge step up from where
we used to be just sixty years ago. Without the many highly sophisticated servers world-wide to pass,
store, and update this information, the many breakthroughs we take advantage of would be fewer and
farther between. As it stands today, it seems like every couple of months a new breakthrough is
announced.
Manufacturing is another area revolutionized by computers. Automated machines, rather than
humans, make and produce cars, textiles, and other goods. This cuts down on the need for human labor
and saves companies thousands, if not millions, of dollars; there is only a need to hire one or two
engineers to fix the automated machines when they break down. This has also made the manufacturing
industry more efficient because you can have a machine work all night without getting tired.
Production has increased, safety has increased because fewer humans are doing the dangerous work
and therefore not getting hurt, and the company makes bigger profits because it doesn't need to hire
and pay as many workers.
Another way computers improve productivity is by the use of work groups. If there are two
branches of a business or some other company separated by a significant distance, the use of online
work groups makes it possible for those two, or more, branches to work together on a project by
pooling all their information in one easy-to-access spot. This makes it very easy and time-efficient for a
large project to be divided up, worked on by people hundreds of miles apart, and then put back together
to be presented.
Human services have also benefited from the use of computers and technology. It used to be
that people would have to spend a lot of time filing paperwork and organizing vast amounts of
information into drawers, cabinets, and lockers. When one piece of information was needed, it would
have to be manually retrieved from a drawer somewhere that someone thought was a logical place for
it. With the advent of computers, filing papers became far simpler. A template for a
document could be created that an HR representative fills out for each new prospective client. Or any
mandatory training that a new hire needed to complete could be done online, with a copy of their
completion record stored automatically.
Easy retrieval is really the key to work-place efficiency and computers now are amazing at that
and only getting better. Medical records can be kept on a network so a nurse can bring up your name in
a matter of seconds and know what you've had wrong with you in the past and what medications don't
work for you. Accountants can bring up past tax information in case their business is being audited.
Stock brokers can tell you with just a few clicks how much money you have made or lost in the past
couple days, months, or years. Lawyers can bring up all the information needed for their trial at 10:00
and then, with a couple of clicks, be ready for their next trial at 11:00. The possibilities and uses are
really
endless.
There are also computers in our everyday lives that we don't even think about or that we take
for granted. Chances are, there is a computer chip or something similar inside your car that
tells you there is something wrong with the engine, how much gas you have left, or how many miles
you have driven. Automated programs run the traffic signal lights on highways and roads. Advanced
computer programs can monitor patients on life-support. Today's smart phones are actually small
computers that can also make phone calls around the world. Computers run the switchboards that route
those same phone calls. Electronic musical instruments probably made most of the music you listen to
in your car. It is very difficult to think of things that a computer or programmed robot couldn't do.
As you can see, significant advances have been made in computer and system technology. The
speed with which data can be accessed is only getting faster and the amount of data that can be stored
on a machine is only getting bigger. New technologies and techniques are being developed every day to
make life easier, to make workers more productive, to make workplaces more efficient, and, therefore,
make businesses more profitable. Not only that, but computers are part of our everyday lives in ways
we often don't even think about; from cars to cell phones to the automated programs that change the
traffic lights on the highway. Computers enhance our lives and we make them faster so we can do even
more.
Outside References:
1. "The ENIAC Story," http://ftp.arl.mil/~mike/comphist/eniac-story.html
2. "First Draft of a Report on the EDVAC" (PDF) by John von Neumann, Contract No. W-670-ORD-4926,
between the United States Army Ordnance Department and the University of Pennsylvania.
http://www.virtualtravelog.net/wp/wp-content/media/2003-08-TheFirstDraft.pdf
3. "Multi-core processor," http://searchdatacenter.techtarget.com/definition/multi-core-processor
4. http://www.crocus-technology.com/pdf/BH%20GSA%20Article.pdf