
COMPUTER AND INTERNET TECHNOLOGY

Computer Hardware
A report for
Geoff Wingfield
By
Andrew Hobbs
06/12/14

Table of Contents
1 Introduction
2 Computer Hardware
  2.1 Data transfer and the Computer Bus
  2.2 Microprocessors and heat dissipation
  2.3 Power Consumption
3 Computer Components and speed
4 Building a PC
5 Computer Benchmarking
6 Telephone and Internet System
7 Abbreviations
8 Bibliography

Computer Hardware
Computer and Internet Technology Report by Andrew Hobbs Year 1
HNC Engineering (Electronic Design) 06/12/14

1 Introduction
This report investigates the properties and electrical theory behind computer
hardware and the main components of a computer system. It also looks at the
internet and the telephone infrastructure on which it is based. This is achieved
through five research and analysis tasks which review electrical theory and the
common hardware components found inside a modern computer, and which
investigate data rates, heat dissipation, benchmarking and the internet and
telephone system. In addition, a research task weighs the pros and cons of
building a custom PC against the cost of purchasing a ready-built PC.

2 Computer Hardware
2.1 Data transfer and the Computer Bus

The devices inside a computer are interconnected so that each component can
communicate with the others. Several groups of copper tracks on the
motherboard (a PCB), each called a bus, provide these interconnections.
The motherboard in a PC is the base of the unit, carrying the buses along with
all the other electronic circuitry that links together and drives each of the
main hardware components needed to provide an operating system and user
interface. Typically, a PC has a processor, hard disk drive, RAM, graphics
card, optical drive and a power supply, all connected to the motherboard so
that the PC as a whole can function.


Figure 1
Each arrow in the diagram above represents a communication bus for the
transfer of data between the various hardware components of a PC. The BIOS
(Basic Input/Output System) is the firmware installed on the motherboard that
lets the user access the machine's boot options and drives without an operating
system. It also manages the hardware power-up, checks the system is running
correctly and chooses (or allows the user to choose) which hardware device to
boot from.
The speed at which a computer can run, measured in hertz, is set by the speed
of each piece of hardware and the highest speed the buses can operate at. The
processor is typically the fastest item in the machine and might run at, for
example, 3.1 GHz (gigahertz), but the hard drive (where all the data is stored)
is a mechanical item and may only spin at 7200 rpm, which equates to a far
lower frequency (120 Hz). The buses are ultimately limited in speed as well,
because a bus consists of multiple pairs of tracks, and a pair of tracks can be
modelled as in the diagram below.


Figure 2
The diagram illustrates that a pair of tracks actually has a resistance, a
capacitance and an inductance. The resistance, measured in ohms, is simply the
opposition the copper presents to the electrons moving through it. A
capacitance exists between the two copper tracks because the air between them
acts as a dielectric (just as in a normal capacitor), although this capacitance
is very small. By the right-hand rule, any current-carrying wire produces a
magnetic field; when the current changes, as with an AC signal, the changing
field induces a back-EMF that opposes the original current. Both wires produce
magnetic fields, which combine into a resultant field of larger magnitude. For
an AC signal, both the capacitance and the inductance oppose the flow of
electrons by an amount that depends on the frequency of the signal; this
opposition is called reactance (capacitive and inductive). The equations for
calculating reactance from capacitance, inductance and frequency, along with
the corresponding graphs, are shown below.

Figure 3: Capacitive and Inductive Reactance (learnabout-electronics.org, 2014)


The graphs show that inductive reactance is directly proportional to frequency,
while capacitive reactance is inversely proportional to it: as the frequency
increases, the inductive reactance increases and the capacitive reactance
decreases. You can see from Figure 2 that, as frequency rises, the falling
impedance of the capacitance between the tracks tends towards a short circuit
between them, while the rising impedance of the series inductance increasingly
opposes the signal; in effect the track pair behaves as a low-pass filter. So,
in conclusion, any parallel bus system will always be limited to a maximum
speed, because as the frequency of an applied data signal increases, these
reactive effects attenuate and distort the signal ever more strongly.
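The two reactance formulas above can be illustrated numerically. This is a minimal sketch; the track inductance and capacitance values are illustrative assumptions, not measurements of any real motherboard.

```python
import math

def inductive_reactance(f_hz, l_henry):
    """X_L = 2*pi*f*L: grows linearly with frequency."""
    return 2 * math.pi * f_hz * l_henry

def capacitive_reactance(f_hz, c_farad):
    """X_C = 1/(2*pi*f*C): shrinks as frequency rises."""
    return 1 / (2 * math.pi * f_hz * c_farad)

# Illustrative (assumed) values for a short pair of PCB tracks:
L = 10e-9   # 10 nH of track inductance
C = 1e-12   # 1 pF of inter-track capacitance

for f in (1e6, 100e6, 1e9):  # 1 MHz, 100 MHz, 1 GHz
    print(f"{f/1e6:>6.0f} MHz: X_L = {inductive_reactance(f, L):10.3f} ohm, "
          f"X_C = {capacitive_reactance(f, C):12.1f} ohm")
```

Running this shows the inductive reactance climbing and the capacitive reactance collapsing as the frequency moves from megahertz into gigahertz, which is exactly the trend the graphs in Figure 3 depict.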

2.2 Microprocessors and heat dissipation

The CPU, or central processing unit, is the brain of any computer system. It is
made from millions of tiny electronic devices called transistors, which are
used to perform operations on binary numbers (such as addition and subtraction)
millions of times a second. Each transistor is very small, some with features
as small as 10 nm, but each still has a small resistance. As Ohm's law implies,
when a voltage is applied across a resistive load and a current is drawn,
energy is transferred by the movement of electrons. Although each transistor
draws only a tiny current, and therefore dissipates only a tiny amount of
power, the power used by all the millions of transistors together adds up to a
relatively large amount. As with any powered system, a percentage of the energy
is wasted. As the electrons travel through the conductor they collide with the
atoms of the material, and some of the energy they carry is lost as heat. All
the tiny amounts of wasted heat from each transfer add up to a large
temperature increase. This is why modern CPUs require a large heat sink and
fans to draw the heat away from the chip itself and transfer it into the
surrounding environment, preventing the chip from overheating and being
damaged.
The faster a CPU runs (the higher the frequency of its clock signal), the more
often its transistors switch, and each switching event charges and discharges
the small capacitances inside the chip. More switching per second means more
energy transferred per second, and therefore more energy wasted and given off
as heat.

2.3 Power Consumption

To investigate the running costs of an average computer lab in a school or
business, assume a PC draws 200 watts on average and runs for 10 hours a day. A
unit of electricity (1 kWh) costs 15 pence. How much would it cost to run 20
PCs for a 5-day week?
1 kWh costs 15p, so 200 W for one hour costs 3p.
For 10 hours that is 30p per PC per day.
For 5 days that is 150p per PC.
For 20 computers that is 3000p.
So it costs £30 to run 20 computers drawing 200 W each for 10 hours a day
during a 5-day week.
Running an average computer laboratory for a week therefore uses 200 kWh of
energy and costs the business £30. Taking into account the number of computer
labs in one building, in one city, or across the country, a great deal of
energy is being used and paid for. Globally, the impact on the environment is
considerable because of all the fuel being burnt to produce the energy required
to run all these computers.
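The arithmetic above can be checked in a few lines of code, using the figures quoted in this section:

```python
power_w = 200          # average draw per PC, in watts
hours_per_day = 10
days = 5
num_pcs = 20
pence_per_kwh = 15     # cost of one unit (1 kWh) of electricity

# Energy = power (kW) x hours; scale by days and number of machines.
energy_kwh = power_w / 1000 * hours_per_day * days * num_pcs
cost_pence = energy_kwh * pence_per_kwh

print(f"Energy used: {energy_kwh:.0f} kWh")     # 200 kWh
print(f"Weekly cost: £{cost_pence / 100:.2f}")  # £30.00
```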

3 Computer Components and speed


As previously stated, processors are made up of an intricate network of tiny
transistors that work together to perform thousands upon thousands of simple
operations (like addition and subtraction) every second. Moore's law is an
observation by Gordon E. Moore that the number of transistors in an integrated
circuit doubles roughly every two years. This has proven to be a very reliable
forecast and has actually served as a target for processor developers over the
last 30-40 years. The observation accurately portrays how the latest technology
is used to optimise the speed of processors. Over the past 40 years, processors
have increased in speed exponentially, in proportion to the number of
transistors built into a similarly sized package (the piece of silicon die,
housed to meet the processor specification). The fundamental principle is that
the more transistors in a package, the more operations per second can be
carried out, and therefore the higher the clock speeds that can be achieved.
Modern processors contain multiple cores, which allows them to complete
multiple operations (or run multiple threads) at the same time while sharing
peripheral circuitry between the cores. Another modern technique to improve
overall throughput is hyper-threading. A hyper-threaded core still executes one
instruction stream at any instant, but it presents itself as two logical
processors so that the next thread (operation) can start moving through the
core just behind the first, with the two staggered. For example, if a single
operation runs from point A to point B to point C before completion, the next
operation can start running as soon as the first has left point A to move to
point B. In this way the processor becomes much more efficient, as fewer parts
of the processor sit idle.
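Moore's observation can be sketched as a simple doubling projection. The starting figure of about 2,300 transistors is the commonly quoted count for the Intel 4004 (1971), used here purely as an illustrative baseline:

```python
def moore_projection(initial_count, years, doubling_period_years=2):
    """Project a transistor count forward under Moore's observation:
    the count doubles once per doubling period."""
    return initial_count * 2 ** (years / doubling_period_years)

# 40 years of doubling every 2 years is 20 doublings:
projected = moore_projection(2300, 40)
print(f"{projected:,.0f} transistors")  # roughly 2.4 billion
```

Twenty doublings of 2,300 gives about 2.4 billion transistors, which is broadly in line with high-end CPUs of the early 2010s and illustrates why the forecast has held up so well.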
RAM (Random Access Memory) is a temporary store of data that is directly
connected to the processor by its own dedicated bus. RAM can be accessed
(read/written) much more quickly than a hard drive because it is solid state,
built from capacitors and transistors, whereas the hard drive is mechanical and
therefore limited in speed by its arm and rotating platters. RAM is used to
hold data from the hard drive that would otherwise have to be accessed often,
slowing the processor down. Advances in RAM technology have led to the
development of several different types of RAM and cache memory. Storage
capacity and access speed are nearly always inversely proportional.


D-RAM (dynamic RAM) is the standard type and holds most of the intermediate
data between the hard drive and the processor. Its technology is based on
capacitive charging using FET transistors, and the data is constantly refreshed
(re-written) because the capacitors in the system only hold their charge for a
short time.
S-RAM (static RAM) is based on transistor flip-flop circuits (historically
bipolar technology) and is used for very fast cache memory.
To increase overall computer speed, very fast cache memory is placed at several
levels in and around the processor, giving very fast access to progressively
smaller amounts of data. Most modern processors have three levels of cache:
L1, L2 and L3. L1 cache is located inside each processor core and can run at
speeds almost rivalling the processor itself, but holds only a tiny amount of
data. L2 cache sits close to the core, has somewhat more storage and is a
little slower than L1. L3 cache is the largest and slowest of the three, is
usually shared between the cores, and acts as an intermediary between the RAM
and the processor for the data that is accessed most often.
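The effect of this hierarchy can be sketched as a lookup that probes each level in turn until the data is found. The cycle counts below are illustrative assumptions, not measured figures for any particular processor:

```python
# Illustrative access latencies in CPU cycles (assumed, not vendor figures):
HIERARCHY = [("L1", 4), ("L2", 12), ("L3", 40), ("RAM", 200)]

def access_cost(level_holding_data):
    """Total cycles spent probing each level in turn until the data is found."""
    cycles = 0
    for name, latency in HIERARCHY:
        cycles += latency
        if name == level_holding_data:
            return cycles
    raise ValueError(f"unknown level: {level_holding_data}")

for level in ("L1", "L3", "RAM"):
    print(f"hit in {level}: {access_cost(level)} cycles")
```

The sketch shows why keeping frequently used data in the upper levels matters: a miss all the way out to RAM costs dozens of times more cycles than an L1 hit.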
Another technique used to improve apparent speed is the stack. A stack is a
dedicated region of RAM used to hold the state of each thread or current
operation of the processor. This is necessary because a single core can only
run one operation at a time, so the system is designed to swap the state of
operations onto and off the stack to achieve almost simultaneous operation.
A hard disk drive is a mechanical device used for storing large amounts of
data. The basic construction consists of one or more rigid discs (platters)
coated in millions of tiny magnetic particles, and an actuating arm carrying a
coil at its tip that acts as a magnetic read/write head. With the coil
energised the head writes; unpowered, it senses the magnetic fields on the disc
to read. Data is stored simply by flipping the magnetic particles to lie one
way or the other, representing a 0 or a 1. Hard drive manufacturers perform
low-level formatting, which splits the drive into sections called sectors and
clusters. This is essentially a quick-reference notation system allowing faster
access to a required file without searching the whole disc. A sector is 512
bytes and a cluster is made up of 8 sectors (4 KB). The operating system then
performs high-level formatting of the drive to organise the data storage at a
software level.
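The sector and cluster arithmetic above can be expressed directly. A file always occupies whole clusters, so its on-disk footprint is rounded up:

```python
SECTOR_BYTES = 512
SECTORS_PER_CLUSTER = 8
CLUSTER_BYTES = SECTOR_BYTES * SECTORS_PER_CLUSTER  # 4096 bytes = 4 KB

def clusters_needed(file_bytes):
    """A file occupies whole clusters, so round up (ceiling division)."""
    return -(-file_bytes // CLUSTER_BYTES)

print(CLUSTER_BYTES)            # 4096
print(clusters_needed(10_000))  # 3 (a 10,000-byte file takes 12 KB on disk)
```

This also shows why cluster size matters: even a 1-byte file consumes a full 4 KB cluster.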


Improvements to hard drive technology include increasing the number of discs
and heads in a unit, allowing greater storage and higher read and write speeds.
Speed has also been improved by increasing the rate at which the physical disc
turns (RPM) and the seek speed of the arm, which allows the required data to be
located more quickly. Modern hard drives also include a certain amount of fast
cache memory (RAM) as a much faster intermediate store between the disc and the
system RAM. Frequently used data is preloaded into this cache while the drive
would otherwise be idling, reducing the access time when the data is called
for.
Solid-state drives (SSDs) are another form of long-term data storage, developed
more recently in an attempt to increase the overall speed of computer systems.
An SSD has no moving parts (as the name suggests) and is based on technology
similar to RAM, with charge-storing flash transistors used to hold data rather
than physical magnetic particles. The advantage of solid-state drives is that
much greater access speeds can be achieved, because the drive is not limited by
the speed of a rotating disc or actuating arm. However, SSDs fall down on
longevity: because of the way they store charge, their cells support a limited
number of write cycles, far fewer than a conventional HDD, which makes them
less durable. Also, as an emerging technology, the SSD is still in rapid
development and therefore carries a higher price tag than standard HDDs, which
are a mature technology close to as good as it is going to get.
Graphics cards have been designed to speed up computer operation as a whole. As
the requirement for heavy graphical processing grew (for games or 3D image
manipulation), the standard CPU began to struggle with the number of
computations needed per second. The graphics card, which contains a GPU, V-RAM
and its own cooling fans, is built to handle very large numbers of simple
graphics computations per second, leaving the CPU free to process other normal
computer tasks. A GPU generally runs at a lower clock speed than a CPU but has
as many as a thousand cores, as opposed to just 1 to 4 in a normal CPU. V-RAM
(Video RAM) is a special type of RAM designed to hold graphics data for the GPU
to access quickly.

4 Building a PC

An example of a mid-to-high performance computer that could be used for media
manipulation or gaming is the Hermes i7 Gaming PC, sold for £859.99
(Ukgamingcomputers.co.uk, 2014). It can be purchased pre-assembled and ready to
go, presuming the user has already obtained a display, keyboard, mouse and
operating system. Below is the price breakdown of all the parts needed, to
compare this with building a custom PC of a similar specification.
Component                                         Price (individual)
Coolermaster CM Storm Enforcer                    £63 (Amazon)
Corsair CX750 750W                                £60 (Amazon)
Intel Core i7 4790K 4.00GHz [overclockable]       £243 (Amazon)
8GB Corsair DDR3 XMS3 1600MHz                     £63 (Amazon)
Gigabyte Z97-HD3 R2.0                             £80 (Amazon)
Nvidia GeForce GTX 760 2GB                        £159 (Amazon)
Seagate Barracuda 1TB HDD 7200rpm 64MB cache      £42 (Amazon)
Asus DRW-24B5ST 24x DVD/CD +/- Re-Writer Black    £14 (Amazon)
Total                                             £724
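As a quick arithmetic check, the listed component prices can be summed and compared with the pre-built price (values taken from the price breakdown above; component names abbreviated):

```python
# Component prices in GBP, as listed in the breakdown above:
parts = {
    "Coolermaster CM Storm Enforcer": 63,
    "Corsair CX750 750W": 60,
    "Intel Core i7 4790K": 243,
    "8GB Corsair DDR3 XMS3 1600MHz": 63,
    "Gigabyte Z97-HD3 R2.0": 80,
    "Nvidia GeForce GTX 760 2GB": 159,
    "Seagate Barracuda 1TB HDD": 42,
    "Asus DRW-24B5ST DVD Re-Writer": 14,
}
self_build = sum(parts.values())
pre_built = 859.99

print(f"Self-build total: £{self_build}")                  # £724
print(f"Saving vs pre-built: £{pre_built - self_build:.2f}")  # £135.99
```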


On top of these parts, a self-build user would also need to purchase thermal
compound, SATA cables, cable management and extra case fans.
Factoring in the time taken to put the PC together, and the risk involved in
fitting the processor and motherboard manually with no guarantee of money back
if something goes wrong, it is arguably better value for money to pay for a
pre-built PC at this price point. Another advantage of a pre-built machine is
the warranty that comes with it; for example, the Hermes comes with a 6-year
limited warranty covering component failures due to faulty parts or build
quality.
It should be noted, however, that the Hermes is bought from a retailer
specialising in pre-building computers, and purchasing a pre-built machine of
this specification from a general PC retailer could cost more, so it pays to
shop around. A general trend is that the more expensive the build, the better
value for money it becomes for users to build the computer themselves.

5 Computer Benchmarking
Historically, a benchmark was literally a mark on a workbench used for quickly
and accurately measuring materials to a set size. Since then it has become the
term for testing a system against a particular standard to see how much better
or worse that system is relative to the standard.
In computing, a benchmark is generally a series of tests or applications, run
in sequence, that are specifically designed to stress the various components of
a computer. The time taken to complete each task, and the total time, are
recorded and used as a measure of performance (a benchmarking score).


There are broadly two types of computer benchmarking test: synthetic tests and
realistic tests. Synthetic tests are purpose-built software programs that
usually perform very simple assembler-level operations repeatedly, stressing
the hardware at its maximum speed and capacity and measuring the time taken to
complete the tasks. These tests can be subject to bias and therefore do not
always provide a realistic performance rating (how the computer will actually
perform in the real world). A hardware manufacturer can choose to run a
specific synthetic benchmark on a system that may have been purpose-built to
perform exceedingly well at whichever operation that particular benchmark uses.
This results in a very high benchmarking score that won't be reflected in
real-world performance and could mislead clients and customers.
For example, in 2008 Nvidia claimed that their GPUs could achieve any operation
the new high-end quad-core CPUs could, and still outperform them. To prove
this, Nvidia showcased a video transcoding app that used Nvidia's GPU to
convert video 19x faster than a high-end quad-core CPU. Very impressive;
however, it turned out that the particular application could only utilise one
of the four cores on the CPU, and when the test was rerun using an
industry-standard video transcoding application, the difference in performance
between the two processors was minimal (Rothman, 2014).
A realistic benchmarking test is run from a user environment and usually gives
a much more reliable result. Typically, a program or game that was not designed
as a benchmark is used to complete an operation, or set of operations,
repeatedly, and is timed to give a measure of performance. For example, a macro
(a recording of a group of operations) can be set up in Adobe Photoshop to
perform the same set of operations on the same image over and over. This can
then be timed on multiple hardware configurations to see which performs best.
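The repeat-and-time approach described above can be sketched generically. The workload here (summing a million integers) is a stand-in for whatever repeated operation, such as a Photoshop macro, is actually being measured:

```python
import time

def benchmark(task, repeats=5):
    """Run `task` repeatedly and return the best wall-clock time in seconds.
    The minimum of several runs is used because it is the figure least
    affected by background system load."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        timings.append(time.perf_counter() - start)
    return min(timings)

# Stand-in workload in place of a real application macro:
workload = lambda: sum(range(1_000_000))
print(f"best of 5 runs: {benchmark(workload):.4f} s")
```

Running the same script on two machines and comparing the reported times gives exactly the kind of realistic, repeatable comparison the section describes.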


Together, these two kinds of test give computer users a quantitative measure
for each piece of hardware, or for a system as a whole, that can be compared
against other hardware or systems, allowing a more accurate way of determining
how to build the highest-performance computer. It is also useful for users to
know the benchmarking score of their PC and/or its components when selecting
programs, applications and games their system can run, as it provides a measure
of how well each one will run. For example, the minimum system requirements for
Crysis 3 are a Pentium Dual Core E6600 3.06GHz and 2GB of RAM
(Game-debate.com, 2014).
If the user of a system has an AMD processor, how will he or she know whether
it can run the game acceptably? Comparing the benchmarking score of their own
CPU against a benchmarking score for the Intel Pentium would provide the
answer.
There are many system benchmarking programs available online (paid-for or
free):
- NovaBench (download)
- FutureMark (download)
- PassMark (download)
- CPU Benchmark (cpubenchmark.net) (online)

One of the most accurate and popular free benchmarking packages is SiSoft's
SANDRA 2012. This software provides a multitude of testing options that will
either test your system as a whole in a number of different ways, or run tests
that target individual components while minimising the workload on the others.
It also provides benchmarking scores for other popular hardware components,
along with a few example systems for comparison purposes (PC World, 2014).


6 Telephone and Internet System
The telephone and internet system largely consists of a huge network of routers
and cables that connect people together from all around the world. Large
telecommunications companies and ISPs (Internet Service Providers) provide
networks of routers and switches to control the flow of network traffic and
provide links to other telecommunication networks in other regions of the world.
At a local level, each household has a router connected to the phone line in
some way. This is normally achieved through an ADSL filter, which uses
Frequency Division Multiplexing (FDM) to separate phone calls, downloads and
uploads. The router can usually support many simultaneous connections
(bandwidth and throughput allowing), and each connected device has its data
sent over the telephone wire using Time Division Multiplexing (TDM). The phone
line is then connected to the local exchange by a transmission medium, along
with all the other properties in roughly the same postcode.
Transmission media vary between standard copper cable (dial-up), coaxial cable
(broadband) and fibre optics. Broadband lines allow multiplexing and are now
pretty much the standard for modern life. Dial-up enables internet use over the
phone line but doesn't support multiplexing, which means a user can use either
the phone or the internet, but not both at once. Fibre optics is the latest
development in internet speed and uses pulses of light to transfer data instead
of electrons through a copper cable. This allows far higher data rates over
much longer distances, but it is an expensive system to implement, and
ultimately the user is still limited by the speeds from the local exchange to
the main exchanges and ISPs.
The local exchange box is generally situated no more than a kilometre away.
This is because most systems still use copper cabling, and the longer the
cable, the larger the impedance (opposition) presented to the current carrying
the data. The impedance of the wire is calculated from the resistance,
capacitance and inductance of the cable, along with the frequency of the
supplied signal. As the frequency increases so does the attenuation, so it is
necessary to sacrifice either the length of the cable or the frequency of the
signal.


Bandwidth is a term that comes from analogue electronics and refers to the
range of frequencies able to pass through a conductor. In networking, the
useful or usable range of frequencies is defined by the -3 dB point (roughly
71% of the maximum signal amplitude). Any electronic medium has a frequency
response curve, and therefore a bandwidth, based on its characteristic
impedance when an AC signal is supplied. So there is a limited (large, but
limited) range of frequencies at which signals can be sent down a wire.

Figure 4: Bandwidth and -3dB point (Images.elektroda.net, 2014)
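The -3 dB figure quoted above can be verified in a couple of lines: an amplitude of 1/sqrt(2) of the peak corresponds to half the power, which on the decibel scale is approximately -3 dB.

```python
import math

def db(amplitude_ratio):
    """Express an amplitude ratio in decibels: 20 * log10(ratio)."""
    return 20 * math.log10(amplitude_ratio)

half_power_amplitude = 1 / math.sqrt(2)       # ~0.707 of the peak amplitude
print(f"{half_power_amplitude:.3f}")          # 0.707
print(f"{db(half_power_amplitude):.2f} dB")   # -3.01 dB
```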


Throughput is the amount of data transmitted past a point per second and is the
definitive measure of speed in a networked system.
Multiplexing is the technique of dividing up the capacity of a link so that
multiple signals can be transmitted over the same wire almost simultaneously.
There are two main types of multiplexing: FDM and TDM.
FDM (Frequency Division Multiplexing) uses filter circuitry to chop the
available bandwidth into channels of smaller bandwidth. This allows many
channels of data to be transmitted simultaneously, and it is the principal
technology used in ADSL filters to separate phone calls from broadband uploads
and downloads on the same phone line to the local exchange.
TDM (Time Division Multiplexing) allows each signal to use whatever bandwidth
or frequency range it needs, but only provides a set time slot for data
transmission per channel. A TDM system has a multiplexor at one end and a
de-multiplexor at the other, linked to run in synchrony. The multiplexor
manages all the incoming signals by chopping them up and sending a set amount
of data for each channel, one after the other; the de-multiplexor takes each
chunk of data and directs it back to its intended destination at the other end.
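The TDM scheme just described can be sketched as a round-robin interleaver. The two channels and their chunk labels below are illustrative stand-ins for, say, a phone call and a download sharing one line:

```python
from itertools import zip_longest

def tdm_multiplex(channels):
    """Interleave one chunk from each channel per time slot (round-robin)."""
    frame = []
    for slot in zip_longest(*channels, fillvalue=None):
        frame.extend(chunk for chunk in slot if chunk is not None)
    return frame

def tdm_demultiplex(frame, n_channels):
    """Reverse the interleaving back into per-channel streams
    (assumes equal-length channels, as in this illustration)."""
    return [frame[i::n_channels] for i in range(n_channels)]

phone = ["P1", "P2", "P3"]   # chunks of a phone call
down  = ["D1", "D2", "D3"]   # chunks of a download
line = tdm_multiplex([phone, down])
print(line)                      # ['P1', 'D1', 'P2', 'D2', 'P3', 'D3']
print(tdm_demultiplex(line, 2))  # [['P1', 'P2', 'P3'], ['D1', 'D2', 'D3']]
```

The interleaved frame is what travels down the shared wire; the synchronised de-multiplexor at the far end reassembles the original streams.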

From the local exchange, data is sent to larger, centralised exchanges,
normally located in large towns or cities, to be routed to your ISP or
telecommunications company and onward towards its destination. Every device
connected to the internet is allocated an IP address based on the router it is
connected to, the size of the local exchange and its geographical location.
Data is sent in packets carrying a destination IP address and is routed through
the large exchanges, with each packet potentially taking a different route
depending on the volume of traffic along each route and the availability of
servers and routers to point the way.
The internet uses a packet-switched system to help maintain even traffic flows
and good data-transfer speeds even at times of peak activity, as opposed to a
continuous-connection system like the traditional phone network. Any data sent
over the internet is chopped into small packets of data, about 4 KB in size,
which are sent individually to their destination. The advantage of this is that
any lost data simply needs to be requested and resent very quickly, instead of
missing data corrupting a file and forcing the entire file to be downloaded
again. Individual data packets can also be routed along the quickest and least
busy paths, improving overall traffic flow and keeping internet speeds higher.
A data packet is normally split into a header, the data and a trailer. The
header contains information such as the source IP address, the destination IP
address, the file address and a size; the trailer denotes the end of the
packet. Whichever OS (operating system) is being used contains a protocol stack
that is used to packetise data. An application wishing to send data over the
internet loads the data into the protocol stack, which establishes a
connection, chops the data into chunks, interfaces to the hardware, sends each
packet one by one (very quickly) and waits for acknowledgement of each packet's
arrival.
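The packetising step can be sketched as follows. The field names and the 4 KB payload size follow the description above; the IP addresses are made-up examples, and this is an illustration of the idea rather than any real protocol stack's format:

```python
PACKET_DATA_BYTES = 4096  # ~4 KB of payload per packet, as described above

def packetize(payload: bytes, src: str, dst: str):
    """Split a payload into numbered packets, each with a header and trailer."""
    packets = []
    for seq, offset in enumerate(range(0, len(payload), PACKET_DATA_BYTES)):
        chunk = payload[offset:offset + PACKET_DATA_BYTES]
        packets.append({
            "header": {"src": src, "dst": dst, "seq": seq, "size": len(chunk)},
            "data": chunk,
            "trailer": "END",  # marks the end of the packet
        })
    return packets

# Example: a 10,000-byte payload between two made-up addresses.
pkts = packetize(b"x" * 10_000, "192.168.1.10", "93.184.216.34")
print(len(pkts))                    # 3 packets
print(pkts[-1]["header"]["size"])   # 1808 bytes left in the final packet
```

The sequence number in each header is what lets the receiving end reassemble the payload in order, and request only the missing packet if one is lost.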


Figure 5: Protocol Stack (Novell.com, 2014)


7 Abbreviations
PCB - Printed Circuit Board
RAM - Random Access Memory
I/O - Input/Output
BIOS - Basic Input/Output System
CPU - Central Processing Unit
SATA - Serial Advanced Technology Attachment
USB - Universal Serial Bus
RPM - Revolutions Per Minute
EMF - Electromotive Force
IC - Integrated Circuit
FET - Field-Effect Transistor
FDM - Frequency Division Multiplexing
TDM - Time Division Multiplexing
OS - Operating System
ADSL - Asymmetric Digital Subscriber Line
GPU - Graphics Processing Unit
HDD - Hard Disk Drive
ISP - Internet Service Provider
SSD - Solid-State Drive

8 Bibliography
Learnabout-electronics.org, (2014). Figure 3: Capacitive and Inductive Reactance
[online] Available at: http://www.learnabout-electronics.org/ac_theory [Accessed
1 Dec. 2014].
Images.elektroda.net, (2014). Figure 4: Bandwidth and -3dB point. [online]
Available at: http://images.elektroda.net/90_1310131721.png [Accessed 28 Nov.
2014].
Novell.com, (2014). Figure 5: Protocol Stack. [online] Available at:
https://www.novell.com/documentation/nw65/ntwk_ipv4_nw/graphics/con_022a.gif
[Accessed 5 Dec. 2014].
Rothman, W. (2014). Computer Benchmarking: Why Getting It Right Is So Damn
Important. [online] Gizmodo. Available at:
http://gizmodo.com/5373379/computer-benchmarking-why-getting-it-right-is-so-damn-important
[Accessed 25 Nov. 2014].


Game-debate.com, (2014). Crysis 3 System Requirements and Crysis 3
requirements for PC Games. [online] Available at:
http://www.game-debate.com/games/index.php?g_id=3953&game=Crysis%203
[Accessed 2 Dec. 2014].
PC World, (2014). How to Benchmark Your PC for Free. [online] PCWorld.
Available at:
http://www.pcworld.com/article/258473/how_to_benchmark_your_pc_for_free.html
[Accessed 4 Dec. 2014].
Ukgamingcomputers.co.uk, (2014). Hermes - i7 Gaming PC | i7 PCs | Gaming PCs
& Custom PC desktops - UK Gaming Computers. [online] Available at:
http://www.ukgamingcomputers.co.uk/hermes-i7-gaming-pc-p-98.html [Accessed
4 Dec. 2014].
