
Code : CS 201(First Semester)

Contacts : 2L + 1T = 3
Credits : 3
Fundamentals of Computer:
History of Computers, Generations of Computers, Classification of Computers 2L
Basic Anatomy of Computer System, Primary & Secondary Memory, Processing Unit, Input & Output
devices
3L
Binary & allied number systems, representation of signed and unsigned numbers, BCD, ASCII, binary
arithmetic & logic gates
6L
Assembly language, high level language, compiler and assembler (basic concepts)
2L
Basic concepts of operating systems like MS DOS, MS WINDOWS, UNIX; algorithms & flowcharts
2L
C Fundamentals:
The C character set, identifiers and keywords, data types & sizes, variable names, declarations, statements 3L
Operators & Expressions:
Arithmetic operators, relational and logical operators, type conversion, increment and decrement
operators, bitwise operators, assignment operators and expressions, precedence and order of evaluation.
Input and Output: Standard input and output, formatted output -- printf, formatted input -- scanf.
5L
Flow of Control:
Statements and blocks, if-else, switch, loops (while, for, do-while), break and continue, goto and labels
2L
Fundamentals and Program Structures:
Basics of functions, function types, functions returning values, functions not returning values, auto,
external, static and register variables, scope rules, recursion, function prototypes, C preprocessor,
command line arguments.
6L
Arrays and Pointers:
One dimensional arrays, pointers and functions, multidimensional arrays. 6L
Structures Union and Files:
Basics of structures, structures and functions, arrays of structures, bit fields, formatted and unformatted
files.
5L
Recommended Reference Books:
E. Balagurusamy, Introduction to Computing (TMH WBUT Series), TMH
Kernighan, B.W., The Elements of Programming Style
Yourdon, E., Techniques of Program Structure and Design
Scheid, F.S., Theory and Problems of Computers and Programming
Gottfried, Programming with C, Schaum
Kernighan, B.W. & Ritchie, D.M., The C Programming Language
Rajaraman, V., Fundamentals of Computers
Balagurusamy, Programming in C
Kanetkar, Y., Let Us C
M.M. Oka, Computer Fundamentals, EPH
Leon, Introduction to Computers, Vikas
Leon, Fundamentals of Information Technology, Vikas
The Five Generations of Computers
The history of computer development is often discussed in terms of the different generations of computing devices.
Each generation of computer is characterized by a major technological development that fundamentally changed the way
computers operate, resulting in increasingly smaller, cheaper, more powerful, more efficient and more reliable devices.

Read about each generation and the developments that led to the current devices that we use today.

First Generation (1940-1956) Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking
up entire rooms. They were very expensive to operate and in addition to using a great deal of electricity, generated a lot
of heat, which was often the cause of malfunctions.

First generation computers relied on machine language, the lowest-level programming language understood by
computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards
and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first
commercially produced computer; it was delivered to the U.S. Census Bureau in 1951.

Second Generation (1956-1963) Transistors


Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in
1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube,
allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation
predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a
vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and
printouts for output.

Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which
allowed programmers to specify instructions in words. High-level programming languages were also being developed at
this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their
instructions in their memory, which moved from a magnetic drum to magnetic core technology.

The first computers of this generation were developed for the atomic energy industry.

Third Generation (1964-1971) Integrated Circuits


The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were
miniaturized and placed on chips of silicon, a semiconductor material, which drastically increased the speed and efficiency of
computers.

Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors
and interfaced with an operating system, which allowed the device to run many different applications at one time with a
central program that monitored the memory. Computers for the first time became accessible to a mass audience because
they were smaller and cheaper than their predecessors.
Fourth Generation (1971-Present) Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a
single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004
chip, developed in 1971, located all the components of the computer—from the central processing unit and memory to
input/output controls—on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh.
Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more
everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form networks, which eventually led
to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and
handheld devices.

Fifth Generation (Present and Beyond) Artificial Intelligence


Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some
applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is
helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically
change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond
to natural language input and are capable of learning and self-organization.

Classification of Computers
Computers are classified according to their data processing speed, the amount of data they can hold, and price. Generally,
a computer with high processing speed and large internal storage is called a big computer. Due to rapidly improving
technology, the boundaries between these categories are never sharp.

Depending upon their speed and memory size, computers are classified into the following four main groups.

1. Supercomputer.
2. Mainframe computer.
3. Mini computer.
4. Microcomputer.

1. Supercomputer
The supercomputer is the most powerful, fastest and most expensive type of computer. The first supercomputers appeared in
the 1960s. They are used to process large amounts of data and to solve complicated scientific problems. A supercomputer can
perform more than one trillion calculations per second. It has a large number of processors connected in parallel, so parallel
processing is done in this computer. Thousands of users can be connected to a single supercomputer at the same time, and
the supercomputer handles the work of each user separately. Supercomputers are mainly used for:

• Weather forecasting.
• Nuclear energy research.
• Aircraft design.
• Automotive design.
• Online banking.
• To control industrial units.

Supercomputers are used in large organizations, research laboratories, aerospace centers, large industrial units etc.
Nuclear scientists use supercomputers to create and analyze models of nuclear fission and fusion, predicting the actions
and reactions of millions of atoms as they interact. Examples of supercomputers are the CRAY-1, CRAY-2, Control
Data CYBER 205 and ETA-10.

2. Mainframe Computers
Mainframe computers are also large-scale computers, but supercomputers are larger than mainframes. These are also very
expensive. A mainframe computer typically requires a very large, clean, air-conditioned room, which makes it very
expensive to buy and operate. It can support a large number of peripheral devices and it also has multiple processors. Large
mainframe systems can handle the input and output requirements of several thousand users. For example, the IBM S/390
mainframe can support 50,000 users simultaneously. Users often access the mainframe with terminals or personal
computers. There are basically two types of terminals used with mainframe systems. These are:
i) Dumb Terminal
A dumb terminal does not have its own CPU or storage devices. This type of terminal uses the CPU and storage devices
of the mainframe system. Typically, a dumb terminal consists of a monitor and a keyboard (and sometimes a mouse).

ii) Intelligent Terminal
An intelligent terminal has its own processor and can perform some processing operations. Usually, this type of terminal
does not have its own storage. Typically, personal computers are used as intelligent terminals. A personal computer used as
an intelligent terminal can access data and other services from the mainframe system, and it can also store and
process data locally.
Mainframe computers are often used as servers on the World Wide Web. They are used in
large organizations such as banks, airlines and universities, where many people (users) need frequent access to the
same data, which is usually organized into one or more huge databases. IBM is the major manufacturer of mainframe
computers. Examples of mainframes are the IBM S/390, Control Data CYBER 176 and Amdahl 580.

3. Minicomputers
These are smaller in size, have lower processing speed and also cost less than mainframes. These computers are
known as minicomputers because of their small size as compared to other computers of their time. The capabilities of a
minicomputer lie between those of a mainframe and a personal computer. These computers are also known as midrange computers.

Minicomputers are used in business, education and many government departments. Although some
minicomputers are designed for a single user, most are designed to handle multiple terminals. Minicomputers are
commonly used as servers in network environments, where hundreds of personal computers can be connected to the network
with a minicomputer acting as the server. Like mainframes, minicomputers are also used as web servers. Single-user
minicomputers are used for sophisticated design tasks.

The first minicomputer was introduced in the mid-1960s by Digital Equipment Corporation (DEC). After this, IBM
(with its AS/400 computers), Data General Corporation and Prime Computer also designed minicomputers.

4. Microcomputer
Microcomputers are also known as personal computers or simply PCs. A microprocessor is used in this type of
computer. These computers are very small in size and low in cost. IBM's first microcomputer was introduced in 1981 and was
named the IBM PC. After this, many computer hardware companies copied the design of the IBM PC. The term "PC-compatible"
refers to any personal computer based on the original IBM personal computer design.

The most popular types of personal computers are the PC and the Apple. PC and PC-compatible computers have
processors with different architectures from the processors in Apple computers, and the two types also use
different operating systems: PC and PC-compatible computers use the Windows operating system, while Apple computers
use the Macintosh operating system (Mac OS). The majority of microcomputers sold today are IBM-compatible.
However, the Apple Macintosh is neither an IBM nor a compatible; it is a separate family of computers made by Apple.

Personal computers are available in two models. These are:

1. Desktop PCs
2. Tower PCs

A desktop personal computer is the most popular model of personal computer. The system unit of the desktop personal
computer can lie flat on the desk or table. In a desktop personal computer, the monitor is usually placed on the system unit.

Another model of the personal computer is the tower personal computer. The system unit of the tower PC is placed
vertically on the desk or table. Usually the system unit of the tower model is placed on the floor to free desk
space, so that the user can place other devices, such as a printer or scanner, on the desktop. Computer tables are now
available which are specially designed for this purpose. Tower models are mostly used at homes and offices.

Microcomputers are further divided into the following categories.

1. Laptop computer
2. Workstation
3. Network computer
4. Handheld computer

1. Laptop computer

The laptop computer is also known as a notebook computer. It is small (about 8.5-by-11 inches, roughly the size of a notebook)
and can fit inside a briefcase. The laptop computer is operated on a special battery and does not have to be plugged in like a
desktop computer. The laptop computer is a portable and fully functional microcomputer. It is mostly used while travelling and
can be used on your lap in an airplane, which is why it is referred to as a laptop computer.

The memory and storage capacity of a laptop computer is almost equivalent to that of a PC or desktop computer. It also has a
hard disk, floppy disk drive, Zip disk drive, CD-ROM drive, CD-writer etc. It has a built-in keyboard and a built-in trackball
as a pointing device. Laptop computers are also available with the same processing speed as the most powerful personal
computers, which means that a laptop computer has the same features as a personal computer. Laptop computers are more
expensive than desktop computers. These computers are frequently used by business travelers.

2. Workstations
Workstations are special single-user computers having the same general features as a personal computer but processing
speed equivalent to a minicomputer or mainframe computer. A workstation computer can fit on a desktop. Scientists,
engineers, architects and graphic designers mostly use these computers.

Workstation computers are expensive and powerful. They have more advanced processors, more RAM and more storage
capacity than personal computers. They are usually used as single-user systems, but they are also used as servers on
computer networks and as web servers.

3. Network Computers
Network computers are a version of the personal computer with less processing power, memory and storage. They are
specially designed as terminals for network environments. Some types of network computers have no storage at all. Network
computers are designed for a network, the Internet or an intranet, for data entry or to access data on the network. A
network computer depends upon the network's server for data storage and software, and it also uses the
network's server to perform some processing tasks.

In the mid-1990s the concept of the network computer became popular among some PC manufacturers. As a result, several
variations of the network computer quickly became available. In business, variations of the network computer are
Windows terminals, NetPCs and diskless workstations. Some network computers are designed to access only the Internet
or an intranet; these devices are sometimes called Internet PCs, Internet boxes etc. In homes, some network computers
do not include a monitor; they are connected to a home television, which serves as the output device. A popular example
of a home-based network computer is WebTV, which enables the user to connect a television to the Internet. WebTV
uses a special set-top box to connect to the Internet and also provides a set of simple controls which enable the
user to navigate the Internet, send and receive e-mail and perform other tasks on the network while watching
television.

Network computers are cheaper to purchase and to maintain than personal computers.

4. Handheld Computers
In the mid-1990s, many new types of small personal computing devices were introduced; these are referred to as
handheld computers. These computers are also referred to as palmtop computers and are sometimes
called mini-notebook computers. This type of computer is so named because it can fit in one hand
while you operate it with the other. Because of its reduced size, the screen of a handheld computer is quite small, and
similarly it has a small keyboard. Handheld computers are preferred by business travelers. Some handheld
computers have a specialized keyboard. These computers are used by mobile employees, such as meter readers and
parcel delivery people, whose jobs require them to move from place to place.

Examples of handheld computers are:

1. Personal Digital Assistants (PDAs)
2. Cellular telephones
3. H/PC Pro devices

1. Personal Digital Assistants (PDAs)


The PDA is one of the more popular lightweight mobile devices in use today. A PDA provides special functions such as
taking notes and organizing telephone numbers and addresses. Most PDAs also offer a variety of other application software
such as word processing, spreadsheets and games. Some PDAs include electronic books that enable users to read a
book on the PDA's screen.

Many PDAs are web-based and users can send/receive e-mails and access the Internet. Similarly, some PDAs also
provide telephone capabilities.

The primary input device of a PDA is the stylus. A stylus is an electronic pen that looks like a small ballpoint pen. This
input device is used to write notes, which are stored in the PDA, by touching the screen. Some PDAs also support voice input.

2. Cellular Phones
A cellular phone is a web-based telephone having features of both analog and digital devices. It is also referred to as a smart
phone. In addition to basic phone capabilities, a cellular phone also provides functions to send and receive e-mail and
faxes and to access the Internet.

3. H/PC Pro Devices

The H/PC Pro device is a newer development in handheld technology. These systems are larger than PDAs but not
quite as large as typical notebook PCs; their features fall between those of PDAs and notebook PCs. An H/PC
Pro device includes a full-size keyboard, but it does not include a disk drive. These systems also have a small amount of RAM
and a relatively slow processor.

Basic anatomy of your computer


Your computer has a processor chip inside it that does the actual computing. It has internal memory (what
DOS/Windows people call "RAM" and Unix people often call "core"; the Unix term is a folk memory from when RAM
consisted of ferrite-core donuts). The processor and memory live on the motherboard, which is the heart of your
computer.

Your computer has a screen and keyboard. It has hard drives and an optical CD-ROM (or maybe a DVD drive) and
maybe a floppy disk. Some of these devices are run by controller cards that plug into the motherboard and help the
computer drive them; others are run by specialized chipsets directly on the motherboard that fulfill the same function as a
controller card. Your keyboard is too simple to need a separate card; the controller is built into the keyboard chassis
itself.

We'll go into some of the details of how these devices work later. For now, here are a few basic things to keep in mind
about how they work together:

All the parts of your computer inside the case are connected by a bus. Physically, the bus is what you plug your controller
cards into (the video card, the disk controller, a sound card if you have one). The bus is the data highway between your
processor, your screen, your disk, and everything else.

(If you've seen references to ‘ISA’, ‘PCI’, and ‘PCMCIA’ in connection with PCs and have not understood them, these
are bus types. ISA is, except in minor details, the same bus that was used on IBM's original PCs in 1981; it is passing out
of use now. PCI, for Peripheral Component Interconnect, is the bus used on most modern PCs, and on modern
Macintoshes as well. PCMCIA is a variant of ISA with smaller physical connectors used on laptop computers.)

The processor, which makes everything else go, can't actually see any of the other pieces directly; it has to talk to them
over the bus. The only other subsystem that it has really fast, immediate access to is memory (the core). In order for
programs to run, then, they have to be in core (in memory).

When your computer reads a program or data off the disk, what actually happens is that the processor uses the bus to
send a disk read request to your disk controller. Some time later the disk controller uses the bus to signal the processor
that it has read the data and put it in a certain location in memory. The processor can then use the bus to look at that data.

Your keyboard and screen also communicate with the processor via the bus, but in simpler ways. We'll discuss those later
on. For now, you know enough to understand what happens when you turn on your computer.

Computer data storage


Computer data storage, often called storage or memory, refers to computer components, devices, and recording media
that retain digital data used for computing for some interval of time. Computer data storage provides one of the core
functions of the modern computer, that of information retention. It is one of the fundamental components of all modern
computers, and coupled with a central processing unit (CPU, a processor), implements the basic computer model used
since the 1940s.

In contemporary usage, memory usually refers to a form of semiconductor storage known as random-access memory
(RAM) and sometimes other forms of fast but temporary storage. Similarly, storage today more commonly refers to mass
storage — optical discs, forms of magnetic storage like hard disk drives, and other types slower than RAM, but of a more
permanent nature. Historically, memory and storage were respectively called main memory and secondary storage. The
terms internal memory and external memory are also used.

The contemporary distinctions are helpful because they are also fundamental to the architecture of computers in general.
The distinctions also reflect an important technical difference between memory and mass storage devices,
which has been blurred by the historical usage of the term storage. Nevertheless, this section uses the traditional
nomenclature.

Purpose of storage

Many different forms of storage, based on various natural phenomena, have been invented. So far, no practical universal
storage medium exists, and all forms of storage have some drawbacks. Therefore a computer system usually contains
several kinds of storage, each with an individual purpose.
A digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other
form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most
common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer whose
storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For
example, using eight million bits, or about one megabyte, a typical computer could store a short novel.
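To make this concrete, here is a minimal C sketch (purely illustrative; the exact sizes vary from machine to machine) that prints how many bits make up a byte and how many bytes a few common data types occupy:

#include <stdio.h>
#include <limits.h>   /* CHAR_BIT: the number of bits in one byte */

int main(void)
{
    printf("1 byte = %d bits on this machine\n", CHAR_BIT);
    printf("char   : %zu byte(s)\n", sizeof(char));    /* always 1 */
    printf("int    : %zu byte(s)\n", sizeof(int));     /* commonly 4 */
    printf("double : %zu byte(s)\n", sizeof(double));  /* commonly 8 */
    return 0;
}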

Traditionally the most important part of every computer is the central processing unit (CPU, or simply a processor),
because it actually operates on data, performs any calculations, and controls all the other components.

Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately
output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk
calculators or simple digital signal processors. Von Neumann machines differ in that they have a memory in which they
store their operating instructions and data. Such computers are more versatile in that they do not need to have their
hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they
also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to
build up complex procedural results. Most modern computers are von Neumann machines.

Hierarchy of storage

[Figure: the various forms of storage, arranged by their distance from the central processing unit; technology and capacity as in common home computers around 2005.]
The fundamental components of a general-purpose computer are the arithmetic and logic unit, control circuitry, storage space, and input/output devices.

Primary storage
Primary storage (or main memory or internal memory), often referred to simply as memory, is the only one directly
accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data
actively operated on is also stored there in a uniform manner.

Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954,
those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome.
A revolution was started with the invention of the transistor, which soon enabled then-unbelievable
miniaturization of electronic memory via solid-state silicon chip technology.

This led to modern random-access memory (RAM). It is small and light, but quite expensive at the same time. (The
particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.)

Traditionally there are two more sub-layers of primary storage, besides the main large-capacity
RAM:

• Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64
bits). CPU instructions instruct the arithmetic and logic unit to perform various calculations or other operations on
this data (or with the help of it). Registers are technically among the fastest of all forms of computer data storage.
• Processor cache is an intermediate stage between the ultra-fast registers and the much slower main memory. It is
introduced solely to increase the performance of the computer. The most actively used information in main memory is
duplicated in the cache memory, which is faster but of much smaller capacity; compared with processor registers, the
cache is slower but much larger. A multi-level hierarchical cache setup is also commonly used—the
primary cache being the smallest, fastest and located inside the processor, and the secondary cache being somewhat larger
and slower.

Main memory is directly or indirectly connected to the CPU via a memory bus, which is actually two buses: an
address bus and a data bus. The CPU first sends a number, called the memory address, through the address bus;
it indicates the desired location of the data. The CPU then reads or writes the data itself using the data bus.
Additionally, a memory management unit (MMU), a small device between the CPU and RAM, recalculates the actual
memory address, for example to provide an abstraction of virtual memory and other tasks.

As the RAM types used for primary storage are volatile (cleared at start up), a computer containing only such storage
would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage
containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-
volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called
ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of
random access).

Many types of "ROM" are not literally read only, as updates are possible; however, writing is slow and memory must be erased
in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar),
because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they
use large capacities of secondary storage, which is non-volatile as well and not as costly.

Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively,
secondary storage and tertiary storage.[1]
Secondary storage

[Figure: a hard disk drive with its protective cover removed.]

Secondary storage (or external memory) differs from primary storage in that it is not directly accessible by the CPU.
The computer usually uses its input/output channels to access secondary storage and transfers the desired data using
intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down—it is
non-volatile. Per unit, it is typically also an order of magnitude less expensive than primary storage. Consequently,
modern computer systems typically have an order of magnitude more secondary storage than primary storage and data is
kept for a longer time there.

In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of
information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken
to access a given byte of information stored in random access memory is measured in billionths of a second, or
nanoseconds. This illustrates the very significant access-time difference which distinguishes solid-state memory from
rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical
storage devices, such as CD and DVD drives, have even longer access times. With disk drives, once the disk read/write
head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to
access. As a result, in order to hide the initial seek time and rotational latency, data are transferred to and from disks in
large contiguous blocks.

When data reside on disk, block access to hide latency offers a ray of hope in designing efficient external memory
algorithms. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated
paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to
reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and
secondary memory.[2]

Some other examples of secondary storage technologies are: flash memory (e.g. USB flash drives or keys), floppy disks,
magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.

The secondary storage is often formatted according to a file system format, which provides the abstraction necessary to
organize data into files and directories, providing also additional information (called metadata) describing the owner of a
certain file, the access time, the access permissions, and other information.

Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage
capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used
chunks (pages) to secondary storage devices (a swap file or page file), retrieving them later when they are needed. The
more of these retrievals from slower secondary storage are necessary, the more overall system performance is
degraded.
Tertiary storage

[Figure: a large tape library, with tape cartridges placed on shelves in the front and a robotic arm moving in the back; visible height of the library is about 180 cm.]

Tertiary storage or tertiary memory,[3] provides a third level of storage. Typically it involves a robotic mechanism
which will mount (insert) and dismount removable mass storage media into a storage device according to the system's
demands; this data is often copied to secondary storage before use. It is primarily used for archival of rarely accessed
information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1-10 milliseconds). This is primarily
useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries
and optical jukeboxes.

When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine
which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place
it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place
in the library.

Off-line storage

Off-line storage, also known as disconnected storage, is computer data storage on a medium or a device that is not
under the control of a processing unit.[4] The medium is recorded, usually in a secondary or tertiary storage device, and
then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can
access it again. Unlike tertiary storage, it cannot be accessed without human interaction.

Off-line storage is used to transfer information, since the detached medium can be easily physically transported.
Additionally, in case a disaster, for example a fire, destroys the original data, a medium in a remote location will
probably be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is
physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based
attack techniques. Also, if the information stored for archival purposes is accessed seldom or never, off-line storage is
less expensive than tertiary storage.

In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs
and flash memory devices are most popular, and to a much lesser extent removable hard disk drives. In enterprise uses,
magnetic tape is predominant. Older examples are floppy disks, Zip disks, or punched cards.
Characteristics of storage

[Figure: a 1 GB DDR RAM memory module.]

Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics
as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility,
mutability, accessibility, and addressability. For any particular implementation of any storage technology, the
characteristics worth measuring are capacity and performance.

Input/output
In computing, input/output, or I/O, refers to the communication between an information processing system (such as a
computer) and the outside world, possibly a human or another information processing system. Inputs are the signals or
data received by the system, and outputs are the signals or data sent from it. The term can also be used as part of an
action; to "perform I/O" is to perform an input or output operation. I/O devices are used by a person (or other system) to
communicate with a computer. For instance, a keyboard or a mouse may be an input device for a computer, while
monitors and printers are considered output devices for a computer. Devices for communication between computers, such
as modems and network cards, typically serve for both input and output.

Note that the designation of a device as either input or output depends on the perspective. Mice and keyboards take as
input physical movement that the human user outputs and convert it into signals that a computer can understand. The
output from these devices is input for the computer. Similarly, printers and monitors take as input signals that a computer
outputs. They then convert these signals into representations that human users can see or read. (For a human user the
process of reading or seeing these representations is receiving input.)

In computer architecture, the combination of the CPU and main memory (i.e. memory that the CPU can read and write to
directly, with individual instructions) is considered the brain of a computer, and from that point of view any transfer of
information from or to that combination, for example to or from a disk drive, is considered I/O. The CPU and its
supporting circuitry provide memory-mapped I/O that is used in low-level computer programming in the implementation
of device drivers. An I/O algorithm is one designed to exploit locality and perform efficiently when data reside on
secondary storage, such as a disk drive.
Interface

An I/O interface is required whenever an I/O device is driven by the processor. The interface must have the necessary logic to
interpret the device address generated by the processor. Handshaking should be implemented by the interface using
appropriate signals such as BUSY, READY and WAIT, so that the processor can communicate with the I/O device through the
interface. If different data formats are being exchanged, the interface must be able to convert serial data to parallel form
and vice versa. There must also be provision for generating interrupts and the corresponding type numbers for further
processing by the processor, if required.

A computer that uses memory-mapped I/O accesses hardware by reading and writing to specific memory locations, using
the same assembly language instructions that the computer would normally use to access memory.
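As a rough illustration of the idea (the register address used below is entirely hypothetical, and such code is only meaningful on an embedded target with hardware documentation in hand), memory-mapped I/O in C is typically written by treating a fixed address as a pointer to a volatile object:

#include <stdint.h>

/* Hypothetical device status register address -- an assumption for
   illustration only, not a real or portable location. */
#define STATUS_REG_ADDR 0x40021000u

/* 'volatile' tells the compiler that every access really reaches the device. */
#define STATUS_REG (*(volatile uint32_t *)STATUS_REG_ADDR)

/* Busy-wait until the (hypothetical) READY bit, bit 0, becomes 1. */
void wait_until_ready(void)
{
    while ((STATUS_REG & 0x1u) == 0u) {
        /* spin: the device sets the bit when it is ready */
    }
}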

Higher-level implementation

Higher-level operating system and programming facilities employ separate, more abstract I/O concepts and primitives.
For example, most operating systems provide application programs with the concept of files. The C and C++
programming languages, and operating systems in the Unix family, traditionally abstract files and devices as streams,
which can be read or written, or sometimes both. The C standard library provides functions for manipulating streams for
input and output.
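As a small illustration (the file name data.txt is just an example), a C program that writes to a stream and reads it back using the standard library looks like this:

#include <stdio.h>

int main(void)
{
    FILE *out = fopen("data.txt", "w");     /* open a stream for writing */
    if (out == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(out, "hello, stream\n");        /* formatted output to the stream */
    fclose(out);

    FILE *in = fopen("data.txt", "r");      /* reopen the same file for reading */
    if (in == NULL) {
        perror("fopen");
        return 1;
    }
    char line[64];
    if (fgets(line, sizeof line, in) != NULL)
        printf("read back: %s", line);      /* echo the line to standard output */
    fclose(in);
    return 0;
}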

In the context of the ALGOL 68 programming language, the input and output facilities are collectively referred to as
transput. The ALGOL 68 transput library recognizes the following standard files/devices: stand in, stand out, stand
error and stand back.

An alternative to special primitive functions is the I/O monad, which permits programs to just describe I/O, and the
actions are carried out outside the program. This is notable because the I/O functions would introduce side-effects to any
programming language, but now purely functional programming is practical.

Addressing mode

There are many ways through which data can be read or stored in the memory. Each method is an addressing mode, and
has its own advantages and limitations.

There are many types of addressing modes, such as direct addressing, indirect addressing, immediate addressing, indexed
addressing, based addressing, based-indexed addressing, implied addressing, etc.

Direct address

In this type, the address of the data is a part of the instruction itself. When the processor decodes the instruction, it gets the
memory address from which it can read or store the required information.

MOV Reg, [Addr]

Here the Addr operand points to a memory location which holds the data; the instruction copies that data into the specified register.

Indirect address

Here the address is stored in a register, and the instruction specifies the register which holds the address. To fetch the
data, the instruction must be decoded and the appropriate register selected. The contents of the register are treated as the
address; using this address, the appropriate memory location is selected and the data is read or written.
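The two modes can be loosely mimicked in C (this is only an analogy in a high-level language, not actual addressing-mode syntax): reading a named variable is like a direct address, while holding the address in a pointer and dereferencing it is like an indirect address.

#include <stdio.h>

int main(void)
{
    int data = 42;

    /* "Direct": the location of 'data' is fixed by the compiler/linker,
       and the value is read from it straight away. */
    int direct_copy = data;

    /* "Indirect": the address is first placed in a pointer (conceptually,
       a register), and the value is then fetched through that pointer. */
    int *addr = &data;
    int indirect_copy = *addr;

    printf("%d %d\n", direct_copy, indirect_copy);   /* prints: 42 42 */
    return 0;
}
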
The Binary System
A pretty damn clear guide to a quite confusing concept by Christine R. Wright with some help from Samuel A. Rebelsky.

Table of Contents

• Basic Concepts Behind the Binary System


• Binary Addition
• Binary Multiplication
• Binary Division
• Conversion from Decimal to Binary
• Negation in the Binary System

Basic Concepts Behind the Binary System

To understand binary numbers, begin by recalling elementary school math. When we first learned about numbers, we
were taught that, in the decimal system, things are organized into columns:

H | T | O
1 | 9 | 3
such that "H" is the hundreds column, "T" is the tens column, and "O" is the ones column. So the number "193" is 1-
hundreds plus 9-tens plus 3-ones.

Years later, we learned that the ones column meant 10^0, the tens column meant 10^1, the hundreds column 10^2 and so
on, such that

10^2|10^1|10^0
1 | 9 | 3
the number 193 is really {(1*10^2)+(9*10^1)+(3*10^0)}.

As you know, the decimal system uses the digits 0-9 to represent numbers. If we wanted to put a larger number in
column 10^n (e.g., 10), we would have to multiply 10*10^n, which would give 10^(n+1), and be carried a column to the
left. For example, putting ten in the 10^0 column is impossible, so we put a 1 in the 10^1 column, and a 0 in the 10^0
column, thus using two columns. Twelve would be 12*10^0, or 10^0(10+2), or 10^1+2*10^0, which also uses an
additional column to the left (12).

The binary system works under the exact same principles as the decimal system, only it operates in base 2 rather than
base 10. In other words, instead of columns being

10^2|10^1|10^0
they are
2^2|2^1|2^0

Instead of using the digits 0-9, we only use 0-1 (again, if we used anything larger it would be like multiplying 2*2^n and
getting 2^(n+1), which would not fit in the 2^n column; it would shift you one column to the left). For example,
"3" in binary cannot be put into one column. The first column we fill is the right-most column, which is 2^0, or 1. Since
3>1, we need to use an extra column to the left, and indicate it as "11" in binary ((1*2^1) + (1*2^0)).
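The column-by-column idea translates directly into code. The following C sketch (a simple illustration; it stops at the first character that is not a binary digit and does no further error checking) reads a string of 0s and 1s and accumulates its decimal value:

#include <stdio.h>

/* Convert a string of '0'/'1' characters to its decimal value. */
unsigned long binary_to_decimal(const char *bits)
{
    unsigned long value = 0;
    for (const char *p = bits; *p == '0' || *p == '1'; p++)
        value = value * 2 + (unsigned long)(*p - '0');  /* shift columns left, add the new digit */
    return value;
}

int main(void)
{
    printf("%lu\n", binary_to_decimal("11"));     /* prints 3  */
    printf("%lu\n", binary_to_decimal("1011"));   /* prints 11 */
    return 0;
}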

Examples: What would the binary number 1011 be in decimal notation?


Try converting these numbers from binary to decimal:

• 10
• 111
• 10101
• 11110

Remember:
2^4 | 2^3 | 2^2 | 2^1 | 2^0
    |     |     |  1  |  0
    |     |  1  |  1  |  1
 1  |  0  |  1  |  0  |  1
 1  |  1  |  1  |  1  |  0

Binary Addition

Consider the addition of decimal numbers:

23
+48
___

We begin by adding 3+8=11. Since 11 is greater than 10, a one is put into the 10's column (carried), and a 1 is recorded
in the one's column of the sum. Next, add {(2+4) +1} (the one is from the carry)=7, which is put in the 10's column of the
sum. Thus, the answer is 71.

Binary addition works on the same principle, but the numerals are different. Begin with one-bit binary addition:

 0      0      1
+0     +1     +0
--     --     --
 0      1      1

1+1 carries us into the next column. In decimal form, 1+1=2. In binary, any digit higher than 1 puts us a column to the
left (as would 10 in decimal notation). The decimal number "2" is written in binary notation as "10" (1*2^1)+(0*2^0).
Record the 0 in the ones column, and carry the 1 to the twos column to get an answer of "10." In our vertical notation,

1
+1
___
10

The process is the same for multiple-bit binary numbers:

1010
+1111
______

• Step one:
Column 2^0: 0+1=1.
Record the 1.
Temporary Result: 1; Carry: 0
• Step two:
Column 2^1: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 01; Carry: 1
• Step three:
Column 2^2: 1+0=1 Add 1 from carry: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 001; Carry: 1
• Step four:
Column 2^3: 1+1=10. Add 1 from carry: 10+1=11.
Record the 11.
Final result: 11001

Alternately:

11 (carry)
1010
+1111
______
11001

Always remember

• 0+0=0
• 1+0=1
• 1+1=10
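These three rules are all a program needs. The C sketch below (a minimal illustration that assumes both inputs are valid binary strings of the same length) adds two binary numbers digit by digit, right to left, carrying exactly as described above:

#include <stdio.h>
#include <string.h>

/* Add two equal-length binary strings a and b; result must have room
   for strlen(a)+2 characters. A leading '0' appears when there is no
   final carry. */
void add_binary(const char *a, const char *b, char *result)
{
    int len = (int)strlen(a);
    int carry = 0;
    result[len + 1] = '\0';
    for (int i = len - 1; i >= 0; i--) {
        int sum = (a[i] - '0') + (b[i] - '0') + carry;  /* 0, 1, 2 or 3 */
        result[i + 1] = (char)('0' + sum % 2);          /* digit to record */
        carry = sum / 2;                                /* carry to the next column */
    }
    result[0] = (char)('0' + carry);                    /* final carry, if any */
}

int main(void)
{
    char out[16];
    add_binary("1010", "1111", out);
    printf("%s\n", out);   /* prints 11001 */
    return 0;
}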

Try a few examples of binary addition:

 111    101    111
+110   +111   +111
____   ____   ____

Binary Multiplication

Multiplication in the binary system works the same way as in the decimal system:

• 1*1=1
• 1*0=0
• 0*1=0

101
* 11
____
101
1010
_____
1111

Note that multiplying by two is extremely easy. To multiply by two, just add a 0 on the end.
Binary Division

Follow the same rules as in decimal division. For the sake of simplicity, throw away the remainder.

For example: 111011 / 11

       10011 r 10
      ______
 11 ) 111011
     -11
     ----
        101
        -11
        ----
         101
         -11
         ----
          10
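In C, these two facts correspond to the shift operators: x << 1 multiplies x by two (appends a 0 in binary), and x >> 1 divides by two, throwing the remainder away, just as above. A short sketch:

#include <stdio.h>

int main(void)
{
    unsigned int x = 5;            /* binary 101 */
    printf("%u\n", x << 1);        /* 1010   = 10 : multiply by two   */
    printf("%u\n", x << 3);        /* 101000 = 40 : multiply by 2^3   */
    printf("%u\n", 59u >> 1);      /* 111011 >> 1 = 11101 = 29        */
    printf("%u\n", 59u / 3u);      /* 111011 / 11 = 10011 = 19, remainder discarded */
    return 0;
}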

Decimal to Binary

Converting from decimal to binary notation is slightly more difficult conceptually, but can easily be done once you know
how, through the use of algorithms. Begin by thinking of a few examples. We can easily see that the number 3 = 2+1, and
that this is equivalent to (1*2^1)+(1*2^0). This translates into putting a "1" in the 2^1 column and a "1" in the 2^0
column, to get "11". Almost as intuitive is the number 5: it is obviously 4+1, which is the same as saying [(2*2)+1], or
2^2+1. This can also be written as [(1*2^2)+(1*2^0)]. Looking at this in columns,

2^2 | 2^1 | 2^0
 1  |  0  |  1

or 101.

What we're doing here is finding the largest power of two within the number (2^2=4 is the largest power of 2 in 5),
subtracting that from the number (5-4=1), and finding the largest power of 2 in the remainder (2^0=1 is the largest power
of 2 in 1). Then we just put this into columns. This process continues until we have a remainder of 0. Let's take a look at
how it works. We know that:

2^0=1
2^1=2
2^2=4
2^3=8
2^4=16
2^5=32
2^6=64
2^7=128
and so on. To convert the decimal number 75 to binary, we would find the largest power of 2 less than 75, which is 64.
Thus, we would put a 1 in the 2^6 column, and subtract 64 from 75, giving us 11. The largest power of 2 in 11 is 8, or
2^3. Put 1 in the 2^3 column, and 0 in 2^4 and 2^5. Subtract 8 from 11 to get 3. Put 1 in the 2^1 column, 0 in 2^2, and
subtract 2 from 3. We're left with 1, which goes in 2^0, and we subtract one to get zero. Thus, our number is 1001011.

Making this algorithm a bit more formal gives us:

1. Let D=number we wish to convert from decimal to binary


2. Repeat until D=0
o a. Find the largest power of two in D. Let this equal P.
o b. Put a 1 in binary column P.
o c. Subtract P from D.
3. Put zeros in all columns which don't have ones.

This algorithm is a bit awkward, particularly step 3, "filling in the zeros." Therefore, we should rewrite it so that we
ascertain the value of each column individually, putting in 0's and 1's as we go:

1. Let D= the number we wish to convert from decimal to binary


2. Find P, such that 2^P is the largest power of two smaller than D.
3. Repeat until P<0
o If 2^P<=D then
 put 1 into column P
 subtract 2^P from D
o Else
 put 0 into column P
o End if
o Subtract 1 from P

Now that we have an algorithm, we can use it to convert numbers from decimal to binary relatively painlessly. Let's try
the number D=55.

• Our first step is to find P. We know that 2^4=16, 2^5=32, and 2^6=64. Therefore, P=5.
• 2^5<=55, so we put a 1 in the 2^5 column: 1-----.
• Subtracting 55-32 leaves us with 23. Subtracting 1 from P gives us 4.
• Following step 3 again, 2^4<=23, so we put a 1 in the 2^4 column: 11----.
• Next, subtract 16 from 23, to get 7. Subtracting 1 from P gives us 3.
• 2^3>7, so we put a 0 in the 2^3 column: 110---
• Next, subtract 1 from P, which gives us 2.
• 2^2<=7, so we put a 1 in the 2^2 column: 1101--
• Subtract 4 from 7 to get 3. Subtract 1 from P to get 1.
• 2^1<=3, so we put a 1 in the 2^1 column: 11011-
• Subtract 2 from 3 to get 1. Subtract 1 from P to get 0.
• 2^0<=1, so we put a 1 in the 2^0 column: 110111
• Subtract 1 from 1 to get 0. Subtract 1 from P to get -1.
• P is now less than zero, so we stop.
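A direct C sketch of this first algorithm (finding P and then testing each column from left to right; it assumes D is greater than zero and fits comfortably in an unsigned int) could look like this:

#include <stdio.h>

/* Print the binary form of d using the "largest power of two" method. */
void to_binary_by_powers(unsigned int d)
{
    int p = 0;
    while ((1u << (p + 1)) <= d)    /* find P: largest power of two <= d */
        p++;
    for (; p >= 0; p--) {           /* repeat until P < 0 */
        if ((1u << p) <= d) {
            putchar('1');           /* put a 1 in column P ...           */
            d -= 1u << p;           /* ... and subtract 2^P from D       */
        } else {
            putchar('0');           /* otherwise put a 0 in column P     */
        }
    }
    putchar('\n');
}

int main(void)
{
    to_binary_by_powers(55);    /* prints 110111  */
    to_binary_by_powers(75);    /* prints 1001011 */
    return 0;
}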

Another algorithm for converting decimal to binary

However, this is not the only approach possible. We can start at the right, rather than the left.

All binary numbers are in the form

a[n]*2^n + a[n-1]*2^(n-1)+...+a[1]*2^1 + a[0]*2^0


where each a[i] is either a 1 or a 0 (the only possible digits for the binary system). The only way a number can be odd is
if it has a 1 in the 2^0 column, because all powers of two greater than 0 are even numbers (2, 4, 8, 16...). This gives us
the rightmost digit as a starting point.

Now we need to do the remaining digits. One idea is to "shift" them. It is also easy to see that multiplying and dividing
by 2 shifts everything by one column: two in binary is 10, or (1*2^1). Dividing (1*2^1) by 2 gives us (1*2^0), or just a 1
in binary. Similarly, multiplying by 2 shifts in the other direction: (1*2^1)*2=(1*2^2) or 10 in binary. Therefore

{a[n]*2^n + a[n-1]*2^(n-1) + ... + a[1]*2^1 + a[0]*2^0}/2


is equal to

a[n]*2^(n-1) + a[n-1]*2^(n-2) + ... + a[1]2^0

Let's look at how this can help us convert from decimal to binary. Take the number 163. We know that since it is odd,
there must be a 1 in the 2^0 column (a[0]=1). We also know that it equals 162+1. If we put the 1 in the 2^0 column, we
have 162 left, and have to decide how to translate the remaining digits.

Two's column: Dividing 162 by 2 gives 81. The number 81 in binary would also have a 1 in the 2^0 column. Since we
divided the number by two, we "took out" one power of two. Similarly, the statement a[n-1]*2^(n-1) + a[n-2]*2^(n-2)
+ ... + a[1]*2^0 has a power of two removed. Our "new" 2^0 column now contains a[1]. We learned earlier that there is a 1
in the 2^0 column if the number is odd. Since 81 is odd, a[1]=1. Practically, we can simply keep a "running total", which
now stands at 11 (a[1]=1 and a[0]=1). Also note that a[1] is essentially "remultiplied" by two just by putting it in front of
a[0], so it is automatically fit into the correct column.

Four's column: Now we can subtract 1 from 81 to see what remainder we still must place (80). Dividing 80 by 2 gives 40.
Therefore, there must be a 0 in the 4's column (because what we are actually placing is a 2^0 column, and the number is
not odd).

Eight's column: We can divide by two again to get 20. This is even, so we put a 0 in the 8's column. Our running total
now stands at a[3]=0, a[2]=0, a[1]=1, and a[0]=1.

We can continue in this manner until there is no remainder to place.

Let's formalize this algorithm:


1. Let D= the number we wish to convert from decimal to binary.
2. Repeat until D=0:
a) If D is odd, put "1" in the leftmost open column, and subtract 1 from D.
b) If D is even, put "0" in the leftmost open column.
c) Divide D by 2.
End Repeat
For the number 163, this works as follows:
1. Let D=163
2. a) D is odd, put a 1 in the 2^0 column.
Subtract 1 from D to get 162.
c) Divide D=162 by 2.
Temporary Result: 1 New D=81
D does not equal 0, so we repeat step 2.

2. a) D is odd, put a 1 in the 2^1 column.


Subtract 1 from D to get 80.
c) Divide D=80 by 2.
Temporary Result: 11 New D=40
D does not equal 0, so we repeat step 2.

2. b) D is even, put a 0 in the 2^2 column.


c) Divide D by 2.
Temporary Result:011 New D=20

2. b) D is even, put a 0 in the 2^3 column.


c) Divide D by 2.
Temporary Result: 0011 New D=10

2. b) D is even, put a 0 in the 2^4 column.


c) Divide D by 2.
Temporary Result: 00011 New D=5

2. a) D is odd, put a 1 in the 2^5 column.


Subtract 1 from D to get 4.
c) Divide D by 2.
Temporary Result: 100011 New D=2
2. b) D is even, put a 0 in the 2^6 column.
c) Divide D by 2.
Temporary Result: 0100011 New D=1

2. a) D is odd, put a 1 in the 2^7 column.


Subtract 1 from D to get D=0.
c) Divide D by 2.
Temporary Result: 10100011 New D=0

D=0, so we are done, and the decimal number 163 is equivalent to the binary number 10100011.

Since we already knew how to convert from binary to decimal, we can easily verify our result.
10100011=(1*2^0)+(1*2^1)+(1*2^5)+(1*2^7)=1+2+32+128= 163.
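The right-to-left algorithm is even shorter in C: keep checking whether D is odd, record the digit, and halve D until it reaches zero. Digits come out least significant first, so this sketch stores them and prints them in reverse:

#include <stdio.h>

/* Convert d to binary by repeatedly checking odd/even and halving. */
void to_binary_by_halving(unsigned int d)
{
    char digits[32];
    int n = 0;
    if (d == 0) {
        puts("0");
        return;
    }
    while (d != 0) {
        digits[n++] = (d % 2) ? '1' : '0';  /* odd -> 1, even -> 0 */
        d /= 2;                             /* shift one column to the right */
    }
    while (n-- > 0)
        putchar(digits[n]);                 /* most significant digit first */
    putchar('\n');
}

int main(void)
{
    to_binary_by_halving(163);   /* prints 10100011 */
    return 0;
}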

Negation in the Binary System

These techniques work well for non-negative integers, but how do we indicate negative numbers in the binary system?

Before we investigate negative numbers, we note that the computer uses a fixed number of "bits" or binary digits. An 8-
bit number is 8 digits long. For this section, we will work with 8 bits.

Signed Magnitude:

The simplest way to indicate negation is signed magnitude. In signed magnitude, the left-most bit is not actually part of
the number, but is just the equivalent of a +/- sign. "0" indicates that the number is positive, "1" indicates negative. In 8
bits, 00001100 would be 12 (break this down into (1*2^3) + (1*2^2) ). To indicate -12, we would simply put a "1" rather
than a "0" as the first bit: 10001100.

One's Complement:

In one's complement, positive numbers are represented as usual in regular binary. However, negative numbers are
represented differently. To negate a number, replace all zeros with ones, and ones with zeros - flip the bits. Thus, 12
would be 00001100, and -12 would be 11110011. As in signed magnitude, the leftmost bit indicates the sign (1 is
negative, 0 is positive). To compute the value of a negative number, flip the bits and translate as before.

Two's Complement:

Begin with the number in one's complement. Add 1 if the number is negative. Twelve would be represented as 00001100,
and -12 as 11110100. To verify this, let's subtract 1 from 11110100, to get 11110011. If we flip the bits, we get
00001100, or 12 in decimal.

Excess 2^(m-1):

In this notation, "m" indicates the total number of bits. For us (working with 8 bits), it would be excess 2^7. To represent
a number (positive or negative) in excess 2^7, begin by taking the number in regular binary representation and then add 2^7
(=128) to it. For example, 7 would be 128+7=135, or 2^7+2^2+2^1+2^0, and, in binary, 10000111. We would
represent -7 as 128-7=121, and, in binary, 01111001.

Note:

• Unless you know which representation has been used, you cannot figure out the value of a number.
• A number in excess 2^(m-1) is the same as that number in two's complement with the leftmost bit flipped.
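A small C sketch (assuming, as on essentially all current machines, that signed integers are stored in two's complement and that a byte has 8 bits) makes the 8-bit patterns visible and confirms the "flip the bits and add one" rule:

#include <stdio.h>
#include <stdint.h>

/* Print the 8 bits of an unsigned byte, most significant bit first. */
void print_bits(uint8_t v)
{
    for (int i = 7; i >= 0; i--)
        putchar(((v >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    int8_t pos = 12;
    int8_t neg = -12;

    print_bits((uint8_t)pos);                     /* 00001100 */
    print_bits((uint8_t)neg);                     /* 11110100 : two's complement */
    print_bits((uint8_t)(~(uint8_t)pos + 1u));    /* 11110100 : flip bits, add 1 */
    return 0;
}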

To see the advantages and disadvantages of each method, let's try working with them.

Using the regular algorithm for binary addition, add (5+12), (-5+12), (-12+-5), and (12+-12) in each system. Then convert
back to decimal numbers.
Answers
What would the binary number 1011 be in decimal notation?
1011=(1*2^3)+(0*2^2)+(1*2^1)+(1*2^0)
= (1*8) + (0*4) + (1*2) + (1*1)
= 11 (in decimal notation)

Try converting these numbers from binary to decimal:


10=(1*2^1) + (0*2^0) = 2+0 = 2
111 = (1*2^2) + (1*2^1) + (1*2^0) = 4+2+1=7
10101= (1*2^4) + (0*2^3) + (1*2^2) + (0*2^1) + (1*2^0)=16+0+4+0+1=21
11110= (1*2^4) + (1*2^3) + (1*2^2) + (1*2^1) + (0*2^0)=16+8+4+2+0=30

Try a few examples of binary addition:

111 + 110:
  2^0: 1+0=1, record 1.                     Result: 1
  2^1: 1+1=10, record 0, carry 1.           Result: 01
  2^2: 1+1+1(carry)=11, record 11.          Result: 1101

101 + 111:
  2^0: 1+1=10, record 0, carry 1.           Result: 0
  2^1: 0+1+1(carry)=10, record 0, carry 1.  Result: 00
  2^2: 1+1+1(carry)=11, record 11.          Result: 1100

111 + 111:
  2^0: 1+1=10, record 0, carry 1.           Result: 0
  2^1: 1+1+1(carry)=11, record 1, carry 1.  Result: 10
  2^2: 1+1+1(carry)=11, record 11.          Result: 1110
Using the regular algorithm for binary addition, add (5+12), (-5+12), (-12+-5), and (12+-12) in each system. Then convert
back to decimal numbers.
Signed Magnitude:

  5+12        -5+12       -12+-5      12+-12

 00000101    10000101    10001100    00001100
+00001100   +00001100   +10000101   +10001100
_________   _________   _________   _________
 00010001    10010001    00010001    10011000

    17          -17          17         -24

One's Complement:

 00000101    11111010    11110011    00001100
+00001100   +00001100   +11111010   +11110011
_________   _________   _________   _________
 00010001    00000110    11101101    11111111

    17           6          -18          0

Two's Complement:

 00000101    11111011    11110100    00001100
+00001100   +00001100   +11111011   +11110100
_________   _________   _________   _________
 00010001    00000111    11101111    00000000

    17           7          -17          0

Excess 2^7:

 10000101    01111011    01110100    10001100
+10001100   +10001100   +01111011   +01110100
_________   _________   _________   _________
 00010001    00000111    11101111    00000000

   -111        -121         111        -128

Assembly language
Assembly languages are a family of low-level languages for programming computers, microprocessors,
microcontrollers, and other (usually) integrated circuits. They implement a symbolic representation of the numeric
machine codes and other constants needed to program a particular CPU architecture. This representation is usually
defined by the hardware manufacturer, and is based on abbreviations (called mnemonics) that help the programmer
remember individual instructions, registers, etc. An assembly language is thus specific to a certain physical or virtual
computer architecture (as opposed to most high-level languages, which are usually portable).

A utility program called an assembler is used to translate assembly language statements into the target computer's
machine code. The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic
statements into machine instructions and data. This is in contrast with high-level languages, in which a single statement
generally results in many machine instructions.

Many sophisticated assemblers offer additional mechanisms to facilitate program development, control the assembly
process, and aid debugging. In particular, most modern assemblers include a macro facility (described below), and are
called macro assemblers.

Assembler

Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by
resolving symbolic names for memory locations and other entities.[1] The use of symbolic references is a key feature of
assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also
include macro facilities for performing textual substitution—e.g., to generate common short sequences of instructions as
inline, instead of called subroutines, or even generate entire programs or program suites.

Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the
1950s. Modern assemblers, especially for RISC based architectures, such as MIPS, Sun SPARC, and HP PA-RISC, as
well as x86(-64), optimize instruction scheduling to exploit the CPU pipeline efficiently.

There are two types of assemblers based on how many passes through the source are needed to produce the executable
program.

• One-pass assemblers go through the source code once and assume that all symbols will be defined before any
instruction that references them.
• Two-pass assemblers (and multi-pass assemblers) create a table with all unresolved symbols in the first pass, then
use the 2nd pass to resolve these addresses. The advantage of a one-pass assembler is speed, which is not as
important as it once was with advances in computer speed and capabilities. The advantage of the two-pass
assembler is that symbols can be defined anywhere in the program source. As a result, the program can be defined
in a more logical and meaningful way. This makes two-pass assembler programs easier to read and maintain.[2]

More sophisticated high-level assemblers provide language abstractions such as:


• Advanced control structures
• High-level procedure/function declarations and invocations
• High-level abstract data types, including structures/records, unions, classes, and sets
• Sophisticated macro processing (although available on ordinary assemblers since late 1960s for IBM/360,
amongst other machines)
• Object-Oriented features such as encapsulation, polymorphism, inheritance, interfaces

See Language design below for more details.

Note that, in normal professional usage, the term assembler is often used ambiguously: it is frequently used to refer to an
assembly language itself, rather than to the assembler utility. Thus: "CP/CMS was written in S/360 assembler" as
opposed to "ASM-H was a widely-used S/370 assembler."

Assembly language

A program written in assembly language consists of a series of instructions--mnemonics that correspond to a stream of
executable instructions which, when translated by an assembler, can be loaded into memory and executed.

For example, an x86/IA-32 processor can execute the binary instruction 10110000 01100001, which in x86 assembly
language is written as the mnemonic statement MOV AL, 61h -- that is, move the hexadecimal value 61 into the AL register
(see x86 assembly language).

High-level language

A programming language such as C, FORTRAN, or Pascal that enables a programmer to write programs that are more or
less independent of a particular type of computer. Such languages are considered high-level because they are closer to
human languages and further from machine languages. In contrast, assembly languages are considered low-level because
they are very close to machine languages.

The main advantage of high-level languages over low-level languages is that they are easier to read, write, and maintain.
Ultimately, programs written in a high-level language must be translated into machine language by a compiler or
interpreter.

The first high-level programming languages were designed in the 1950s. Now there are dozens of different languages,
including Ada, Algol, BASIC, COBOL, C, C++, FORTRAN, LISP, Pascal, and Prolog.

Compiler
A compiler is a computer program (or set of programs) that transforms source code written in a computer language (the
source language) into another computer language (the target language, often having a binary form known as object
code). The most common reason for wanting to transform source code is to create an executable program.

The name "compiler" is primarily used for programs that translate source code from a high-level programming language
to a lower level language (e.g., assembly language or machine code). A program that translates from a low level language
to a higher level one is a decompiler. A program that translates between high-level languages is usually called a language
translator, source to source translator, or language converter. A language rewriter is usually a program that translates
the form of expressions without a change of language.

A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic
analysis, code generation, and code optimization.

Program faults caused by incorrect compiler behavior can be very difficult to track down and work around, so compiler
implementors invest a lot of time ensuring the correctness of their software.
