
CHAPTER 1: COMPUTER - AN INTRODUCTION

Computer: a machine that performs tasks, such as mathematical calculations or electronic communication, under the control of a set of instructions called a program. Programs usually reside within the computer and are retrieved and processed by the computer's electronics, and the program results are stored or routed to output devices, such as video display monitors or printers. Computers perform a wide variety of activities with reliability, accuracy, and speed.

People use computers in many ways. In business, computers track inventories with bar codes and scanners, check the credit status of customers, and transfer funds electronically. In homes, tiny computers embedded in the electronic circuitry of most appliances control the indoor temperature, operate home security systems, tell the time, and turn videocassette recorders on and off. Computers in automobiles regulate the flow of fuel, thereby increasing gas mileage. Computers also entertain, creating digitized sound on stereo systems or computer-animated features from a digitally encoded laser disc.

Computer programs, or applications, exist to aid every level of education, from programs that teach simple addition or sentence construction to programs that teach advanced calculus. Educators use computers to track grades and prepare notes; with computer-controlled projection units, they can add graphics, sound, and animation to their lectures. Computers are used extensively in scientific research to solve mathematical problems, display complicated data, or model systems that are too costly or impractical to build, such as testing the air flow around the next generation of space shuttles. The military employs computers in sophisticated communications to encode and unscramble messages, and to keep track of personnel and supplies.

CHARACTERISTICS OF A COMPUTER

The characteristics of a computer are high speed of operation, accuracy, reliability, flexibility and economy, coupled with efficiency in storing and processing data.

SPEED: The speed of a computer is virtually instantaneous. It is measured in terms of microseconds (10^-6 second), nanoseconds (10^-9 second) and picoseconds (10^-12 second).

STORAGE: One of the prime characteristics of a computer is its ability to store information which it can access very quickly.

ACCURACY: Computers are very accurate. They seldom make mistakes, though they do occasionally break down.

VERSATILITY: For all practical purposes, computers can perform any task that can be reduced to a series of logical steps.

AUTOMATION: Once a program is in the computer's memory, the individual instructions are transferred, one after the other, to the control unit for execution. No human intervention is required until the control unit meets the last instruction, which tells the computer to stop program execution.

DILIGENCE: A computer can work endlessly for hours, without suffering from the human traits of tiredness, boredom and lack of concentration.

APPLICATIONS OF COMPUTERS
A computer can be employed for a wide variety of purposes. Computers are used to assist humans in business organizations, in research and in many other walks of life. Some of the areas of computer application are:

Payroll: This involves recording employee details, such as name, gross pay or rate of pay, tax code, national insurance number, etc. These data serve as the input for a program that contains the formulae for calculating all the deductions and allowances to arrive at net pay. The program finally contains instructions to print the payslips for all employees.

Office Automation: Office automation can be defined as the application of today's technology to current office practice.

Word Processing: Word processors are particularly useful for preparing reports and producing standard letters and documents. A word processor is a program that also allows you to check the spelling of your document's contents.

Desktop Publishing: A DTP system is built around three components: a microcomputer, page makeup software and a laser printer. It allows a user to set up, view and change page layout as many times as is necessary to achieve the right design.

An Aid to Management: The computer can also be used as a management tool to assist in solving business problems.

Banking: In most instances, the computer is sited centrally. Branches are equipped with terminals, giving them an on-line accounting facility and enabling them to access information on such things as current balances, deposits, overdrafts and interest charges.

Industrial Applications: In industry, production may be planned, coordinated and controlled with the aid of a computer. The computer may be used to operate assembly machines which piece together parts of equipment, e.g. sections of a motorcar.

Engineering Design: Computers help in checking that all the parts of a proposed design (be it that of an airplane, car, bridge, road or building) are satisfactory, and also assist in the designing itself.

Meteorology: Data is recorded at different levels of the atmosphere, at different places, using remote sensors carried on a satellite. All this data is transmitted to meteorology centres, where the computer system analyses the data for accurate prediction of the weather.

Air Travel: Small computers are installed as part of the plane's equipment. These computers are programmed to continuously analyse data relayed from various instruments, so as to provide coordinated information to the pilot in time for human decision and action.

Road Traffic Control: Computers assist with the control of traffic lights.

Telephones: Computerized telephone exchanges handle an ever increasing volume of calls very efficiently. By way of satellites, calls can be transmitted at faster speeds.

Medicine: Computers are widely used in hospital administration for such tasks as maintaining stocks of drugs, surgical equipment and linen, for payroll, for hospital accounting and for bed allocation.

DATA PROCESSING CYCLE


Since the computer is a data processing system, it is important to have an in-depth look at what is involved in processing data. But before doing that, we need to define some of the terms associated with it.

Data: A raw set of facts and figures associated with an individual, an entity or an event. Data can be represented in various forms, i.e. in figures, characters, symbols, etc.

Instruction: Specifies how the data is to be manipulated.

Process: The actual interpretation and execution of the instructions, which is carried out by the microprocessor (this will be explained in chapter 2).

Output: The result obtained from the processor.

Information: Output which is meaningful is termed information.

Data Processing Cycle, or Input-Process-Output Cycle: The cycle begins with data and instructions being input to the computer and stored. The data is then processed as per the instructions provided, in a logical sequence, one by one, giving an outcome commonly called an OUTPUT. This output can in turn act as data, i.e. input, to a second process, and so on.

(Figure 1.1)

Processing could take one of the following forms:
Data Processing
Text Processing
Graphical Processing
Arithmetical Processing
Decision-making, etc.
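The input-process-output cycle described above can be sketched in a few lines of code. This is Python used purely for illustration; the function names are invented for this sketch and are not part of any standard library.

```python
def read_input():
    # Input step: raw data enters the system (hard-coded here for the sketch).
    return [3, 7, 12]

def process(data):
    # Process step: instructions manipulate the data (here, doubling each value).
    return [x * 2 for x in data]

def write_output(result):
    # Output step: meaningful results (information) leave the system.
    return result

data = read_input()
result = write_output(process(data))
print(result)  # this output could in turn serve as input to a further process
```

Note how the output of one cycle is an ordinary value that can be fed straight back in as the data for the next cycle, which is exactly the chaining described above.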

HOW A COMPUTER REALLY WORKS


The physical computer and its components are known as hardware. Computer hardware includes the memory that stores data and instructions; the central processing unit (CPU) that carries out instructions; the bus that connects the various computer components; the input devices, such as a keyboard or mouse, that allow the user to communicate with the computer; and the output devices, such as printers and video display monitors, that enable the computer to present information to the user. The programs that run the computer are called software. Software generally is designed to perform a particular type of task: for example, to control the arm of a robot to weld a car's body, to write a letter, to draw a graph, or to direct the general operation of the computer.

The main part of the computer, which processes the data and provides the output, is the Central Processing Unit, commonly known as the CPU. It is the heart of the computer and the most important component of a computer's hardware. The CPU further consists of other units:

Arithmetic and Logic Unit (ALU): This unit performs arithmetical functions such as addition, subtraction, multiplication and division, and logical operations such as: "Is A = B?" (Is A equal to B?), "Is a given character equal to M (for male) or F (for female)?"

Control Unit (CU): The control unit interprets and controls all the activities being carried out within the computer.

Memory: This unit stores all the data and information required by the various units to perform their functions.

(Figure 1.2)

A. The Operating System

When a computer is turned on it searches for instructions in its memory. Usually, the first set of these instructions is a special program called the operating system, which is the software that makes the computer work. It prompts the user (or other machines) for input and commands, reports the results of these commands and other operations, stores and manages data, and controls the sequence of the software and hardware actions. When the user requests that a program run, the operating system loads the program into the computer's memory and runs the program. Popular operating systems, such as Microsoft Windows and the Macintosh system (Mac OS), have a graphical user interface (GUI), that is, a display that uses tiny pictures, or icons, to represent various commands. To execute these commands, the user clicks the mouse on the icon or presses a combination of keys on the keyboard.

B. Computer Memory

To process information electronically, data are stored in a computer in the form of binary digits, or bits, each having two possible representations (0 or 1). If a second bit is added to a single bit of information, the number of representations is doubled, resulting in four possible combinations: 00, 01, 10, or 11. A third bit added to this two-bit representation again doubles the number of combinations, resulting in eight possibilities: 000, 001, 010, 011, 100, 101, 110, or 111. Each time a bit is added, the number of possible patterns is doubled. Eight bits is called a byte; a byte has 256 possible combinations of 0s and 1s. A byte is a useful quantity in which to store information because it provides enough possible patterns to represent the entire alphabet, in lower and upper cases, as well as numeric digits, punctuation marks, and several character-sized graphics symbols, including non-English characters such as ñ. A byte also can be interpreted as a pattern that represents a number between 0 and 255.
A kilobyte (1,024 bytes) can store about 1,000 characters; a megabyte can store about 1 million characters; a gigabyte can store about 1 billion characters; and a terabyte can store about 1 trillion characters.

We provide data in the form of numbers, characters and symbols, but the computer cannot recognize these symbols directly. So how does it process your data and give you the output? All the data you provide is converted into a format that the computer can recognize. Computers operate using the binary system, a system based on 0s and 1s, representing switches or electrical currents that can be on or off, 1 or 0. The smallest and most fundamental unit of computer data is the bit (binary digit). Take the decimal number 27 as an example: in binary form it is 11011. Therefore, if you imagine binary digits as electrical switches, it is like the following:
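The claims above are easy to verify with a short sketch (Python here, purely for illustration): each added bit doubles the number of possible patterns, and the built-in bin() function shows the binary form of an integer.

```python
# Each added bit doubles the number of possible patterns: 2**n for n bits.
for n in (1, 2, 3, 8):
    print(n, "bits ->", 2 ** n, "possible patterns")

# A byte (8 bits) therefore has 256 combinations, representing 0 to 255.
# Python's bin() confirms that decimal 27 is 11011 in binary.
print(bin(27))  # prints 0b11011
```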

(Figure 1.3)

Converting DECIMAL numbers to BINARY

There are many methods to convert a decimal number to binary format. The most common method is to repeatedly divide the number by 2 and then arrange the remainders from bottom to top. Let's see how.

Example: the decimal number (13)10

So (13)10 is represented as (1101)2 in binary format. Let's see another example: (112)10, which works out to (1110000)2.

Exercise: Convert these numbers to binary format:
(1) (39)10
(2) (123)10
(3) (499)10
(4) (253)10
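The repeated-division method just described can be sketched as a short program (Python here, for illustration; the function name to_binary is our own):

```python
def to_binary(n):
    # Repeatedly divide by 2, collecting the remainders, then read them
    # from bottom to top -- exactly the manual method shown above.
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(n % 2)  # remainder of the division by 2
        n //= 2                   # integer-divide to get the next quotient
    # the remainders come out bottom-up, so reverse them
    return "".join(str(bit) for bit in reversed(remainders))

print(to_binary(13))   # prints 1101, matching the worked example
print(to_binary(112))  # prints 1110000
```

You can also use it to check your answers to the exercise above.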

Converting BINARY numbers to DECIMAL

Let's see how to convert binary numbers to decimal: multiply each binary digit by the power of 2 corresponding to its position (counting from 0 at the rightmost digit) and add up the results.
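The position-by-position method can likewise be sketched as a short program (Python, for illustration; to_decimal is an illustrative name):

```python
def to_decimal(binary):
    # Each digit is multiplied by 2 raised to its position, counting
    # from 0 at the rightmost digit, and the products are summed.
    total = 0
    for position, digit in enumerate(reversed(binary)):
        total += int(digit) * (2 ** position)
    return total

print(to_decimal("1101"))     # prints 13
print(to_decimal("1110000"))  # prints 112
```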

The physical memory of a computer is either Random Access Memory (RAM), which can be read or changed by the user or computer, or Read-Only Memory (ROM), which can be read by the computer but not altered. One way to store memory is within the circuitry of the computer, usually in tiny computer chips that hold millions of bytes of information. The memory within these computer chips is RAM. Memory also can be stored outside the circuitry of the computer on external storage devices, such as magnetic floppy disks, which can store about 2 megabytes of information; hard drives, which can store thousands of megabytes of information; CD-ROMs (compact discs), which can store up to 630 megabytes of information; and DVDs (digital video discs), which can store 8.5 gigabytes of information. A single CD-ROM can store nearly as much information as 700 floppy disks can, and a DVD can hold 15 times as much data as a CD-ROM.

C. The Bus

The bus is usually a flat cable with numerous parallel wires. The bus enables the components in a computer, such as the CPU and memory, to communicate. Typically, several bits at a time are sent along the bus. For example, a 16-bit bus, with 16 parallel wires, allows the simultaneous transmission of 16 bits (2 bytes) of information from one device to another.

D. Input Devices

Input devices, such as a keyboard or mouse, permit the computer user to communicate with the computer. Other input devices include a joystick, a rodlike device often used by game players; a scanner, which converts images such as photographs into binary information that the computer can manipulate; a light pen, which can draw on, or select objects from, a computer's video display by pressing the pen against the display's surface; a touch panel, which senses the placement of a user's finger; and a microphone, used to gather sound information.

E. The Central Processing Unit

Information from an input device or memory is communicated via the bus to the central processing unit (CPU), which is the part of the computer that translates commands and runs programs. The CPU is a microprocessor chip, that is, a single piece of silicon containing millions of electrical components. Information is stored in a CPU memory location called a register. Registers can be thought of as the CPU's tiny scratchpad, temporarily storing instructions or data. When a program is run, one register, called the program counter, keeps track of which program instruction comes next. The CPU's control unit coordinates and times the CPU's functions, and it retrieves the next instruction from memory. In a typical sequence, the CPU locates the next instruction in the appropriate memory device. The instruction then travels along the bus from the computer's memory to the CPU, where it is stored in a special instruction register. Meanwhile, the program counter is incremented to prepare for the next instruction. The current instruction is analyzed by a decoder, which determines what the instruction will do. Any data the instruction needs are retrieved via the bus and placed in the CPU's registers. The CPU executes the instruction, and the results are stored in another register or copied to specific memory locations.

F. Output Devices

Once the CPU has executed the program instruction, the program may request that information be communicated to an output device, such as a video display monitor or a flat liquid crystal display. Other output devices are printers, overhead projectors, videocassette recorders (VCRs), and speakers. See also Input/Output Devices.

G. Storage Devices

When power to the PC is switched off, the contents of memory are lost. It is the hard disk which serves as a bulk, non-volatile medium for the storage of user files, data and applications. It is hard to believe that not so long ago 100 MB of hard disk space was considered generous. Today this would be totally inadequate, hardly enough to install the operating system alone.
A magnetic disk is a circular platter of plastic which is coated with magnetizable material. One of the key components of a magnetic disk is a conducting coil, called a head, which performs the job of reading from and writing to the magnetic surface. The head remains stationary while the disk rotates below it for the reading or writing operation.

A floppy disk consists of a flexible thin sheet of plastic material with a magnetic coating, on which recording is arranged in concentric circles called tracks. Floppy disks became a convenient recording medium for transporting information from one location to another.

Magnetic tapes are mounted on reels, or in a cartridge or cassette, to store large volumes of backup data. They are cheaper and, since they are removable from the drive, they provide unlimited storage capacity. Since the recording is like that of a tape recorder used in an audio system, information retrieval is only sequential and not random. Magnetic tapes are therefore not suitable for on-line retrieval of data, since sequential searching takes a long time, but they are convenient for archival and backup storage.

The CD-ROM (Compact Disc Read-Only Memory) is a direct extension of the audio CD. CD-ROM players are more rugged and have an error-correction facility, which ensures proper data transfer from the CD-ROM to the main memory of the computer. A CD-ROM is written to during the process of manufacture by a high-power laser beam. Information is retrieved from a CD-ROM using a low-power laser, which is generated in an optical disc drive unit. On a CD-ROM the information is stored evenly across the disk in segments of the same size. Therefore, the amount of data stored on a track increases as we go towards the outer edge of the disk, and CD-ROMs are rotated at variable speed for the reading process.

PROGRAMMING LANGUAGES
Programming languages contain the series of commands that create software. In general, a language that is encoded in binary numbers, or a language similar to binary numbers, is understood more directly by a computer's hardware, and a program written in this type of language also runs faster. Languages that use words or other commands that reflect how humans think are easier for programmers to use, but they are slower because the language must first be translated so the computer can understand it.

A. Machine Language

Computer programs that can be run by a computer's operating system are called executables. An executable program is a sequence of extremely simple instructions known as machine code. These instructions are specific to the individual computer's CPU and associated hardware; for example, Intel Pentium and PowerPC microprocessor chips each have different machine languages and require different sets of codes to perform the same task. Machine code instructions are few in number (roughly 20 to 200, depending on the computer and the CPU). Typical instructions are for copying data from a memory location or for adding the contents of two memory locations (usually registers in the CPU). Machine code instructions are binary, that is, sequences of bits (0s and 1s). Because these numbers are not understood easily by humans, computer instructions usually are not written in machine code.

B. Assembly Language

Assembly language uses commands that are easier for programmers to understand than machine-language commands. Each machine language instruction has an equivalent command in assembly language. For example, in assembly language, the statement "MOV A, B" instructs the computer to copy data from one location to another. The same instruction in machine code is a string of 16 0s and 1s. Once an assembly-language program is written, it is converted to a machine-language program by another program called an assembler.
Assembly language is fast and powerful because of its close correspondence with machine language. It is still difficult to use, however, because assembly-language instructions are a series of abstract codes. In addition, different CPUs use different machine languages and therefore require different assembly languages. Assembly language is sometimes inserted into a high-level language program to carry out specific hardware tasks or to speed up a high-level program.

C. High-Level Languages

High-level languages were developed because of the difficulty of programming in assembly languages. High-level languages are easier to use than machine and assembly languages because their commands resemble natural human language. In addition, these languages are not CPU-specific. Instead, they contain general commands that work on different CPUs. For example, a programmer writing in the high-level Pascal programming language who wants to display a greeting need include only the following command:

Write('Hello, User!');

This command directs the computer's CPU to display the greeting, and it will work no matter what type of CPU the computer uses. Like assembly-language instructions, high-level languages also must be translated, but here a compiler is used. A compiler turns a high-level program into a CPU-specific machine language. For example, a programmer may write a program in a high-level language such as C and then prepare it for different machines, such as a Cray Y-MP supercomputer or a personal computer, using compilers designed for those machines. This speeds the programmer's task and makes the software more portable to different users and machines.

American naval officer and mathematician Grace Murray Hopper helped develop the first commercially available high-level software language, FLOW-MATIC, in 1957. Hopper is credited with coining the term bug, which indicates a computer malfunction; in 1945 she discovered a hardware failure in the Mark II computer caused by a moth trapped between its mechanical relays.

From 1954 to 1958 American computer scientist John Backus of International Business Machines, Inc. (IBM) developed FORTRAN, an acronym for FORmula TRANslation. It became a standard programming language because it can process mathematical formulas. FORTRAN and its variations are still in use today.

Beginner's All-purpose Symbolic Instruction Code, or BASIC, was developed by Hungarian-American mathematician John Kemeny and American mathematician Thomas Kurtz at Dartmouth College in 1964. The language was easier to learn than its predecessors and became popular due to its friendly, interactive nature and its inclusion on early personal computers (PCs). Unlike other languages that require that all their instructions be translated into machine code first, BASIC is interpreted, that is, it is turned into machine language line by line as the program runs. BASIC commands typify high-level languages because of their simplicity and their closeness to natural human language. For example, a program that divides a number in half can be written as

10 INPUT "ENTER A NUMBER", X
20 Y=X/2
30 PRINT "HALF OF THAT NUMBER IS", Y

The numbers that precede each line are chosen by the programmer to indicate the sequence of the commands. The first line prints "ENTER A NUMBER" on the computer screen followed by a question mark to prompt the user to type in the number labeled "X". In the next line, that number is divided by two, and in the third line, the result of the operation is displayed on the computer screen. Other high-level languages in use today include C, Ada, Pascal, LISP, Prolog, COBOL, HTML, and Java. New compilers are being developed, and many features available in one language are being made available in others.

D. Object-Oriented Programming Languages

Object-oriented programming (OOP) languages, such as C++, are based on traditional high-level languages, but they enable a programmer to think in terms of collections of cooperating objects instead of lists of commands.
Objects, such as a circle, have properties, such as the radius of the circle, and behaviours, such as the command that draws it on the computer screen. Classes of objects can inherit features from other classes of objects. For example, a class defining squares can inherit features such as right angles from a class defining rectangles. This use of programming classes simplifies the programmer's task, resulting in more reliable and efficient programs.
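The rectangle/square example above can be sketched in any object-oriented language. This short Python sketch (class names are illustrative) shows a Square class inheriting the area() behaviour of a Rectangle class:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    # A square is a rectangle whose sides are equal, so it inherits
    # area() from Rectangle and only constrains the constructor.
    def __init__(self, side):
        super().__init__(side, side)

print(Square(4).area())  # prints 16, using the inherited area() method
```

The Square class never defines area() itself; the inherited behaviour is reused unchanged, which is the simplification the paragraph describes.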

TYPES OF COMPUTERS
A. Digital and Analog

Computers can be either digital or analog. Digital refers to the processes in computers that manipulate binary numbers (0s or 1s), which represent switches that are turned on or off by electrical current. Analog refers to numerical values that have a continuous range. Both 0 and 1 are analog numbers, but so is 1.5 or a number like π (approximately 3.14). As an example, consider a desk lamp. If it has a simple on/off switch, then it is digital, because the lamp either produces light at a given moment or it does not. If a dimmer replaces the on/off switch, then the lamp is analog, because the amount of light can vary continuously from on to off and all intensities in between.

Analog computer systems were the first type to be produced. A popular analog computer used in the 20th century was the slide rule. To perform calculations with a slide rule, the user slides a narrow, gauged wooden strip inside a ruler-like holder. Because the sliding is continuous and there is no mechanism to stop at one exact value, the slide rule is analog. New interest has been shown recently in analog computers, particularly in areas such as neural networks that respond to continuous electrical signals. Most modern computers, however, are digital machines whose components have a finite number of

states, for example, the 0 or 1, or on or off, of bits. These bits can be combined to denote information such as numbers, letters, graphics, and program instructions.

B. Range of Computer Ability

Computers exist in a wide range of sizes and power. The smallest are embedded within the circuitry of appliances, such as televisions and wristwatches. These computers are typically preprogrammed for a specific task, such as tuning to a particular television frequency or keeping accurate time. Programmable computers vary enormously in their computational power, speed, memory, and physical size. The smallest of these computers can be held in one hand and are called personal digital assistants (PDAs). They are used as notepads, scheduling systems, and address books; if equipped with a cellular phone, they can connect to worldwide computer networks to exchange information regardless of location. Laptop computers and PCs are typically used in businesses and at home to communicate on computer networks, for word processing, to track finances, and to play games. They have large amounts of internal memory to store hundreds of programs and documents. They are equipped with a keyboard; a mouse, trackball, or other pointing device; and a video display monitor or liquid crystal display (LCD) to display information. Laptop computers usually have similar hardware and software to PCs, but they are more compact and have flat, lightweight LCDs instead of video display monitors. Workstations are similar to personal computers but have greater memory and more extensive mathematical abilities, and they are connected to other workstations or personal computers to exchange data. They are typically found in scientific, industrial, and business environments that require high levels of computational ability. Mainframe computers have more memory, speed, and capabilities than workstations and are usually shared by multiple users through a series of interconnected computers. They control businesses and industrial facilities and are used for scientific research. The most powerful mainframe computers, called supercomputers, process complex and time-consuming calculations, such as those used to create weather predictions. They are used by the largest businesses, scientific institutions, and the military. Some supercomputers have many sets of CPUs. These computers break a task into small pieces, and each CPU processes a portion of the task to increase overall speed and efficiency. Such computers are called parallel processors.

NETWORKS
Computers can communicate with other computers through a series of connections and associated hardware called a network. The advantage of a network is that data can be exchanged rapidly, and software and hardware resources, such as hard-disk space or printers, can be shared.

(Figure 1.4)

One type of network, a local area network (LAN), consists of several PCs or workstations connected to a special computer called the server. The server stores and manages programs and data. A server often contains all of a networked group's data and enables LAN workstations to be set up without storage capabilities, to reduce cost. Mainframe computers and supercomputers commonly are networked. They may be connected to PCs, workstations, or terminals that have no computational abilities of their own. These "dumb" terminals are used only to enter data into, or receive output from, the central computer.

Wide area networks (WANs) are networks that span large geographical areas. Computers can connect to these networks to use facilities in another city or country. For example, a person in Los Angeles can browse through the computerized archives of the Library of Congress in Washington, D.C. The largest WAN is the Internet, a global consortium of networks linked by common communication programs. The Internet is a mammoth resource of data, programs, and utilities. Its origins lie largely in work begun in 1973 by American computer scientist Vinton Cerf as part of a project of the United States Department of Defense Advanced Research Projects Agency (DARPA). In 1984 the development of Internet technology was turned over to private, government, and scientific agencies. The World Wide Web, developed in 1989 by British physicist Timothy Berners-Lee, is a system of information resources accessed primarily through the Internet. Users can obtain a variety of information in the form of text, graphics, sounds, or animations. These data are extensively cross-indexed, enabling users to browse (transfer from one information site to another) via buttons, highlighted text, or sophisticated searching software known as search engines.

TOPOLOGY

The way in which a network is configured is called its network topology. In other words, the network topology is the physical connectivity of the network.
The first major goal in establishing a topology for the network is to provide the least-cost path between the application processes residing on the DTEs (Data Terminal Equipment). DTE is a generic term used to describe the end-user machine, which is usually a computer or terminal. The second major goal in establishing a topology is to provide the best possible response time and throughput. Short response time entails minimizing delay between the transmission and the receipt of the data between the DTEs, and is especially important for interactive sessions between user applications.

Horizontal Topology (Bus)

The horizontal, or bus, topology is illustrated in the figure below. This arrangement is quite popular in local area networks. It is relatively simple to control traffic flow between and among the DTEs because the bus permits all stations to receive every transmission; that is, a single station broadcasts to multiple stations. The main drawback of a horizontal topology stems from the fact that usually only one communication channel exists to service all the devices on the network. Consequently, in the event of a failure of the communication channel, the entire network is lost.

(Figure 1.5)

Star Topology

The star topology is another widely used structure for data communications systems. One of the major reasons for its continued use is historical precedence. The star network was used in the 1960s and early 1970s because it was easy to control: the software is not complex and the traffic flow is simple. All traffic emanates from the hub of the star, the central site in the figure below, labeled A. Site A, typically a computer, is in complete control of the DTEs attached to it. Consequently, it is quite similar to the hierarchical topology, except that the star topology has limited distributed processing capabilities. Fault isolation is relatively simple in a star network because the lines can be isolated to identify the problem. However, like the hierarchical structure, the star network is subject to potential bottleneck and failure problems at the central site.

(Figure 1.6)

Mesh Topology

The mesh topology has come into some use in the last few years (figure below). Its attraction is its relative immunity to bottleneck and failure problems. Because of the multiplicity of paths between DTEs, traffic can be routed around failed components or busy nodes. Even though this approach is an expensive undertaking, some users prefer the reliability of the mesh network to that of the others, especially for networks with only a few nodes that need to be connected.

(Figure 1.7)
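The routing-around-failures property of a mesh can be demonstrated with a small graph search. The link table below is an invented four-node example, and breadth-first search stands in for whatever routing algorithm a real network would use; the point is only that when one node fails, an alternative path usually remains.

```python
from collections import deque

# Hypothetical four-node mesh: each node lists its directly linked neighbours.
links = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}

def find_path(src, dst, failed=()):
    """Breadth-first search for a path from src to dst, skipping failed nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

print(find_path("B", "D"))                # a shortest path, e.g. ['B', 'A', 'D']
print(find_path("B", "D", failed={"A"}))  # ['B', 'C', 'D'] - routed around A
```

When node A fails, traffic from B still reaches D via C; only when both intermediate nodes fail does the search return `None`, i.e. the destination becomes unreachable.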

HARDWARE HISTORY OVERVIEW


Modern computing can probably be traced back to the Harvard Mk I and Colossus (both of 1943). Colossus was an electronic computer built in Britain at the end of 1943 and designed to crack the German coding system, the Lorenz cipher. The Harvard Mk I was a more general-purpose electro-mechanical programmable computer built at Harvard University with backing from IBM. These machines were among the first of the first generation computers. First generation computers were normally based around wired circuits containing vacuum valves and used punched cards as the main (non-volatile) storage medium. Another general-purpose computer of this era was ENIAC (Electronic Numerical Integrator and Computer), which was completed in 1946. It was typical of first generation computers: it weighed 30 tonnes, contained 18,000 electronic valves and consumed around 25 kW of electrical power. It was, however, capable of an impressive 100,000 calculations a second. The next major step in the history of computing was the invention of the transistor in 1947. This replaced the inefficient valves with a much smaller and more reliable component. Transistorised computers are normally referred to as second generation and dominated the late 1950s and early 1960s. Despite using transistors and printed circuits, these computers were still bulky and strictly the domain of universities and governments. The explosion in the use of computers began with third generation computers. These relied on Jack St. Clair Kilby's invention, the integrated circuit or microchip; the first integrated circuit was produced in September 1958, but computers using them didn't begin to appear until 1963. While large mainframes such as the IBM 360 increased storage and processing capabilities further, the integrated circuit allowed the development of minicomputers that began to bring computing into many smaller businesses.
Large scale integration of circuits led to the development of very small processing units; an early example is the processor used for analysing flight data in the US Navy's F-14A Tomcat fighter jet. This processor was developed by Steve Geller, Ray Holt and a team from AiResearch and American Microsystems. On November 15th, 1971, Intel released the world's first commercial microprocessor, the 4004. Fourth generation computers developed from this, using a microprocessor to place much of the computer's processing ability on a single (small) chip. Coupled with another of Intel's inventions, the RAM chip (kilobits of memory on a single chip), the microprocessor allowed fourth generation computers to be even smaller and faster than ever before. The

4004 was only capable of 60,000 instructions per second, but later processors (such as the 8086, on which all of Intel's processors for the IBM PC and compatibles are based) brought ever-increasing speed and power to computers. Supercomputers of the era were immensely powerful, like the Cray-1, which could perform 150 million floating point operations per second. The microprocessor allowed the development of microcomputers, personal computers that were small and cheap enough to be available to ordinary people. The first such personal computer was the MITS Altair 8800, released at the end of 1974, but it was followed by computers such as the Apple I and II, the Commodore PET and eventually the original IBM PC in 1981. Although processing power and storage capacities have increased beyond all recognition since the 1970s, the underlying technology of LSI (large scale integration) or VLSI (very large scale integration) microchips has remained basically the same, so it is widely regarded that most of today's computers still belong to the fourth generation.
