
COMPUTER FUNDAMENTALS

Pranjal Agarwal S7-D

"People always fear change. People feared electricity when it was invented, didn't they? People feared coal; they feared gas-powered engines. There will always be ignorance, and ignorance leads to fear. But with time, people will come to accept their silicon masters." - Bill Gates

Computer
A computer is a programmable machine designed to sequentially and automatically carry out a sequence of arithmetic or logical operations. The particular sequence of operations can be changed readily, allowing the computer to solve more than one kind of problem. Conventionally, a computer consists of some form of memory for data storage, at least one element that carries out arithmetic and logic operations, and a sequencing and control element that can change the order of operations based on the information that is stored. Peripheral devices allow information to be entered from an external source and allow the results of operations to be sent out. A computer's processing unit executes a series of instructions that make it read, manipulate and then store data. Conditional instructions change the sequence of instructions as a function of the current state of the machine or its environment.

Generation of Computers
Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices.

1. First Generation (1940-1956): Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer; it was delivered to its first client, the U.S. Census Bureau, in 1951.

2. Second Generation (1956-1963): Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat, which could damage the computer, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.

3. Third Generation (1964-1971): Integrated Circuits


The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips (semiconductor material), which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

4. Fourth Generation (1971-Present): Microprocessors


The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.

5. Fifth Generation (Present and Beyond): Artificial Intelligence


Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

Booting
In computing, booting (also known as booting up) is a bootstrapping process that starts operating systems when the user turns on a computer system. A boot sequence is the initial set of operations that the computer performs when power is switched on. The boot loader typically loads the main operating system for the computer.

Software Concepts
Types of software:

1. System software
System software is computer software designed to operate the computer hardware and to provide a platform for running application software. The most basic types of system software are:

- The computer BIOS and device firmware, which provide basic functionality to operate and control the hardware connected to or built into the computer.
- The operating system (prominent examples being Microsoft Windows, Mac OS X and Linux), which allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It also provides a platform to run high-level system software and application software.

2. Utility software
Utility software is a kind of system software designed to help analyze, configure, optimize and maintain the computer. A single piece of utility software is usually called a utility or tool. Utility software should be contrasted with application software, which allows users to do things like create text documents, play games, listen to music or surf the web. Rather than providing these kinds of user-oriented or output-oriented functionality, utility software usually focuses on how the computer infrastructure operates.

3. Application software
Application software, also known as an application or an "app", is computer software designed to help the user perform a single task or multiple related tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and never need to install another.

System Software

a) Operating system
An operating system is software, consisting of programs and data, that runs on computers, manages computer hardware resources, and provides common services for the execution of various application software. The operating system is the most important type of system software in a computer system. Without an operating system, a user cannot run an application program on their computer, unless the application program is self-booting. For hardware functions such as input, output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware, although the application code is usually executed directly by the hardware and will frequently call the OS or be interrupted by it.

Types of operating systems are:

i. Real-time: A real-time operating system is a multitasking operating system that aims at executing real-time applications. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.

ii. Multi-user vs. single-user: A multi-user operating system allows multiple users to access a computer system concurrently. Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time. Single-user operating systems, as opposed to multi-user operating systems, are usable by a single user at a time. Being able to have multiple accounts on a Windows operating system does not make it a multi-user system; rather, only the network administrator is the real user. On a Unix-like operating system, however, it is possible for two users to log in at a time, and this capability of the OS makes it a multi-user operating system.

iii. Multi-tasking vs. single-tasking: When a single program is allowed to run at a time, the system is grouped under single-tasking systems, while if the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system. Multitasking can be of two types: pre-emptive or cooperative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner (see the sketch after this list). Versions of MS Windows prior to Windows 95 supported cooperative multitasking.

iv. Mobile: Mobile operating systems include Android, Symbian, Maemo, etc.
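
To make the pre-emptive/cooperative distinction concrete, here is a minimal sketch of cooperative multitasking using Python generators. It is an illustrative analogy rather than a real OS scheduler; the task and scheduler names are hypothetical.

    # Cooperative multitasking sketch: each task runs until it voluntarily
    # yields control, as processes did under pre-Windows-95 schedulers.
    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # hand control back to the scheduler voluntarily

    def scheduler(tasks):
        # Round-robin loop standing in for the operating system.
        while tasks:
            current = tasks.pop(0)
            try:
                next(current)          # run the task until its next yield
                tasks.append(current)  # re-queue it for another turn
            except StopIteration:
                pass                   # task finished; drop it

    scheduler([task("A", 2), task("B", 3)])

A misbehaving task that never yields would starve every other task, which is exactly why pre-emptive multitasking replaced this scheme.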

b) Compiler
A compiler is a computer program (or a set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language). The most common reason for wanting to transform source code is to create an executable program.
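
As a toy illustration of transforming a source language into a target language, here is a minimal sketch in Python that compiles arithmetic expressions into instructions for a hypothetical stack machine (the instruction names are invented for this example; production compilers are far more elaborate):

    import ast

    def compile_expr(source):
        # Translate an arithmetic expression into stack-machine instructions.
        ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

        def emit(node):
            if isinstance(node, ast.BinOp):
                # Post-order: compile both operands, then the operator.
                return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
            if isinstance(node, ast.Constant):
                return [f"PUSH {node.value}"]
            raise SyntaxError("unsupported construct")

        return emit(ast.parse(source, mode="eval").body)

    print(compile_expr("1 + 2 * 3"))
    # ['PUSH 1', 'PUSH 2', 'PUSH 3', 'MUL', 'ADD']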

Utility Software

a) Anti-Virus


Antivirus or anti-virus software is used to prevent, detect, and remove malware, including but not limited to computer viruses, computer worms, Trojan horses, spyware and adware. The term refers to the software used for the prevention and removal of such threats, rather than to computer security implemented by other methods.

b) File Management Tools


A file manager or file browser is a computer program that provides a user interface to work with file systems. The most common operations performed on files or groups of files are: create, open, edit, view, print, play, rename, move, copy, delete, search/find, and modify attributes, properties and permissions.

Number System

1. Binary
The binary numeral system, or base-2 number system, represents numeric values using two symbols, 0 and 1. More specifically, the usual base-2 system is a positional notation with a radix of 2. Owing to its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by all modern computers.
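
For instance, the binary numeral 1011 means 1x2^3 + 0x2^2 + 1x2^1 + 1x2^0 = 11 in decimal. This can be checked with Python's built-in conversion functions:

    print(int("1011", 2))  # 11: interpret the string as a base-2 numeral
    print(bin(11))         # '0b1011': convert decimal back to binary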

2. Octal
The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three, starting from the right.
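
For example, the binary numeral 110101 groups (from the right) into 110 101, which reads off as octal 65. A quick check with Python built-ins:

    print(oct(0b110101))  # '0o65': the groups 110 and 101 become 6 and 5
    print(int("65", 8))   # 53: the same value expressed in decimal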

3. Hexadecimal
In mathematics and computer science, hexadecimal (also base 16, or hex) is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0-9 to represent values zero to nine, and A, B, C, D, E, F to represent values ten to fifteen.

Each hexadecimal digit represents four binary digits (bits), also called a "nibble", and the primary use of hexadecimal notation is as a human-friendly representation of binary coded values in computing and digital electronics. Hexadecimal is also commonly used to represent computer memory addresses.
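
For example, the byte 11111010 splits into the nibbles 1111 and 1010, written in hexadecimal as FA. Verified with Python built-ins:

    print(hex(0b11111010))  # '0xfa': nibbles 1111 and 1010 become F and A
    print(int("FA", 16))    # 250: the same value expressed in decimal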

Internal Storage encoding of Characters

1. ASCII
The American Standard Code for Information Interchange is a character-encoding scheme based on the ordering of the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that use text.
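
For example, ASCII assigns the letter A the code 65. Python's built-in functions expose the mapping directly:

    print(ord("A"))              # 65: the ASCII code of 'A'
    print(chr(65))               # 'A': the character for code 65
    print("Hi".encode("ascii"))  # b'Hi': stored as the bytes 72 and 105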

2. ISCII
Indian Standard Code for Information Interchange (ISCII) is a coding scheme for representing various writing systems of India. It encodes the main Indic scripts and a Roman transliteration. The supported scripts are: Assamese, Bengali, Devanagari, Gujarati, Gurmukhi, Kannada, Malayalam, Oriya, Tamil, and Telugu. ISCII does not encode the writing systems of India based on Arabic, but its writing-system switching codes nonetheless provide for Kashmiri, Sindhi, Persian, etc.

3. UNICODE
Unicode is a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems. Unicode 6.0 consists of a repertoire of more than 109,000 characters covering 93 scripts, a set of code charts for visual reference, an encoding methodology and set of standard character encodings, an enumeration of character properties such as upper and lower case, a set of reference data computer files, and a number of related items, such as rules for normalization, decomposition, collation, rendering, and bidirectional display order.
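
A small demonstration of the distinction between Unicode code points and their encoded bytes, using Python's built-in str and bytes types (the sample word is arbitrary):

    s = "नमस्ते"  # Devanagari text, one of the many scripts Unicode covers
    print([hex(ord(c)) for c in s])  # the code point of each character
    print(s.encode("utf-8"))         # the same text as UTF-8 bytes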

Microprocessor
A microprocessor incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC, or microchip). It is a multipurpose, programmable, clock-driven, register-based electronic device that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output.

Memory Concepts
1. Byte
The byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, a byte was the number of bits used to encode a single character of text in a computer, and it is for this reason the basic addressable element in many computer architectures. The size of the byte has historically been hardware dependent and no definitive standards exist that mandate the size. The standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size.
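
Since eight bits allow 2^8 = 256 distinct patterns, one byte can hold the values 0 through 255, which is easy to confirm in Python:

    print(2 ** 8)        # 256 distinct patterns in eight bits
    print(bytes([255]))  # b'\xff': the largest value one byte can hold
    # bytes([256]) would raise ValueError: bytes must be in range(0, 256)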

2. Kilo Byte
The kilobyte (symbol: kB) is a multiple of the unit byte for digital information. Although the prefix kilo- means 1000, the term kilobyte and symbol KB have historically been used to refer to either 1024 (2^10) bytes or 1000 (10^3) bytes, dependent upon context, in the fields of computer science and information technology.

3. Mega Byte

The megabyte is a multiple of the unit byte for digital information storage or transmission with two different values depending on context: 1,048,576 bytes (2^20), generally for computer memory; and one million bytes (10^6), generally for computer storage.

4. Giga Byte
The gigabyte is a multiple of the unit byte for digital information storage. The prefix giga means 10^9 in the International System of Units (SI); therefore 1 gigabyte is 1,000,000,000 bytes. The unit symbol for the gigabyte is GB or Gbyte, but not Gb (lower case b), which is typically used for the gigabit.

5. Tera Byte
A terabyte is a multiple of the unit byte for digital information. The prefix tera means 10^12 in the International System of Units (SI), and therefore 1 terabyte is 1,000,000,000,000 bytes, or 1 trillion bytes, or 1000 gigabytes. 1 terabyte in binary prefixes is 0.9095 tebibytes, or 931.32 gibibytes. The unit symbol for the terabyte is TB or Tbyte.

6. Peta Byte
A petabyte is a unit of information equal to one quadrillion bytes, or 1000 terabytes. The unit symbol for the petabyte is PB. The prefix peta- (P) indicates a power of 1000: 1 PB = 1,000,000,000,000,000 B = 1000^5 B = 10^15 B. When used for computer memory, the petabyte can also refer to the corresponding power of 1024 (2^50 bytes).
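
The decimal-versus-binary ambiguity running through the entries above can be checked directly; for instance, the terabyte figures quoted earlier follow from a few lines of Python:

    TB, TiB, GiB = 10**12, 2**40, 2**30
    print(TB / TiB)  # ~0.9095: one decimal terabyte in binary tebibytes
    print(TB / GiB)  # ~931.32: the same terabyte in binary gibibytes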

Primary Memory

1. Cache

A cache is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (a cache hit), the request can be served by simply reading the cache, which is comparatively fast. Otherwise (a cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slow. Hence, the more requests that can be served from the cache, the faster the overall system performs.
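
A cache in miniature: the sketch below memoizes an expensive computation in a Python dictionary. It is only an analogy for the hardware mechanism; real caches are bounded in size and must also evict entries, which is omitted here.

    cache = {}

    def expensive_square(n):
        if n in cache:       # cache hit: serve the stored copy
            return cache[n]
        result = n * n       # cache miss: recompute from the original source
        cache[n] = result    # store the result for future requests
        return result

    print(expensive_square(12))  # miss: computed and stored
    print(expensive_square(12))  # hit: served straight from the cache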

2. RAM
Random-access memory (RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order with a worst-case performance of constant time. Strictly speaking, modern types of DRAM are therefore not random access, as data is read in bursts, although the name DRAM/RAM has stuck. However, many types of SRAM, ROM, OTP, and NOR flash are still random access even in the strict sense. RAM is often associated with volatile types of memory (such as DRAM memory modules), where stored information is lost if the power is removed.

3. ROM
Read-only memory (ROM) is a class of storage medium used in computers and other electronic devices. Data stored in ROM cannot be modified, or can be modified only slowly or with difficulty, so it is mainly used to distribute firmware (software that is very closely tied to specific hardware, and unlikely to need frequent updates).

Secondary Memory

1. Hard Disk Drive


A hard disk drive (HDD) is a non-volatile, random-access digital data storage device. It features rotating rigid platters on a motor-driven spindle within a protective enclosure. Data is magnetically read from and written to the platters by read/write heads that float on a film of air above the platters.

2. USB Flash Drive


A USB flash drive consists of a flash memory data storage device integrated with a USB (Universal Serial Bus) interface. USB flash drives are typically removable and rewritable, and physically much smaller than a floppy disk. Most weigh less than 30 g (1 oz). Storage capacities in 2010 can be as large as 256 GB, with steady improvements in size and price per capacity expected. Some allow 1 million write or erase cycles and offer a 10-year shelf storage time.

3. DVD Drive
DVD is an optical disc storage media format, invented and developed by Philips, Sony, Toshiba, and Panasonic in 1995. DVDs offer higher storage capacity than compact discs while having the same dimensions. A normal DVD provides a storage capacity of up to 4.7 GB, and some (double-sided, dual-layer discs) have a 17.08 GB storage capacity.

Input Output Ports/Connections

1. USB
USB stands for Universal Serial Bus. A USB port is a standard cable connection interface on personal computers and consumer electronics. USB ports allow stand-alone electronic devices to be connected via cables to a computer.

2. PS/2 port
The PS/2 connector is a 6-pin Mini-DIN connector used for connecting some keyboards and mice to a PC compatible computer system.

3. Serial port
In computing, a serial port is a serial communication physical interface through which information transfers in or out one bit at a time.
