
An operating system is a program that manages the computer hardware. It also provides a basis for application programs and acts as an intermediary between a user of a computer and the computer hardware. It simply provides an environment within which other programs can do useful work.
A more common definition is that the operating system is the one program running at all times on the computer (usually called the kernel), with everything else being system programs and application programs. Some operating systems are designed to be convenient, others to be efficient, and others some combination of the two.

Functions of an Operating System:


At the simplest level, an operating system does two things:

 It manages the hardware and software resources of the system. In a desktop computer, these
resources include such things as the processor, memory, disk space and more (On a cell phone, they
include the keypad, the screen, the address book, the phone dialer, the battery and the network
connection).
 It provides a stable, consistent way for applications to deal with the hardware without having to
know all the details of the hardware.

The first task, managing the hardware and software resources, is very important, as various programs
and input methods compete for the attention of the central processing unit (CPU) and demand memory,
storage and input/output (I/O) bandwidth for their own purposes. In this capacity, the operating system
plays the role of the good parent, making sure that each application gets the necessary resources while
playing nicely with all the other applications, as well as husbanding the limited capacity of the system to
the greatest good of all the users and applications.

The second task, providing a consistent application interface, is especially important if there is to be
more than one of a particular type of computer using the operating system, or if the hardware making up
the computer is ever open to change. A consistent application program interface (API) allows a software
developer to write an application on one computer and have a high level of confidence that it will run on
another computer of the same type, even if the amount of memory or the quantity of storage is different
on the two machines.

Types of Operating Systems

Within the broad family of operating systems, there are four types, categorized by the types of
computers they control and the sorts of applications they support. The categories are:

I. Real-time operating system (RTOS) - Real-time operating systems are used to control machinery,
scientific instruments and industrial systems. An RTOS typically has very little user-interface
capability, and no end-user utilities, since the system will be a "sealed box" when delivered for use. A
very important part of an RTOS is managing the resources of the computer so that a particular
operation executes in precisely the same amount of time, every time it occurs. In a complex machine,
having a part move more quickly just because system resources are available may be just as
catastrophic as having it not move at all because the system is busy.
II. Single-user, single task - As the name implies, this operating system is designed to manage the
computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld
computers is a good example of a modern single-user, single-task operating system.
III. Single-user, multi-tasking - This is the type of operating system most people use on their desktop and
laptop computers today. Microsoft's Windows and Apple's MacOS platforms are both examples of
operating systems that will let a single user have several programs in operation at the same time. For
example, it's entirely possible for a Windows user to be writing a note in a word processor while
downloading a file from the Internet while printing the text of an e-mail message.
IV. Multi-user - A multi-user operating system allows many different users to take advantage of the
computer's resources simultaneously. The operating system must make sure that the requirements of the
various users are balanced, and that each of the programs they are using has sufficient and separate
resources so that a problem with one user doesn't affect the entire community of users. Unix, VMS and
mainframe operating systems, such as MVS, are examples of multi-user operating systems.

OPERATING SYSTEM COMPONENTS

A system as large and complex as an operating system can be created only by partitioning it into smaller
pieces. Each piece should be a well-delineated portion of the system, with carefully defined inputs, outputs, and
functions. Obviously, not all systems have the same structure. However, many modern systems share the
goal of supporting the system components outlined below:

 Process management
A process can be thought of as a program in execution. A process will need certain resources-such
as CPU time, memory, files, and I/O devices-to accomplish its task. These resources are allocated to the
process either when it is created or while it is executing.
A process is the unit of work in most systems. Such a system consists of a collection of processes:
• Operating-system processes execute system code; and
• User processes execute user code.
All these processes may execute concurrently. Although traditionally a process contained only a single
thread of control as it ran, most modern operating systems now support processes that have multiple threads.
The operating system is responsible for the following activities in connection with process and
thread management:
• The creation and deletion of both user and system processes
• The scheduling of processes; and
• The provision of mechanisms for synchronization, communication, and deadlock handling for
processes.

 Main Memory Management


Main memory is central to the operation of a modern computer system. It is a large array of
words or bytes, ranging in size from hundreds of thousands to billions, each word or byte having its own
address. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices. The
central processor reads instructions from main memory during the instruction-fetch cycle, and it both reads
and writes data from main memory during the data-fetch cycle. The I/O operations implemented via DMA
also read and write data in main memory. The main memory is generally the only large storage device that
the CPU is able to address and access directly.
Selection of a memory-management scheme for a specific system depends on many factors
-especially on the hardware design of the system. Each algorithm requires its own hardware support. The
operating system is responsible for the following activities in connection with memory management:
• Keeping track of which parts of memory are currently being used and by whom
• Deciding which processes are to be loaded into memory when memory space becomes available
• Allocating and deallocating memory space as needed

 File Management
Computers can store information on several different types of physical media, such as magnetic tape, magnetic
disk, and optical disk, each having its own characteristics and physical organization. These properties include
access speed, capacity, data-transfer rate, and access method (sequential or random).
A file represents programs (both source and object forms) and data, which consists of a sequence of bits,
bytes, lines, or records whose meanings are defined by their creators.
The operating system implements the abstract concept of a file by managing mass storage media, such as disks and
tapes, and the devices that control them.
Also, files are normally organized into directories to ease their use. Finally, when multiple users have
access to files, we may want to control by whom and in what ways files may be accessed. The operating system is
responsible for the following activities in connection with file management:
• Creating and deleting files and directories
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on stable (nonvolatile) storage media

 I/O-System Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user.
Only the device driver knows the peculiarities of the specific device to which it is assigned. The I/O subsystem
consists of
• A memory-management component that includes buffering, caching, and spooling
• A general device-driver interface
• Drivers for specific hardware devices

 Secondary Storage Management


The main purpose of a computer system is to execute programs which must be stored in main memory, or primary
storage, during execution. Because main memory is too small to accommodate all data and programs, and because
the data that it holds are lost when power is lost, the computer system must provide secondary storage to back up
main memory.
Most programs-including compilers, assemblers, sort routines, editors, and formatters-are stored on a disk
until loaded into memory, and then use the disk as both the source and destination of their processing. Hence, the
proper management of disk storage is of central importance to a computer system. The operating system is
responsible for the following activities in connection with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
Because secondary storage is used frequently, it must be used efficiently. The entire speed of operation of a
computer may hinge on the speeds of the disk subsystem and of the algorithms that manipulate that subsystem.

 Networking
A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock.
Instead, each processor has its own local memory and clock, and the processors communicate with one another
through various communication lines. They may include small microprocessors, workstations, minicomputers, and
large computer systems.
Access to a shared resource allows computation speedup, increased functionality, increased data
availability, and enhanced reliability. The protocols that create a distributed system can have a great effect
on that system's utility and popularity. The innovation of the World Wide Web created a new access
method for information sharing. It improved on the existing file-transfer protocol (FTP) and network
file-system (NFS) protocol by removing the need for a user to log in before using a remote resource.
It defined a new protocol, the hypertext transfer protocol (HTTP), for use in communication between a web
server and a web browser.
 Protection System
Protection is any mechanism for controlling the access of programs, processes, or users to the
resources defined by a computer system. This mechanism must provide means for specification of the
controls to be imposed and means for enforcement. Protection mechanisms ensure that the files, memory
segments, CPU, and other resources can be operated on by only those processes that have gained proper
authorization from the operating system.
Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems. A protection-oriented system provides a means to distinguish between authorized and
unauthorized usage.
 Command Interpreter System
The command interpreter is the interface between the user and the operating system. Some operating systems
include the command interpreter in the kernel; others treat it as a special program that runs when a job is
initiated, or when a user first logs on (on time-sharing systems).
A program that reads and interprets control statements is executed automatically. This program is
sometimes called the control-card interpreter or the command-line interpreter, and is often known as the shell. Its
function is to get the next command statement and execute it.
The command statements themselves deal with process creation and management, I/O handling,
secondary-storage management, main-memory management, file-system access, protection, and networking.

Typical services that an operating system provides include:

 A task scheduler - The task scheduler is able to allocate the execution of the CPU to a number of
different tasks. Some of those tasks are the different applications that the user is running, and some
of them are operating system tasks. The task scheduler is the part of the operating system that lets
you print a document from your word processor in one window while you are downloading a file in
another window and recalculating a spreadsheet in a third window.
 A memory manager - The memory manager controls the system's RAM and normally creates a
larger virtual memory space using a file on the hard disk.
 A disk manager - The disk manager creates and maintains the directories and files on the disk.
When you request a file, the disk manager brings it in from the disk.
 A network manager - The network manager controls all data moving between the computer and the
network.

 Other I/O services manager - The OS manages the keyboard, mouse, video display, printers, etc.
 Security manager - The OS maintains the security of the information in the computer's files and
controls who can access the computer.
The Ubuntu Operating System

A brief history:

2004 - In April 2004, Mark Shuttleworth began to round up a small and dedicated group of open
source developers to create a revolutionary new Linux desktop. Based on the principles of time-based
releases, a strong Debian foundation, the GNOME desktop, and a strong commitment to freedom, this
group operated initially under the auspices of http://no-name-yet.com.

The first official release of Ubuntu was made in October 2004 and was duly named Version 4.10, thus
introducing the Y.MM numbering system. While under development, Version 4.10 was affectionately
known as "the Warty Warthog," a name which continued to live on past the time when most
development codenames die. Every release since then has had a similarly alliterative codename.

These early days in the project's history provided the basis of many of the naming conventions which
continue today. For example, the early testing community of Version 4.10 was called the Sounder,
named for the collective noun of warthogs. The Sounder mailing list continues today as an open
discussion forum for the community, and development milestones continue to be named for the
collective noun of the codename animal.

Interest in Ubuntu was dramatic from the outset. There were nearly 3000 messages on the ubuntu-users
mailing list within the first two weeks, and the community focus of the project attracted key
contributors. One of the first community driven teams, the supremely dedicated Documentation Team,
was founded in late 2004. The first incarnation of the Ubuntu Developer Summit was held in Oxford,
UK in August followed by the Mataro Sessions in Mataro, Spain in December.

2005 - The following year saw dramatic growth in the Ubuntu community. Hundreds and then
thousands of free software enthusiasts joined the community. The core development team continued to
grow, and dedicated volunteers around the world found new ways to contribute through code,
advocacy, artwork, documentation, wiki gardening, and more. The community played significant roles
in defining the future of Ubuntu at the Ubuntu Developer Summits in Sydney and Montreal.

Ubuntu 5.04 ("Hoary Hedgehog") was released in April 2005. At the same time, the first release of
Kubuntu was made, to the delight of KDE fans worldwide.

The drumbeat of timely releases continued in October 2005, with the release of Ubuntu 5.10 ("Breezy
Badger"). In addition to the much anticipated Ubuntu and Kubuntu releases, Edubuntu was released for
the first time to address the educational market.

2006 - The Ubuntu project took a significant step forward in 2006, with the release in June of its first
"Long Term Support" or LTS release. While all Ubuntu releases are provided with 18 months of free
security updates and maintenance (and commercial support), enterprise users were demanding a longer
support cycle to match their upgrade cycles. Thus Ubuntu Version 6.06 LTS ("Dapper Drake") was
produced in June. In addition to the extended support cycle, this release also marked the first time a
single CD served as a live and install CD, and in which there was a formal Server Edition.
As defined at the Ubuntu Developer Summit in Paris, Ubuntu 6.10 ("Edgy Eft") was released in
October 2006. And the development community met again at the Summit in Mountain View in
November.

Variants of Ubuntu:

Several official and unofficial Ubuntu variants exist. These Ubuntu variants simply install a set of
packages different from the original Ubuntu, but, since they draw additional packages and updates
from the same repositories as Ubuntu, all of the same software is available for each of them. Unofficial
variants and derivatives are not controlled or guided by Canonical and are generally forks with
different goals in mind. These different versions correspond to development efforts run by largely
separate groups of people who try to bring different functionalities to the distribution, such as increased
stability and/or usability for differing end-user needs implemented through various default program
configurations and user interface customizations. The "fully supported" Ubuntu derivatives include:

• Kubuntu, a desktop distribution using KDE rather than GNOME


• Edubuntu, a distribution designed for classrooms using GNOME
• Ubuntu Server Edition
• Xubuntu, a "lightweight" distribution based on the Xfce desktop environment instead of
GNOME, designed to run better on low-specification computers

Other Ubuntu distributions developed or otherwise recognized by Canonical include:

• Gobuntu, a distribution that includes only free software


• Mythbuntu, a multimedia platform based on MythTV
• Ubuntu JeOS (pronounced "juice"), described as "an efficient variant ... configured
specifically for virtual appliances"
• Ubuntu MID Edition, an Ubuntu edition that targets Mobile Internet Devices.
• Ubuntu Netbook Remix, designed for ultra-portables such as the ASUS Eee PC.
• Ubuntu Studio, a multimedia-creation form of Ubuntu

Other related derivative distributions include:

• Super Ubuntu, a remaster of Ubuntu with usability as the main focus


• Linux Mint, a distribution that includes desktop improvements and proprietary
software/drivers, with a focus on making things work out of the box.

