Installation (or setup) of a computer program (including device drivers and plugins) is the act of making the program ready for execution. Because the process varies for each program and each computer, programs (including operating systems) often come with an installer, a specialized program responsible for doing whatever is needed for their installation.
Installation typically involves code being copied or generated from the installation files to new files on the local computer for easier access by the operating system. Because code is generally copied or generated in multiple locations, uninstallation usually involves more than just erasing the program folder. For example, registry entries and other system-level files may need to be modified or deleted for a complete uninstallation.


Overview
Some computer programs can be executed by simply copying them into a folder stored on a
computer and executing them. Other programs are supplied in a form unsuitable for immediate
execution and therefore need an installation procedure. Once installed, the program can be
executed again and again, without the need to reinstall before each execution.
Common operations performed during software installations include:
Making sure that necessary system requirements are met
Checking for existing versions of the software
Creating or updating program files and folders
Adding configuration data such as configuration files, Windows registry entries or environment
variables
Making the software accessible to the user, for instance by creating links,
shortcuts or bookmarks
Configuring components that run automatically, such as daemons or Windows services
Performing product activation
Updating the software versions
Necessity
As mentioned earlier, some computer programs need no installation. This was once usual for many programs which ran on DOS, Mac OS, Atari TOS and AmigaOS. As computing environments grew more complex and fixed hard drives replaced floppy disks, the need for a tangible installation procedure presented itself.
Nowadays, a class of modern applications that do not need installation are known as portable applications, as they may be moved to different computers and run directly. Similarly, there are live operating systems, which do not need installation and can be run directly from a bootable CD, DVD, or USB flash drive. Examples are AmigaOS 4.0, various Linux distributions, MorphOS and Mac OS versions 1.0 through 9.0. (See live CD and live USB.) Finally, web applications, which run inside a web browser, do not need installation.
Types
Attended installation
On Windows systems, this is the most common form of installation. An installation process usually
needs a user who attends it to make choices, such as accepting or declining an end-user license
agreement (EULA), specifying preferences such as the installation location, supplying passwords or
assisting in product activation. In graphical environments, installers that offer a wizard-based
interface are common. Attended installers may ask users to help mitigate errors. For instance, if the disk on which the computer program is being installed is full, the installer may ask the user to specify another target path or to clear enough space on the disk.
Silent installation
Installation that does not display messages or windows during its progress. "Silent installation" is not the same as "unattended installation" (see below): all silent installations are unattended, but not all unattended installations are silent. The reason behind a silent installation may be convenience or subterfuge. Malware is almost always installed silently.[citation needed]

Unattended installation
Installation that is performed without user interaction during its progress, or with no user present at all. One of the reasons to use this approach is to automate the installation of a large number of systems. An unattended installation either does not require the user to supply anything or has received all necessary input prior to the start of installation. Such input may be in the form of command line switches or an answer file, a file that contains all the necessary parameters. Windows XP and most Linux distributions are examples of operating systems that can be installed with an answer file. In unattended installation, it is assumed that there is no user to help mitigate errors. For instance, if the installation medium is faulty, the installer should fail the installation, as there is no user to fix the fault or replace the medium. Unattended installers may record errors in a computer log for later review.
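An answer file is typically just a flat list of the parameters an attended installer would otherwise prompt for. A minimal sketch of parsing one follows; the key names and format are invented for illustration, and real formats (such as Windows XP's unattend files) differ:

```python
def parse_answer_file(text: str) -> dict:
    """Parse 'key = value' lines into install parameters, ignoring
    blank lines and '#' comments; later keys override earlier ones."""
    answers = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        answers[key.strip().lower()] = value.strip()
    return answers

# Hypothetical answer file supplying everything an attended
# installer would otherwise ask for.
sample = """
# example answer file
AcceptEula = yes
TargetDir  = /opt/example
Locale     = en-US
"""
```

With all input gathered up front like this, the installer can run to completion with no user present.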
Headless installation
Installation performed without a computer monitor connected. In attended forms of headless installation, another machine connects to the target machine (for instance, via a local area network) and takes over the display output. Since a headless installation does not need a user at the location of the target computer, unattended headless installers may be used to install a program on multiple machines at the same time.
Scheduled or automated installation
An installation process that runs at a preset time or when a predefined condition transpires, as opposed to an installation process that starts explicitly on a user's command. For instance, a system administrator wishing to install a later version of a computer program that is being used can schedule that installation to occur when that program is not running. An operating system may automatically install a device driver for a device that the user connects. (See plug and play.) Malware may also be installed automatically. For example, the infamous Conficker was installed when the user plugged an infected device into their computer.
Clean installation
A clean installation is one that is done in the absence of any interfering elements such as old
versions of the computer program being installed or leftovers from a previous installation. In
particular, the clean installation of an operating system is an installation in which the target disk
partition is erased before installation. Since the interfering elements are absent, a clean installation
may succeed where an unclean installation may fail or may take significantly longer.
Network installation
Not to be confused with network booting.
Network installation, shortened netinstall, is an installation of a program from a shared network
resource that may be done by installing a minimal system before proceeding to download further
packages over the network. This may simply be a copy of the original media, but software publishers that offer site licenses for institutional customers may provide a version intended for installation over a network.
Installer
An installation program or installer is a computer program that installs files, such as applications,
drivers, or other software, onto a computer. Some installers are specifically made to install the files
they contain; other installers are general-purpose and work by reading the contents of the software
package to be installed.
The differences between a package management system and an installer are:

Package management system | Installer
Usually part of an operating system. | Each product comes bundled with its own installer.
Uses one installation database. | Performs its own installation, sometimes recording information about that installation in a registry.
Can verify and manage all packages on the system. | Works only with its bundled product.
One package management system vendor. | Multiple installer vendors.
One package format. | Multiple installation formats.
Bootstrapper
During the installation of computer programs it is sometimes necessary to update the installer or package manager itself. To make this possible, a technique called bootstrapping is used. The common pattern for this is to use a small executable file which updates the installer and starts the real installation after the update. This small executable is called a bootstrapper. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process.
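The pattern can be sketched as a small driver that updates the installer first, then hands off to it. The version tuples and the two step functions are hypothetical placeholders; a real bootstrapper would download the newer installer binary:

```python
def bootstrap(installed_version: tuple, latest_version: tuple,
              update_installer, run_installation) -> list:
    """Tiny bootstrapper: update the installer itself if it is out of
    date, then start the real installation. Returns the actions taken."""
    actions = []
    if installed_version < latest_version:
        update_installer()  # e.g. fetch and replace the installer binary
        actions.append("updated-installer")
    run_installation()      # hand off to the (possibly updated) installer
    actions.append("ran-installation")
    return actions
```

The same shape extends naturally to installing other prerequisites before the handoff.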
Common types
Cross-platform installer builders that produce installers for Windows, Mac OS X and Linux include InstallAnywhere (Flexera Software), JExpress (DeNova),[1] InstallBuilder (BitRock Inc.) and Install4J (ej-technologies).[2]
Installers for Microsoft Windows include Windows Installer, a software installation component. Additional third-party commercial tools for creating installers for Windows include InstallShield (Flexera Software), Advanced Installer (Caphyon Ltd),[3] InstallAware (InstallAware Software),[4] Wise Installation Studio (Wise Solutions, Inc.), SetupBuilder (Lindersoft, Inc.),[5] Installer VISE (MindVision Software), MSI Studio (ScriptLogic Corporation), Actual Installer (Softeza Development),[6] Smart Install Maker (InstallBuilders Company),[7] MSI Factory and Setup Factory (Indigo Rose Software), Visual Installer (SamLogic), Centurion Setup (Gammadyne Corporation),[8] Paquet Builder (G.D.G. Software),[9] and Xeam Visual Installer (Xeam).[10]
Free installer-authoring tools include NSIS, IzPack, Clickteam, InnoSetup, InstallSimple and WiX.
Mac OS X includes Installer, a native package manager. Mac OS X also includes a separate software-updating application, Software Update, but it only supports Apple and system software. Included in the Dock as of 10.6.6, the Mac App Store shares many attributes with the successful App Store for iOS devices, such as a similar app approval process, the use of Apple ID for purchases, and automatic installation and updating. Although this is Apple's preferred delivery method for Mac OS X,[11] previously purchased licenses cannot be transferred to the Mac App Store for downloading or automatic updating. Commercial applications for Mac OS X may also use a third-party installer, such as the Mac version of Installer VISE (MindVision Software) or InstallerMaker (StuffIt).
See also
Application virtualization
List of installation software
Package management system
Portable application
Pre-installed software
Software distribution
Uninstaller
Computer configuration
From Wikipedia, the free encyclopedia
In communications or computer systems, a configuration is an arrangement of functional
units according to their nature, number, and chief characteristics. Often, configuration pertains to the
choice of hardware, software, firmware, and documentation. The configuration affects system
function and performance.
See also
Auto-configuration
Configuration file - In software, a data resource used for program initialization
Configuration management - In multiple disciplines, a practice for managing change
Configure script (computing)


System monitoring

In systems engineering, a system monitor (SM) is a process within a distributed system for
collecting and storing state data. This is a fundamental principle supporting Application Performance
Management.
Overview
The argument that system monitoring is just a nice-to-have, and not really a core requirement for operational readiness, dissipates quickly when a critical application goes down with no warning.[1]
The configuration for the system monitor takes two forms:
1. configuration data for the monitor application itself, and
2. configuration data for the system being monitored. See: System configuration
The monitoring application needs information such as the log file path and the number of threads to run with. Once the application is running, it needs to know what to monitor and how to monitor it. Because the configuration data for what to monitor is needed in other areas of the system, such as deployment, the configuration data should not be tailored specifically for use by the system monitor, but should be a generalized system configuration model.
The performance of the monitoring system has two aspects:
Impact on the system domain (domain functionality): any element of the monitoring system that prevents the main domain functionality from working is inappropriate. Ideally the monitoring is a tiny fraction of each application's footprint, requiring simplicity. The monitoring function must be highly tunable to allow for such issues as network performance, improvements to applications in the development life-cycle, appropriate levels of detail, etc. Impact on the system's primary function must be considered.
Efficient monitoring (the ability to monitor efficiently): monitoring must be efficient, able to handle all monitoring goals in a timely manner, within the desired period. This is most related to scalability. Various monitoring modes are discussed below.
There are many issues involved with designing and implementing a system monitor. Here are a few
issues to be dealt with:
configuration
protocol
performance
data access
System monitor basics
Protocol
There are many tools for collecting system data from hosts and devices using SNMP (Simple Network Management Protocol).[2] Most computers and networked devices have some form of SNMP access. Interpretation of the SNMP data from a host or device requires either a specialized tool (typically extra software[3] from the vendor) or a Management Information Base (MIB), a mapping of commands/data references to the various data elements the host or device provides. The advantages of SNMP for monitoring are its low bandwidth requirements and its universal usage in the industry.
Unless an application itself provides a MIB and output via SNMP, SNMP is not suitable for collecting application data.
Other protocols are suitable for monitoring applications, such as CORBA (language/OS-independent), JMX (a Java-specific management and monitoring protocol), or proprietary TCP/IP or UDP protocols (language/OS-independent for the most part).
Data access
Data access refers to the interface by which the monitor data can be utilized by other processes. For example, if the system monitor is a CORBA server, clients can connect and make calls on the monitor for the current state of an element, or for historical states of an element over some time period.
The system monitor may be writing data directly into a database, allowing other processes to access the database outside the context of the system monitor. This is dangerous, however, as the table design for the database will dictate the potential for data-sharing. Ideally the system monitor is a wrapper for whatever persistence mechanism is used, providing a consistent and 'safe' access interface for others to access the data.
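The wrapper idea can be sketched with SQLite as the persistence mechanism. The schema and method names are illustrative assumptions; the point is that clients call the interface rather than the tables:

```python
import sqlite3
import time

class MonitorStore:
    """Sketch of a system-monitor data-access wrapper: clients call
    these methods instead of querying the tables directly, so the
    schema can change without breaking consumers."""

    def __init__(self, path=":memory:"):
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS state "
            "(element TEXT, ts REAL, value TEXT)")

    def record(self, element, value, ts=None):
        """Store one state sample for an element."""
        self._db.execute(
            "INSERT INTO state VALUES (?, ?, ?)",
            (element, ts if ts is not None else time.time(), value))

    def current_state(self, element):
        """Return the most recent state of an element, or None."""
        row = self._db.execute(
            "SELECT value FROM state WHERE element = ? "
            "ORDER BY ts DESC LIMIT 1", (element,)).fetchone()
        return row[0] if row else None

    def history(self, element, since):
        """Return all states of an element since a given timestamp."""
        return [v for (v,) in self._db.execute(
            "SELECT value FROM state WHERE element = ? AND ts >= ? "
            "ORDER BY ts", (element, since))]
```

Swapping SQLite for another database would then only change this class, not its callers.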
Mode
The data collection mode of the system monitor is critical. The modes are: monitor poll, agent push,
and a hybrid scheme.
Monitor poll
In this mode, one or more processes in the monitoring system actually poll the system
elements in some thread. During the loop, devices are polled via SNMP calls, hosts can be
accessed via Telnet/SSH to execute scripts or dump files or execute other OS-specific
commands, applications can be polled for state data, or their state-output-files can be
dumped.
The advantage of this mode is that there is little impact on the host/device being polled. The
host's CPU is loaded only during the poll. The rest of the time the monitoring function plays
no part in CPU loading.
The main disadvantage of this mode is that the monitoring process can only do so much in
its time. If polling takes too long, the intended poll-period gets elongated.
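The poll-period problem can be sketched as follows: the loop sleeps only for whatever is left of the period, so a cycle whose checks run over budget leaves no sleep time and the period is elongated. The element names and check function are placeholders:

```python
import time

def poll_cycle(elements, check, period):
    """Run one polling cycle: check every element, then report how
    long to sleep before the next cycle. If the checks took longer
    than the period, the remaining sleep is zero and the intended
    poll-period gets elongated."""
    start = time.monotonic()
    results = {name: check(name) for name in elements}
    elapsed = time.monotonic() - start
    sleep_for = max(0.0, period - elapsed)
    return results, sleep_for
```

A monitor would call this in a loop, sleeping for `sleep_for` between cycles.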
Agent push
In agent-push mode, the monitored host is simply pushing data from itself to the system
monitoring application. This can be done periodically, or by request from the system monitor
asynchronously.
The advantage of this mode is that the monitoring system's load can be reduced to simply
accepting and storing data. It doesn't have to worry about timeouts for SSH calls, parsing
OS-specific call results, etc.
The disadvantage of this mode is that the logic for the polling cycle/options are not
centralized at the system monitor, but distributed to each remote node. Thus changes to the
monitoring logic must be pushed out to each node.
Also, in agent-based monitoring, a host cannot report that it is completely "down" or powered off, or that an intermediary system (such as a router) is preventing access to it.
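Agent push can be sketched with a plain UDP datagram carrying a JSON sample, which matches the fire-and-forget character described above. The field names and transport choice are arbitrary for illustration:

```python
import json
import socket

def push_metric(host, port, element, value):
    """Agent side: push one state sample to the system monitor over
    UDP (fire-and-forget, as push agents often are)."""
    payload = json.dumps({"element": element, "value": value}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload

def receive_metric(sock):
    """Monitor side: accept and decode one pushed sample."""
    data, _addr = sock.recvfrom(4096)
    return json.loads(data.decode())
```

Note the drawback discussed above: if the agent's host is powered off, no datagram ever arrives, and the monitor must infer the outage from silence.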
Hybrid mode
The median mode between 'monitor-poll' and 'agent-push' is a hybrid approach, where the system configuration determines where monitoring occurs, either in the system monitor or an agent. Thus when applications come up, they can determine for themselves which system elements they are responsible for polling. Everything, however, must ultimately post its monitored data to the system monitor process.
This is especially useful when setting up a monitoring infrastructure for the first time and not all monitoring mechanisms have been implemented. The system monitor can do all the polling by whatever simple means are available. As the agents become smarter, they can take on more of the load.


Upgrade
For the facility that upgrades bitumen (extra heavy oil) into synthetic crude oil, see upgrader. For
academic upgrading, see remedial education.

Upgrading is the process of replacing a product with a newer version of the same product.
In computing and consumer electronics an upgrade is generally a replacement
of hardware, software or firmware with a newer or better version, in order to bring the system up to
date or to improve its characteristics.
Computing and consumer electronics
Examples of common hardware upgrades include installing additional memory (RAM), adding
larger hard disks, replacing microprocessor cards or graphics cards, and installing new versions of
software. Many other upgrades are often possible as well.
Common software upgrades include changing the version of an operating system, of an office suite,
of an anti-virus program, or of various other tools.
Common firmware upgrades include the updating of the iPod control menus, the Xbox
360 dashboard, or the non-volatile flash memory that contains the embedded operating system for
a consumer electronics device.
Users can often download software and firmware upgrades from the Internet. Often the download is a patch: it does not contain the new version of the software in its entirety, just the changes that need to be made. Software patches usually aim to improve functionality or solve problems with security. Rushed patches can cause more harm than good and are therefore sometimes regarded[by whom?] with scepticism for a short time after release (see "Risks").[1] Patches are generally free.
A software or firmware upgrade can be major or minor and the release version code-number
increases accordingly. A major upgrade will change the version number, whereas a minor update
will often append a ".01", ".02", ".03", etc. For example, "version 10.03" might designate the third
minor upgrade of version 10. In commercial software, the minor upgrades (or updates) are generally
free, but the major versions must be purchased. See also: sidegrade.
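The numbering convention can be sketched as follows. This reflects the common major.minor pattern described above, not a universal standard:

```python
def parse_version(text):
    """Split a 'major.minor' version string such as '10.03' into a
    comparable (major, minor) tuple of integers."""
    major, _, minor = text.partition(".")
    return int(major), int(minor or 0)

def is_major_upgrade(old, new):
    """A major upgrade changes the number before the dot; a minor
    update only bumps the part after it."""
    return parse_version(new)[0] > parse_version(old)[0]
```

Under the commercial-software convention above, `is_major_upgrade` would also decide whether the new version must be purchased.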
Risks
Although developers usually produce upgrades in order to improve a product, there are risks involved, including the possibility that the upgrade will worsen the product.
Upgrades of hardware involve a risk that new hardware will not be compatible with other pieces of
hardware in a system. For example, an upgrade of RAM may not be compatible with existing RAM in
a computer. Other hardware components may not be compatible after either an upgrade or
downgrade, due to the non-availability of compatible drivers for the hardware with a
specific operating system. Conversely, there is the same risk of incompatibility when software is upgraded or downgraded: previously functioning hardware may no longer function.
Upgrades of software introduce the risk that the new version (or patch) will contain a bug, causing the program to malfunction in some way or not to function at all. For example, in October 2005, a glitch in a software upgrade caused trading on the Tokyo Stock Exchange to shut down for most of the day.[2] Similar gaffes have occurred, from important government systems[3] to freeware on the internet.
Upgrades can also worsen a product subjectively. A user may prefer an older version even if a
newer version functions perfectly as designed.
A software update can be a downgrade from the point of view of the user, for example by removing features for marketing or copyright reasons; see OtherOS.
See also
Adaptation kit upgrade
Advanced Packaging Tool
Macintosh Processor Upgrade Card
Source upgrade
Windows Anytime Upgrade
Yellow dog Updater, Modified


System administrator
For the privileged user account, see superuser.




System administrator
A professional system administrator works at a server rack in a datacenter.
Names: system administrator, systems administrator, sysadmin, IT professional
Occupation type: profession
Activity sectors: information technology
Competencies: system administration, network management, analytical skills, critical thinking
Education required: varies from apprenticeship to a Master's degree
A system administrator, or sysadmin, is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems; especially multi-user computers, such as servers.
The system administrator seeks to ensure that the uptime, performance, resources, and security of the computers he or she manages meet the needs of the users, without exceeding the budget.
To meet these needs, a system administrator may acquire, install, or upgrade computer components
and software; automate routine tasks; write computer programs; troubleshoot; train and/or supervise
staff; and provide technical support.
The duties of a system administrator are wide-ranging, and vary widely from one organization to another. Sysadmins are usually charged with installing, supporting, and maintaining servers or other computer systems, and planning for and responding to service outages and other problems. Other duties may include scripting or light programming, and project management for systems-related projects.
Related fields
Many organizations staff other jobs related to system administration. In a larger company, these may
all be separate positions within a computer support or Information Services (IS) department. In a
smaller group they may be shared by a few sysadmins, or even a single person.
A database administrator (DBA) maintains a database system, and is responsible for the
integrity of the data and the efficiency and performance of the system.
A network administrator maintains network infrastructure such as switches and routers, and
diagnoses problems with these or with the behavior of network-attached computers.
A security administrator is a specialist in computer and network security, including the
administration of security devices such as firewalls, as well as consulting on general security
measures.
A web administrator maintains web server services (such as Apache or IIS) that allow for internal
or external access to web sites. Tasks include managing multiple sites, administering security,
and configuring necessary components and software. Responsibilities may also include
software change management.
A computer operator performs routine maintenance and upkeep, such as changing backup tapes or replacing failed drives in a RAID. Such tasks usually require physical presence in the room with the computer, and while less skilled than sysadmin tasks, they require a similar level of trust, since the operator has access to possibly sensitive data.
A postmaster administers a mail server.
A storage (SAN) administrator creates, provisions, adds, or removes storage to/from computer systems. Storage can be attached locally to the system or provided by a storage area network (SAN) or network-attached storage (NAS). The administrator also creates file systems from newly added storage.
In some organizations, a person may begin as a member of technical support staff or a computer
operator, then gain experience on the job to be promoted to a sysadmin position.
Training


System Administration Conference Training
Unlike many other professions, there is no single path to becoming a system administrator. Many
system administrators have a degree in a related field: computer science, information
technology, computer engineering, information systems, or even a trade school program. In addition, some companies now require an IT certification. Other schools have offshoots of their computer science program specifically for system administration.
Some schools have started offering undergraduate degrees in system administration. The first, Rochester Institute of Technology,[1] started in 1992. Others, such as Rensselaer Polytechnic Institute, University of New Hampshire,[2] Marist College, and Drexel University, have more recently offered degrees in information technology. Symbiosis Institute of Computer Studies and Research (SICSR) in Pune, India offers a Master's degree in Computer Applications with a specialization in system administration. The University of South Carolina[2] offers an Integrated Information Technology B.S. degree specializing in Microsoft product support.
As of 2011, only five U.S. universities, Rochester Institute of Technology,[3] Tufts,[4] Michigan Tech, and Florida State University,[5] have graduate programs in system administration.[citation needed] In Norway, there is a special English-taught MSc program organized by Oslo University College[6] in cooperation with Oslo University, named "Masters programme in Network and System Administration". There is also a "BSc in Network and System Administration"[7] offered by Gjøvik University College. The University of Amsterdam (UvA) offers a similar program in cooperation with Hogeschool van Amsterdam (HvA), named "Master System and Network Engineering". In Israel, the IDF's ntmm course is considered a prominent way to train system administrators.[8] However, many other schools offer related graduate degrees in fields such as network systems and computer security.
One of the primary difficulties with teaching system administration as a formal university discipline is that the industry and technology change much faster than the typical textbook and coursework certification process. By the time a new textbook has spent years working through approvals and committees, the specific technology for which it is written may have changed significantly or become obsolete.
In addition, because of the practical nature of system administration and the easy availability of open-source server software, many system administrators enter the field self-taught. Some learning institutions are reluctant to, in effect, teach hacking to undergraduate-level students.[citation needed]
Generally, a prospective system administrator will be required to have some experience with the computer system he or she is expected to manage. In some cases, candidates are expected to possess industry certifications such as the Microsoft MCSA, MCSE, MCITP, Red Hat RHCE, Novell CNA, CNE, Cisco CCNA or CompTIA's A+ or Network+, Sun Certified SCNA, and Linux Professional Institute, among others.
Sometimes, almost exclusively in smaller sites, the role of system administrator may be given to a skilled user in addition to or in place of his or her other duties. For instance, it is not unusual for a mathematics or computing teacher to serve as the system administrator of a secondary school.[citation needed]
Skills
The most important skill for a system administrator is problem solving. This can sometimes involve all sorts of constraints and stress: when a workstation or server goes down, the sysadmin is called to solve the problem. They must be able to quickly and correctly diagnose what is wrong and how best to fix it in a short time.


Microsoft System Administrator Badge
Some of this section is from the Occupational Outlook Handbook[dead link], 2010-11 Edition, which is in the public domain as a work of the United States Government.
The subject matter of system administration includes computer systems and the ways people use them in an organization. This entails a knowledge of operating systems and applications, as well as hardware and software troubleshooting, but also knowledge of the purposes for which people in the organization use the computers.
Perhaps the most important skill for a system administrator is problem solving, frequently under various sorts of constraints and stress. The sysadmin is on call when a computer system goes down or malfunctions, and must be able to quickly and correctly diagnose what is wrong and how best to fix it. They may also need teamwork and communication skills, as well as the ability to install and configure hardware and software.
System administrators are not software engineers or developers. It is not usually within their duties
to design or write new application software. However, sysadmins must understand the behavior of
software in order to deploy it and to troubleshoot problems, and generally know
several programming languages used for scripting or automation of routine tasks.
Particularly when dealing with Internet-facing or business-critical systems, a sysadmin must have a
strong grasp of computer security. This includes not merely deploying software patches, but also
preventing break-ins and other security problems with preventive measures. In some organizations,
computer security administration is a separate role responsible for overall security and the upkeep
of firewalls and intrusion detection systems, but all sysadmins are generally responsible for the
security of computer systems.
Duties
A system administrator's responsibilities might include:
Analyzing system logs and identifying potential issues with computer systems.
Introducing and integrating new technologies into existing data center environments.
Performing routine audits of systems and software.
Applying operating system updates, patches, and configuration changes.
Installing and configuring new hardware and software.
Adding, removing, or updating user account information, resetting passwords, etc.
Answering technical queries and assisting users.
Responsibility for security.
Responsibility for documenting the configuration of the system.
Troubleshooting any reported problems.
System performance tuning.
Ensuring that the network infrastructure is up and running.
Configuring, adding, and deleting file systems; knowledge of volume management tools such as Veritas (now Symantec), Solaris ZFS, and LVM.
User administration (setting up and maintaining accounts).
Maintaining systems.
Verifying that peripherals are working properly.
Quickly arranging repair for hardware in the event of hardware failure.
Monitoring system performance.
Creating file systems.
Installing software.
Creating backup and recovery policies.
Monitoring network communication.
Implementing the policies for the use of the computer system and network.
Setting up security policies for users. A sysadmin must have a strong grasp of computer security (e.g. firewalls and intrusion detection systems).
Password and identity management.
Sometimes maintaining website SSL certificates, working with a certificate authority.
Incident management using ticketing-system software.
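Routine duties such as analyzing system logs for potential issues are commonly automated with short scripts, which is one reason sysadmins generally know a scripting language. A minimal sketch follows; the "LEVEL service: message" log format is an invented example, not a real syslog parser:

```python
from collections import Counter

def scan_log(lines, threshold=3):
    """Count ERROR lines per service in 'LEVEL service: message'
    records and return the services at or above the threshold."""
    errors = Counter()
    for line in lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and parts[0] == "ERROR":
            errors[parts[1].rstrip(":")] += 1
    return [svc for svc, n in errors.items() if n >= threshold]
```

A script like this might run from cron and open a ticket for each flagged service.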
In larger organizations, some of the tasks above may be divided among different system
administrators or members of different organizational groups. For example, dedicated individuals
may apply all system upgrades, a Quality Assurance (QA) team may perform testing and validation,
and one or more technical writers may be responsible for all technical documentation written for a
company. System administrators, in larger organizations, tend not to be systems architects, system
engineers, or system designers.
In smaller organizations, the system administrator might also act as technical support, Database
Administrator, Network Administrator, Storage (SAN) Administrator or application analyst.
See also[edit]

Information technology portal
Computer Science portal
Application service management
Bastard Operator From Hell (BOFH)
DevOps
Forum administrator
Information technology operations
Large Installation System Administration Conference
League of Professional System Administrators
LISA (organization)
Professional certification (computer technology)
Superuser
Sysop
System Administrator Appreciation Day

Software maintenance
From Wikipedia, the free encyclopedia

Software maintenance in software engineering is the modification of a software product after
delivery to correct faults or to improve performance or other attributes.[1]
A common perception of maintenance is that it merely involves fixing defects. However, one study
indicated that the majority, over 80%, of the maintenance effort is used for non-corrective
actions.[2] This perception is perpetuated by users submitting problem reports that in reality are
functionality enhancements to the system. More recent studies put the bug-fixing proportion closer to
21%.[3]

Software maintenance and evolution of systems was first addressed by Meir M. Lehman in 1969.
Over a period of twenty years, his research led to the formulation of Lehman's Laws (Lehman 1997).
Key findings of his research include that maintenance is really evolutionary development and that
maintenance decisions are aided by understanding what happens to systems (and software) over
time. Lehman demonstrated that systems continue to evolve over time. As they evolve, they grow
more complex unless some action such as code refactoring is taken to reduce the complexity.
The key software maintenance issues are both managerial and technical. Key management issues
are: alignment with customer priorities, staffing, which organization does maintenance, estimating
costs. Key technical issues are: limited understanding, impact analysis, testing, maintainability
measurement.
Software maintenance is a very broad activity that includes error correction, enhancements of
capabilities, deletion of obsolete capabilities, and optimization. Because change is inevitable,
mechanisms must be developed for evaluating, controlling, and making modifications.
Any work done to change the software after it is in operation is considered to be maintenance
work. The purpose is to preserve the value of the software over time. The value can be enhanced by
expanding the customer base, meeting additional requirements, becoming easier to use, becoming more
efficient, and employing newer technology. Maintenance may span 20 years, whereas development may
take only 1-2 years.
Contents
1 Importance of software maintenance
2 Software maintenance planning
3 Software maintenance processes
4 Categories of maintenance in ISO/IEC 14764
5 See also
6 References
7 Further reading
8 External links
Importance of software maintenance[edit]
In the late 1970s, a famous and widely cited survey study by Lientz and Swanson exposed the very
high fraction of life-cycle costs that were being expended on maintenance. They categorized
maintenance activities into four classes:
Adaptive: modifying the system to cope with changes in the software environment (DBMS, OS)[4]
Perfective: implementing new or changed user requirements which concern functional enhancements to the software
Corrective: diagnosing and fixing errors, possibly ones found by users[4]
Preventive: increasing software maintainability or reliability to prevent problems in the future[4]
The survey showed that around 75% of the maintenance effort was spent on the first two types, and error
correction consumed about 21%. Many subsequent studies suggest a similar magnitude of the
problem. Studies show that the contribution of end users is crucial during the gathering and analysis
of new requirement data, and that this is the main cause of problems during software evolution and
maintenance. Software maintenance is important because it consumes a large part of the overall
lifecycle costs, and because the inability to change software quickly and reliably means that business
opportunities are lost.[5][6][7]

Impact of key adjustment factors on maintenance (sorted in order of maximum positive impact)
Maintenance Factors Plus Range
Maintenance specialists 35%
High staff experience 34%
Table-driven variables and data 33%
Low complexity of base code 32%
Y2K and special search engines 30%
Code restructuring tools 29%
Re-engineering tools 27%
High level programming languages 25%
Reverse engineering tools 23%
Complexity analysis tools 20%
Defect tracking tools 20%
Y2K mass update specialists 20%
Automated change control tools 18%
Unpaid overtime 18%
Quality measurements 16%
Formal base code inspections 15%
Regression test libraries 15%
Excellent response time 12%
Annual training of > 10 days 12%
High management experience 12%
HELP desk automation 12%
No error prone modules 10%
On-line defect reporting 10%
Productivity measurements 8%
Excellent ease of use 7%
User satisfaction measurements 5%
High team morale 5%
Sum 503%
Not only are error-prone modules troublesome, but many other factors can degrade performance
too. For example, very complex spaghetti code is quite difficult to maintain safely. A very common
situation which often degrades performance is lack of suitable maintenance tools, such as defect
tracking software, change management software, and test library software. The table below lists some of
these factors and their range of impact on software maintenance.
Impact of key adjustment factors on maintenance (sorted in order of maximum negative impact)
Maintenance Factors Minus Range
Error prone modules -50%
Embedded variables and data -45%
Staff inexperience -40%
High code complexity -30%
No Y2K or special search engines -28%
Manual change control methods -27%
Low level programming languages -25%
No defect tracking tools -24%
No Y2K mass update specialists -22%
Poor ease of use -18%
No quality measurements -18%
No maintenance specialists -18%
Poor response time -16%
No code inspections -15%
No regression test libraries -15%
No help desk automation -15%
No on-line defect reporting -12%
Management inexperience -15%
No code restructuring tools -10%
No annual training -10%
No reengineering tools -10%
No reverse-engineering tools -10%
No complexity analysis tools -10%
No productivity measurements -7%
Poor team morale -6%
No user satisfaction measurements -4%
No unpaid overtime 0%
Sum -500%[8]

Software maintenance planning[edit]
An integral part of software is maintenance, which requires an accurate maintenance plan to
be prepared during software development. The plan should specify how users will request modifications
or report problems. The budget should include resource and cost estimates, and a new decision should
be made for the development of every new system feature and its quality objectives. Software
maintenance, which can last for 5-6 years (or even decades) after the development process, calls
for an effective plan which can address the scope of software maintenance, the tailoring of the post-
delivery/deployment process, the designation of who will provide maintenance, and an estimate of
the life-cycle costs. The selection and enforcement of proper standards is a challenging task from
the early stages of software engineering, and one that has not been given due importance by the
stakeholders concerned.
Software maintenance processes[edit]
This section describes the six software maintenance processes:
1. The implementation process contains software preparation and transition activities, such as
the conception and creation of the maintenance plan; the preparation for handling problems
identified during development; and the follow-up on product configuration management.
2. The problem and modification analysis process, which is executed once the application has
become the responsibility of the maintenance group. The maintenance programmer must
analyze each request, confirm it (by reproducing the situation) and check its validity,
investigate it and propose a solution, document the request and the solution proposal, and
finally, obtain all the required authorizations to apply the modifications.
3. The process of implementing the modification itself.
4. The process of accepting the modification, by confirming the modified work with the
individual who submitted the request, in order to make sure the modification provided a
solution.
5. The migration process (platform migration, for example) is exceptional, and is not part of
daily maintenance tasks. If the software must be ported to another platform without any
change in functionality, this process will be used and a maintenance project team is likely to
be assigned to this task.
6. Finally, the last maintenance process, also an event which does not occur on a daily basis, is
the retirement of a piece of software.
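The problem and modification analysis process (step 2) moves a request through confirmation, solution proposal, authorization, implementation, and acceptance. That lifecycle can be sketched as a small state machine; the state names and allowed transitions below are an illustrative assumption for this sketch, not wording from any standard:

```python
# Illustrative lifecycle for a modification request (MR); the states and
# transitions are assumptions for this sketch, not ISO/IEC terminology.
TRANSITIONS = {
    "submitted": {"confirmed", "rejected"},
    "confirmed": {"solution_proposed"},
    "solution_proposed": {"authorized", "rejected"},
    "authorized": {"implemented"},
    "implemented": {"accepted", "rejected"},
}

class ModificationRequest:
    def __init__(self, description):
        self.description = description
        self.state = "submitted"

    def advance(self, new_state):
        """Move to `new_state` if the transition is allowed, else raise ValueError."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
```

Encoding the workflow this way makes the required authorizations explicit: a request cannot reach "implemented" without passing through "authorized" first.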
There are a number of processes, activities and practices that are unique to maintainers, for
example:
Transition: a controlled and coordinated sequence of activities during which a system is
transferred progressively from the developer to the maintainer;
Service Level Agreements (SLAs) and specialized (domain-specific) maintenance contracts
negotiated by maintainers;
Modification Request and Problem Report Help Desk: a problem-handling process used by
maintainers to prioritize, document, and route the requests they receive.
Categories of maintenance in ISO/IEC 14764[edit]
E.B. Swanson initially identified three categories of maintenance: corrective, adaptive, and
perfective.[9] These have since been updated, and ISO/IEC 14764 presents:
Corrective maintenance: Reactive modification of a software product performed after delivery to
correct discovered problems.
Adaptive maintenance: Modification of a software product performed after delivery to keep a
software product usable in a changed or changing environment.
Perfective maintenance: Modification of a software product after delivery to improve
performance or maintainability.
Preventive maintenance: Modification of a software product after delivery to detect and correct
latent faults in the software product before they become effective faults.
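A maintenance tracker might tag incoming change requests with these four categories. The following sketch uses a toy keyword heuristic invented purely for illustration; ISO/IEC 14764 defines the categories but no such classification rule:

```python
# The four ISO/IEC 14764 maintenance categories. The keyword heuristic below
# is an assumption for this sketch, not part of the standard.
CATEGORIES = ("corrective", "adaptive", "perfective", "preventive")

KEYWORDS = {
    "corrective": ("bug", "crash", "fault", "error"),
    "adaptive": ("migrate", "upgrade", "platform", "environment"),
    "perfective": ("performance", "optimize", "refactor", "usability"),
    "preventive": ("latent", "audit", "hardening", "lint"),
}

def classify(request_text):
    """Guess a maintenance category from a free-text request description."""
    text = request_text.lower()
    for category in CATEGORIES:
        if any(word in text for word in KEYWORDS[category]):
            return category
    return "corrective"  # default: treat unmatched reports as fault reports
```

A real tracker would let a maintainer confirm or override the guessed category, which matters because (as noted above) many "problem reports" are really enhancement requests.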
There is also a notion of pre-delivery/pre-release maintenance, which covers everything done before
delivery to lower the total cost of ownership of the software: compliance with coding standards that
include software maintainability goals, the management of coupling and cohesion of the software, and
the attainment of software supportability goals (SAE JA1004, JA1005 and JA1006, for example).
Some academic institutions are also carrying out research to quantify the cost of ongoing
software maintenance due to the lack of resources such as design documents and system/software
comprehension training (multiply costs by approximately 1.5-2.0 where no design data is available).
See also[edit]
Application retirement
Journal of Software Maintenance and Evolution: Research and Practice
Long-term support
Search-based software engineering
Software archaeology
Software maintainer
Software development

Computer security
See also: Cyber security and countermeasure

Computer security (also known as cybersecurity or IT security) is information security as applied
to computing devices such as computers and smartphones, as well as computer networks such as
private and public networks, including the Internet as a whole.
The field covers all the processes and mechanisms by which computer-based equipment,
information and services are protected from unintended or unauthorized access, change or
destruction. Computer security also includes protection from unplanned events and natural
disasters.
The worldwide security technology and services market is forecast to reach $67.2 billion in 2013, up
8.7 percent from $61.8 billion in 2012, according to Gartner, Inc.[1]

Contents
1 Vulnerabilities
o 1.1 Backdoors
o 1.2 Denial-of-service attack
o 1.3 Direct access attacks
o 1.4 Eavesdropping
o 1.5 Exploits
o 1.6 Indirect attacks
2 Social engineering and human error
3 Vulnerable areas
o 3.1 Cloud computing
o 3.2 Aviation
4 Financial cost of security breaches
o 4.1 Reasons
5 Computer protection
o 5.1 Security and systems design
o 5.2 Security measures
5.2.1 Difficulty with response
o 5.3 Reducing vulnerabilities
o 5.4 Security by design
o 5.5 Security architecture
o 5.6 Hardware protection mechanisms
o 5.7 Secure operating systems
o 5.8 Secure coding
o 5.9 Capabilities and access control lists
o 5.10 Hacking back
6 Notable computer breaches
o 6.1 Rome Laboratory
o 6.2 Robert Morris and the first computer worm
7 Legal issues and global regulation
8 Computer security policies
o 8.1 United States
8.1.1 Cybersecurity Act of 2010
8.1.2 International Cybercrime Reporting and Cooperation Act
8.1.3 Protecting Cyberspace as a National Asset Act of 2010
8.1.4 White House proposes cybersecurity legislation
o 8.2 Germany
8.2.1 Berlin starts National Cyber Defense Initiative
o 8.3 South Korea
9 The cyber security job market
10 Terminology
11 Scholars in the field
12 See also
13 References
14 Further reading
15 External links
o 15.1 Lists of currently known unpatched vulnerabilities
Vulnerabilities[edit]
Main article: Vulnerability (computing)
To understand the techniques for securing a computer system, it is important to first understand the
various types of "attacks" that can be made against it. These threats can typically be classified into
one of the following categories:
Backdoors[edit]
A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal
authentication, securing remote access to a computer, obtaining access to plaintext, and so on,
while attempting to remain undetected. The backdoor may take the form of an installed program
(e.g., Back Orifice), or could be a modification to an existing program or hardware device. A specific
form of backdoor is a rootkit, which replaces system binaries and/or hooks into the function calls of
an operating system to hide the presence of other programs, users, services and open ports. It may
also fake information about disk and memory usage.
Denial-of-service attack[edit]
Main article: Denial-of-service attack
Unlike other exploits, denial of service attacks are not used to gain unauthorized access or control of
a system. They are instead designed to render it unusable. Attackers can deny service to individual
victims, such as by deliberately entering a wrong password three consecutive times and thus
causing the victim account to be locked, or they may overload the capabilities of a machine or
network and block all users at once. These types of attack are, in practice, very hard to prevent,
because the behavior of whole networks needs to be analyzed, not only the behaviour of small
pieces of code. Distributed denial of service (DDoS) attacks are common, where a large number of
compromised hosts (commonly referred to as "zombie computers", used as part of a botnet with, for
example; a worm, trojan horse, or backdoor exploit to control them) are used to flood a target system
with network requests, thus attempting to render it unusable through resource exhaustion. Another
technique to exhaust victim resources is through the use of an attack amplifier, where the attacker
takes advantage of poorly designed protocols on third-party machines, such as FTP or DNS, in order
to instruct these hosts to launch the flood. There are also commonly found vulnerabilities in
applications that cannot be used to take control over a computer, but merely make the target
application malfunction or crash. This is known as a denial-of-service exploit.
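The account-lockout mechanism mentioned above, which an attacker can deliberately trigger by entering wrong passwords, can be sketched as follows. The three-attempt limit mirrors the example in the text; real systems also add time windows, progressive delays, and alerting:

```python
class LockoutPolicy:
    """Track consecutive failed logins and lock an account after a limit.

    A toy model of the lockout behaviour described above, not a production
    design: it has no time window, unlock procedure, or alerting.
    """

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = {}
        self.locked = set()

    def record_failure(self, account):
        """Count a failed login; lock the account once the limit is reached."""
        if account in self.locked:
            return
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_failures:
            self.locked.add(account)

    def record_success(self, account):
        """A successful login resets the consecutive-failure counter."""
        self.failures[account] = 0

    def is_locked(self, account):
        return account in self.locked
```

The sketch makes the denial-of-service vector concrete: an attacker who knows only the victim's username can lock the account by design, which is why lockout limits trade security against availability.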
Direct access attacks[edit]


Common consumer devices that can be used to transfer data surreptitiously.
Someone who has gained access to a computer can install different types of devices to compromise
security, including operating system modifications, software worms, key loggers, and covert listening
devices. The attacker can also easily download large quantities of data onto backup media such as
CD-R/DVD-R or tape, or onto portable devices such as key drives, digital cameras or digital audio
players. Another common technique is to boot an operating system contained on a CD-ROM or
other bootable media and read the data from the hard drive(s) this way. The only way to defeat this is
to encrypt the storage media and store the key separately from the system.
Eavesdropping[edit]
Eavesdropping is the act of surreptitiously listening to a private conversation, typically between hosts
on a network. For instance, programs such as Carnivore and NarusInsight have been used by
the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines that
operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped upon
via monitoring the faint electro-magnetic transmissions generated by the hardware such
as TEMPEST.
Exploits[edit]
Main article: Exploit (computer security)
An exploit (from the same word in the French language, meaning "achievement", or
"accomplishment") is a piece of software, a chunk of data, or a sequence of commands that takes
advantage of a software "bug" or "glitch" in order to cause unintended or unanticipated behavior to
occur on computer software, hardware, or something electronic (usually computerized). This
frequently includes such things as gaining control of a computer system or allowing privilege
escalation or a denial of service attack. Many development methodologies rely on testing to ensure
the quality of any code released; this process often fails to discover unusual potential exploits. The
term "exploit" generally refers to small programs designed to take advantage of a software flaw that
has been discovered, either remote or local. The code from the exploit program is frequently reused
in trojan horses and computer viruses. In some cases, a vulnerability can lie in certain programs'
processing of a specific file type, such as a non-executable media file. Some security web sites
maintain lists of currently known unpatched vulnerabilities found in common programs (see "External
links" below).
Indirect attacks[edit]
An indirect attack is an attack launched by a third-party computer. By using someone else's
computer to launch an attack, it becomes far more difficult to track down the actual attacker. There
have also been cases where attackers took advantage of public anonymizing systems, such as
the Tor onion router system.
Social engineering and human error[edit]
Main article: Social engineering (security)
See also: Category:Cryptographic attacks
A computer system is no more secure than the human systems responsible for its operation.
Malicious individuals have regularly penetrated well-designed, secure computer systems by taking
advantage of the carelessness of trusted individuals, or by deliberately deceiving them, for example
sending messages that they are the system administrator and asking for passwords. This deception
is known as social engineering.
In the world of information technology there are different types of cyber attack, such as code injection
into a website or the use of malware (malicious software) such as viruses and trojans. Attacks of these
kinds are counteracted by managing or repairing the damaged product. But there is one last type,
social engineering, which does not directly affect the computers but instead their users, who are
often called "the weakest link". This type of attack can achieve results similar to other classes of
cyber attack by going around the infrastructure established to resist malicious software; because it is
harder to anticipate or prevent, it is often a more efficient attack vector.
The main goal is to convince the user by psychological means to disclose personal information such
as passwords or card numbers, for example by impersonating a service company or a bank.[2]

Vulnerable areas[edit]
Computer security is critical in almost any technology-driven industry which operates on computer
systems. Addressing the issues of computer-based systems and their countless vulnerabilities is an
integral part of maintaining an operational industry.[3]

Cloud computing[edit]
Security in the cloud is challenging, due to varied degrees of security features and
management schemes within the cloud entities. In this connection, one logical protocol base needs to
evolve so that the entire gamut of components operates synchronously and securely.

Aviation[edit]
The aviation industry is especially important when analyzing computer security because the involved
risks include human life, expensive equipment, cargo, and transportation infrastructure. Security can
be compromised by hardware and software malpractice, human error, and faulty operating
environments. Threats that exploit computer vulnerabilities can stem from sabotage, espionage,
industrial competition, terrorist attack, mechanical malfunction, and human error.[4]

The consequences of a successful deliberate or inadvertent misuse of a computer system in the
aviation industry range from loss of confidentiality to loss of system integrity, which may lead to more
serious concerns such as data exfiltration (theft or loss) and network and air traffic control outages,
which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military systems
that control munitions can pose an even greater risk.
An effective attack does not need to be very high tech or well funded; a power outage at an airport
alone can cause repercussions worldwide.[5] One of the easiest, and arguably most difficult to trace,
security vulnerabilities is the transmission of unauthorized communications over specific radio
frequencies. These transmissions may spoof air traffic controllers or simply disrupt communications
altogether. Such incidents have altered the flight courses of commercial aircraft and caused panic and
confusion in the past. Controlling aircraft over oceans is especially dangerous because radar
surveillance only extends 175 to 225 miles offshore; beyond the radar's sight, controllers must rely on
periodic radio communications with a third party.
Lightning, power fluctuations, surges, brownouts, blown fuses, and various other power outages
instantly disable all computer systems, since they are dependent on an electrical source. Other
accidental and intentional faults have caused significant disruption of safety critical systems
throughout the last few decades and dependence on reliable communication and electrical power
only jeopardizes computer safety.

Financial cost of security breaches[edit]
Serious financial damage has been caused by security breaches, but because there is no standard
model for estimating the cost of an incident, the only data available is that which is made public by
the organizations involved. Several computer security consulting firms produce estimates of total
worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The
2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion
(for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying
methodology is basically anecdotal.[6]

Insecurities in operating systems have led to a massive black market for rogue software.
An attacker can use a security hole to install software that tricks the user into buying a product. At
that point, an affiliate program pays the affiliate responsible for generating that installation about $30.
The software is sold for between $50 and $75 per license.[7]

Reasons[edit]
There are many similarities (yet many fundamental differences) between computer and physical
security. Just like real-world security, the motivations for breaches of computer security vary
between attackers, sometimes called hackers or crackers. Some are thrill-seekers or vandals (the
kind often responsible for defacing web sites); similarly, some web site defacements are done to
make political statements. However, some attackers are highly skilled and motivated with the goal of
compromising computers for financial gain or espionage. An example of the latter
is Markus Hess (more diligent than skilled), who spied for the KGB and was ultimately caught
because of the efforts of Clifford Stoll, who wrote a memoir, The Cuckoo's Egg, about his
experiences.
For those seeking to prevent security breaches, the first step is usually to attempt to identify what
might motivate an attack on the system, how much the continued operation and information security
of the system are worth, and who might be motivated to breach it. The precautions required for a
home personal computer are very different from those of banks' Internet banking systems, and
different again for a classified military network. Other computer security writers suggest that, since
an attacker using a network need know nothing about you or what you have on your computer,
attacker motivation is inherently impossible to determine beyond guessing. If true, blocking all
possible attacks is the only plausible action to take.
Computer protection[edit]
There are numerous ways to protect computers, including utilizing security-aware design techniques,
building on secure operating systems and installing hardware devices designed to protect the
computer systems.
Security and systems design[edit]
Although there are many aspects to take into consideration when designing a computer system,
security can prove to be very important. According to Symantec, in 2010, 94 percent of organizations
polled expected to implement security improvements to their computer systems, with 42 percent
claiming cyber security as their top risk.[8]

At the same time, many organizations are improving security and many types of cyber criminals are
finding ways to continue their activities. Almost every type of cyber attack is on the rise. In 2009
respondents to the CSI Computer Crime and Security Survey admitted
that malware infections, denial-of-service attacks, password sniffing, and web site defacements were
significantly higher than in the previous two years.[9]

Security measures[edit]
A state of computer "security" is the conceptual ideal, attained by the use of the three processes:
threat prevention, detection and response. These processes are based on various policies and
system components, which include the following:
User account access controls and cryptography can protect systems files and data, respectively.
Firewalls are by far the most common prevention systems from a network security perspective
as they can (if properly configured) shield access to internal network services, and block certain
kinds of attacks through packet filtering. Firewalls can be either hardware- or software-based.
Intrusion Detection Systems (IDSs) are designed to detect network attacks in progress and
assist in post-attack forensics, while audit trails and logs serve a similar function for individual
systems.
"Response" is necessarily defined by the assessed security requirements of an individual
system and may cover the range from simple upgrade of protections to notification
of legal authorities, counter-attacks, and the like. In some special cases, a complete destruction
of the compromised system is favored, as it may happen that not all the compromised resources
are detected.
Today, computer security comprises mainly "preventive" measures, like firewalls or an exit
procedure. A firewall can be defined as a way of filtering network data between a host or a network
and another network, such as the Internet, and can be implemented as software running on the
machine, hooking into the network stack (or, in the case of most UNIX-based operating systems
such as Linux, built into the operating system kernel) to provide real time filtering and blocking.
Another implementation is a so-called physical firewall which consists of a separate machine filtering
network traffic. Firewalls are common amongst machines that are permanently connected to
the Internet.
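Packet filtering of the kind described above can be sketched as an ordered rule list with a deny-by-default policy. The rule format here is an assumption made for this example and far simpler than real firewall configuration languages such as iptables or pf:

```python
# Each rule matches on (protocol, destination port) and carries an action.
# The first matching rule wins; the default policy is to drop, a common
# "deny by default" design choice.
RULES = [
    ("tcp", 22, "accept"),   # allow SSH
    ("tcp", 443, "accept"),  # allow HTTPS
    ("tcp", 23, "drop"),     # explicitly drop telnet
]

def filter_packet(protocol, dst_port, rules=RULES, default="drop"):
    """Return the action for a packet: first matching rule, else the default."""
    for rule_proto, rule_port, action in rules:
        if protocol == rule_proto and dst_port == rule_port:
            return action
    return default
```

Real firewalls match on far more fields (source address, connection state, interface), but the core idea is the same: an ordered rule list evaluated per packet, with everything unmatched falling through to the default policy.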
However, relatively few organisations maintain computer systems with effective detection systems,
and fewer still have organised response mechanisms in place. As a result, as Reuters points out:
"Companies for the first time report they are losing more through electronic theft of data than
physical stealing of assets."[10] The primary obstacle to effective eradication of cyber crime could be
traced to excessive reliance on firewalls and other automated "detection" systems. Yet it is basic
evidence gathering by using packet capture appliances that puts criminals behind bars.
Difficulty with response[edit]
Responding forcefully to attempted security breaches (in the manner that one would for attempted
physical security breaches) is often very difficult for a variety of reasons:
Identifying attackers is difficult, as they are often in a different jurisdiction to the systems they
attempt to breach, and operate through proxies, temporary anonymous dial-up accounts,
wireless connections, and other anonymising procedures which make backtracing difficult and
are often located in yet another jurisdiction. If they successfully breach security, they are often
able to delete logs to cover their tracks.
The sheer number of attempted attacks is so large that organisations cannot spend time
pursuing each attacker (a typical home user with a permanent (e.g., cable modem) connection
will be attacked at least several times per day, so more attractive targets could be presumed to
see many more). Note, however, that the bulk of these attacks are made by
automated vulnerability scanners and computer worms.
Law enforcement officers are often unfamiliar with information technology, and so lack the skills
and interest in pursuing attackers. There are also budgetary constraints. It has been argued that
the high cost of technology, such as DNA testing and improved forensics, means less money for
other kinds of law enforcement, so the overall rate of criminals not being dealt with goes up as
the cost of the technology increases. In addition, the identification of attackers across a network
may require logs from various points in the network and in many countries, the release of these
records to law enforcement (with the exception of being voluntarily surrendered by a network
administrator or a system administrator) requires a search warrant and, depending on the
circumstances, the legal proceedings required can be drawn out to the point where the records
are either regularly destroyed, or the information is no longer relevant.
Reducing vulnerabilities[edit]
Computer code is regarded by some as a form of mathematics. It is theoretically possible
to prove the correctness of certain classes of computer programs, though the feasibility of actually
achieving this in large-scale practical systems is regarded as small by some with practical
experience in the industry; see Bruce Schneier et al.
It is also possible to protect messages in transit (i.e., communications) by means of cryptography.
One method of encryption, the one-time pad, is unbreakable when correctly used. This method
was used by the Soviet Union during the Cold War, though flaws in their implementation allowed
some cryptanalysis; see the Venona project. The method uses a matching pair of key-codes,
securely distributed, which are used once-and-only-once to encode and decode a single message.
For transmitted computer data this method is difficult to use properly (securely), and highly
inconvenient as well. Other methods of encryption, while breakable in theory, are often virtually
impossible to directly break by any means publicly known today. Breaking them requires some non-
cryptographic input, such as a stolen key, stolen plaintext (at either end of the transmission), or
some other extra cryptanalytic information.
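The one-time pad described above can be sketched in a few lines: encryption XORs the message with a truly random key of equal length, and the same operation decrypts. This only illustrates the principle; real use would also require the secure, single-use key distribution the text mentions.

```python
import secrets

# One-time pad sketch: XOR with a random, single-use key the same length
# as the message. Security depends on the key being truly random, secret,
# and never reused.

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message), "key must match message length"
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(msg))   # stands in for the securely distributed pad
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg    # the intended recipient recovers the message
```

Reusing the key for a second message is exactly the implementation flaw that enabled the Venona cryptanalysis mentioned above.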
Social engineering and direct computer access (physical) attacks can only be prevented by non-
computer means, which can be difficult to enforce, relative to the sensitivity of the information. Even
in a highly disciplined environment, such as in military organizations, social engineering attacks can
still be difficult to foresee and prevent.
In practice, only a small fraction of computer program code is mathematically proven, or even goes
through comprehensive information technology audits or inexpensive but extremely
valuable computer security audits, so it is usually possible for a determined hacker to read, copy,
alter or destroy data in well secured computers, albeit at the cost of great time and resources. Few
attackers would audit applications for vulnerabilities just to attack a single specific system. It is
possible to reduce an attacker's chances by keeping systems up to date, using a security scanner
and/or hiring competent people responsible for security. The effects of data loss/damage can be
reduced by careful backing up and insurance.
Security by design[edit]
Main article: Secure by design
Security by design, or alternately secure by design, means that the software has been designed
from the ground up to be secure. In this case, security is considered as a main feature.
Some of the techniques in this approach include:
The principle of least privilege, where each part of the system has only the privileges that are
needed for its function. That way even if an attacker gains access to that part, they have only
limited access to the whole system.
Automated theorem proving to prove the correctness of crucial software subsystems.
Code reviews and unit testing, approaches to make modules more secure where formal
correctness proofs are not possible.
Defense in depth, where the design is such that more than one subsystem needs to be violated
to compromise the integrity of the system and the information it holds.
Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-safe for
the equivalent in safety engineering). Ideally, a secure system should require a deliberate,
conscious, knowledgeable and free decision on the part of legitimate authorities in order to make
it insecure.
Audit trails tracking system activity, so that when a security breach occurs, the mechanism and
extent of the breach can be determined. Storing audit trails remotely, where they can only be
appended to, can keep intruders from covering their tracks.
Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept as short
as possible when bugs are discovered.
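The "fail secure" principle from the list above can be shown in a small sketch: if the authorization check itself errors for any reason, access is denied rather than granted. The function and data names here are hypothetical.

```python
# Fail-secure sketch: an error inside the access check must result in
# denial, never in access. The ACL structure and names are illustrative.

def is_authorized(user, resource, acl):
    try:
        return resource in acl[user]  # raises KeyError for unknown users
    except Exception:
        return False                  # fail secure: deny on any error

acl = {"alice": {"report.pdf"}}

assert is_authorized("alice", "report.pdf", acl) is True
assert is_authorized("alice", "other.pdf", acl) is False
assert is_authorized("mallory", "report.pdf", acl) is False  # unknown user: denied
```

A "fail insecure" variant would return `True` or crash with privileges intact when the lookup fails; defaulting to denial is what the list item means by secure settings.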
Security architecture[edit]
The Open Security Architecture organization defines IT security architecture as "the
design artifacts that describe how the security controls (security countermeasures) are positioned,
and how they relate to the overall information technology architecture. These controls serve the
purpose to maintain the system's quality attributes: confidentiality, integrity, availability,
accountability and assurance services".[11]
Hardware protection mechanisms[edit]
See also: Computer security compromised by hardware failure
While hardware may be a source of insecurity, such as with microchip vulnerabilities maliciously
introduced during the manufacturing process,[12][13] hardware-based or assisted computer security
also offers an alternative to software-only computer security. Using devices and methods such
as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and
mobile-enabled access may be considered more secure due to the physical access (or
sophisticated backdoor access) required in order to be compromised. Each of these is covered in
more detail below.
USB dongles are typically used in software licensing schemes to unlock software
capabilities,[14] but they can also be seen as a way to prevent unauthorized access to a
computer or other device's software. The dongle, or key, essentially creates a secure encrypted
tunnel between the software application and the key. The principle is that an encryption scheme
on the dongle, such as Advanced Encryption Standard (AES) provides a stronger measure of
security, since it is harder to hack and replicate the dongle than to simply copy the native
software to another machine and use it. Another security application for dongles is to use them
for accessing web-based content such as cloud software or Virtual Private
Networks (VPNs).[15] In addition, a USB dongle can be configured to lock or unlock a computer.[16]
Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto
access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs
used in conjunction with server-side software offer a way to detect and authenticate hardware
devices, preventing unauthorized network and data access.[17]
Computer case intrusion detection refers to a push-button switch which is triggered when a
computer case is opened. The firmware or BIOS is programmed to show an alert to the operator
when the computer is booted up the next time.
Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to
thieves.[18] Tools exist specifically for encrypting external drives as well.[19]
Disabling USB ports is a security option for preventing unauthorized and malicious access to an
otherwise secure computer. Infected USB dongles connected to a network from a computer
inside the firewall are considered by Network World as the most common hardware threat facing
computer networks.[20]
Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell
phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy (LE), Near field
communication (NFC) on non-iOS devices and biometric validation such as thumbprint readers,
as well as QR code reader software designed for mobile devices, offer new, secure ways for
mobile phones to connect to access control systems. These control systems provide computer
security and can also be used for controlling access to secure buildings.[21]
Secure operating systems[edit]
Main article: Security-focused operating system
One use of the term "computer security" refers to technology that is used to implement
secure operating systems. Much of this technology is based on science developed in the 1980s and
used to produce what may be some of the most impenetrable operating systems ever. Though still
valid, the technology is in limited use today, primarily because it imposes some changes to system
management and also because it is not widely understood. Such ultra-strong secure operating
systems are based on operating system kernel technology that can guarantee that certain security
policies are absolutely enforced in an operating environment. An example of such a Computer
security policy is the Bell-LaPadula model. The strategy is based on a coupling of
special microprocessor hardware features, often involving the memory management unit, to a
special correctly implemented operating system kernel. This forms the foundation for a secure
operating system which, if certain critical parts are designed and implemented correctly, can ensure
the absolute impossibility of penetration by hostile elements. This capability is enabled because the
configuration not only imposes a security policy, but in theory completely protects itself from
corruption. Ordinary operating systems, on the other hand, lack the features that assure this
maximal level of security. The design methodology to produce such secure systems is precise,
deterministic and logical.
Systems designed with such methodology represent the state of the art of computer
security although products using such security are not widely known. In sharp contrast to most kinds
of software, they meet specifications with verifiable certainty comparable to specifications for size,
weight and power. Secure operating systems designed this way are used primarily to protect
national security information, military secrets, and the data of international financial institutions.
These are very powerful security tools and very few secure operating systems have been certified at
the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified"
(including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance
of security depends not only on the soundness of the design strategy, but also on the assurance of
correctness of the implementation, and therefore there are degrees of security strength defined for
COMPUSEC. The Common Criteria quantifies security strength of products in terms of two
components, security functionality and assurance level (such as EAL levels), and these are specified
in a Protection Profile for requirements and a Security Target for product descriptions. None of these
ultra-high assurance secure general purpose operating systems have been produced for decades or
certified under Common Criteria.
In USA parlance, the term High Assurance usually suggests the system has the right security
functions that are implemented robustly enough to protect DoD and DoE classified information.
Medium assurance suggests it can protect less valuable information, such as income tax
information. Secure operating systems designed to meet medium robustness levels of security
functionality and assurance have seen wider use within both government and commercial markets.
Medium robust systems may provide the same security functions as high assurance secure
operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or
EAL5). Lower assurance levels mean less certainty that the security functions are implemented
flawlessly, and hence less dependability. These systems are found in use on web servers, guards,
database servers, and management hosts and are used not only to protect the data stored on these
systems but also to provide a high level of protection for network connections and routing services.
Secure coding[edit]
Main article: Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a
domain for its own execution, and capable of protecting application code from malicious subversion,
and capable of protecting the system from subverted code, then high degrees of security are
understandably not possible. While such secure operating systems are possible and have been
implemented, most commercial systems fall in a 'low security' category because they rely on
features not supported by secure operating systems (like portability, and others). In low security
operating environments, applications must be relied on to participate in their own protection. There
are 'best effort' secure coding practices that can be followed to make an application more resistant to
malicious subversion.
In commercial environments, the majority of software subversion vulnerabilities result from a few
known kinds of coding defects. Common software defects include buffer overflows, format string
vulnerabilities, integer overflow, and code/command injection. These defects can be used to cause
the target system to execute putative data. However, the "data" contain executable instructions,
allowing the attacker to gain control of the processor.
Some common languages such as C and C++ are vulnerable to all of these defects (see
Seacord, "Secure Coding in C and C++").[22] Other languages, such as Java, are more resistant to
some of these defects, but are still prone to code/command injection and other software defects
which facilitate subversion.
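Of the defect classes listed above, code/command injection is the easiest to illustrate in a memory-safe language; buffer overflows and dangling pointers require C or C++. The sketch below contrasts an unsafe shell-string construction with a safe argument vector; the command and file names are illustrative only.

```python
# Command-injection sketch: when user-supplied "data" is spliced into a
# shell command string, it can carry executable instructions; passing an
# argument list to subprocess.run() instead keeps it inert.

def build_grep_unsafe(pattern, path):
    # VULNERABLE if executed via a shell: ';' in the pattern splits off a
    # second, attacker-chosen command
    return "grep " + pattern + " " + path

def build_grep_safe(pattern, path):
    # Safe form: each value remains a single argument; no shell parses it
    return ["grep", "--", pattern, path]

malicious = "x; rm -rf ~"
print(build_grep_unsafe(malicious, "log.txt"))  # injected command visible in the string
print(build_grep_safe(malicious, "log.txt"))    # pattern stays one inert list element
```

The "data" carrying instructions here is exactly the mechanism the paragraph above describes: the attacker's input crosses from data into code because the shell re-parses the string.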
Another bad coding practice occurs when an object is deleted during normal operation yet the
program neglects to update any of the associated memory pointers, potentially causing system
instability when that location is referenced again. This is called a dangling pointer, and the first
known exploit for this particular problem was presented in July 2007. Before this publication the
problem was known but considered to be academic and not practically exploitable.[23]
Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically
achievable, insofar as code (ideally read-only) and data (generally read/write) tend to have some
form of defect.
Capabilities and access control lists[edit]
Main articles: Access control list and Capability (computers)
Within computer systems, two security models capable of enforcing privilege separation are access
control lists (ACLs) and capability-based security. Using ACLs to confine programs has been proven
to be insecure in many situations, such as if the host computer can be tricked into indirectly allowing
restricted file access, an issue known as the confused deputy problem. It has also been shown that
the promise of ACLs of giving access to an object to only one person can never be guaranteed in
practice. Both of these problems are resolved by capabilities. This does not mean practical flaws
exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility
to ensure that they do not introduce flaws.
Capabilities have been mostly restricted to research operating systems, while commercial OSs still
use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style
of programming that is essentially a refinement of standard object-oriented design. An open source
project in the area is the E language.
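The contrast with ACLs can be shown with a toy capability: a program can only use object references it has explicitly been handed, so there is no ambient authority for a confused deputy to misuse. The class and function names below are invented for this sketch and are not taken from any real capability system.

```python
# Capability sketch: an unforgeable reference that bundles designation
# (which file) with authority (permission to read it).

class Capability:
    def __init__(self, name, contents):
        self._name, self._contents = name, contents
    def read(self):
        return self._contents

def compile_report(source_cap):
    # The "deputy" can read only the capability it was given; it has no
    # way to name, let alone open, any other object in the system.
    return source_cap.read().upper()

public = Capability("report.txt", "quarterly numbers")
secret = Capability("billing.db", "card data")   # never passed to the deputy

print(compile_report(public))   # QUARTERLY NUMBERS
```

Under an ACL model the deputy would open files by name with its own (possibly broader) permissions, which is precisely what the confused deputy problem exploits.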
The most secure computers are those not connected to the Internet and shielded from any
interference. In the real world, the most secure systems are operating systems where security is not
an add-on.
Hacking back[edit]
There has been a significant debate regarding the legality of hacking back against digital attackers
(who attempt to or successfully breach an individual's, entity's, or nation's computer). The arguments
for such counter-attacks are based on notions of equity, active defense, vigilantism, and
the Computer Fraud and Abuse Act (CFAA). The arguments against the practice are primarily based
on the legal definitions of "intrusion" and "unauthorized access", as defined by the CFAA. As of
October 2012, the debate is ongoing.[24]
Notable computer breaches[edit]
Several notable computer breaches are discussed below.
Rome Laboratory[edit]
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory,
the US Air Force's main command and research facility. Using trojan horses, hackers were able to
obtain unrestricted access to Rome's networking systems and remove traces of their activities. The
intruders were able to obtain classified files, such as air tasking order systems data and furthermore
able to penetrate connected networks of National Aeronautics and Space Administration's Goddard
Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private
sector organizations, by posing as a trusted Rome center user.[25]
Robert Morris and the first computer worm[edit]
One event shows what mainstream generative technology leads to in terms of online security
breaches, and is also the story of the Internet's first worm.
In 1988, 60,000 computers were connected to the Internet, but not all of them were PCs. Most were
mainframes, minicomputers and professional workstations. On November 2, 1988, the computers
acted strangely. They started to slow down, because they were running a malicious code that
demanded processor time and that spread itself to other computers. The purpose of such software
was to transmit a copy to the machines and run in parallel with existing software and repeat all over
again. It exploited a flaw in a common e-mail transmission program running on a computer by
rewriting it to facilitate its entrance, or it guessed users' passwords because, at that time, passwords
were simple (e.g. username 'harry' with a password '...harry') or were obviously related to a list of
432 common passwords tested at each computer.[26]
The software was traced back to 23-year-old Cornell University graduate student Robert Tappan
Morris, Jr. When questioned about the motive for his actions, Morris said 'he wanted to count how
many machines were connected to the Internet'.[26] His explanation was verified with his code,
which nevertheless turned out to be buggy.
Legal issues and global regulation[edit]
Some of the main challenges and complaints about the antivirus industry are the lack of global web
regulations, a global base of common rules to judge, and eventually punish, cyber crimes and cyber
criminals. In fact, nowadays, even if an antivirus firm locates the cyber criminal behind the creation
of a particular virus or piece of malware, or one form of cyber attack, often the local authorities
cannot take action due to lack of laws under which to prosecute.[27][28] This is mainly caused by the
fact that many countries have their own regulations regarding cyber crimes.
"[Computer viruses] switch from one country to another, from one jurisdiction to another moving
around the world, using the fact that we don't have the capability to globally police operations like
this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the
world."[27] (Mikko Hyppönen)
Businesses are eager to expand to less developed countries due to the low cost of labor, says White
et al. (2012). However, these countries are the ones with the fewest Internet safety measures, and
their Internet Service Providers are not so focused on implementing those safety measures. Instead,
they put their main focus on expanding their business, which exposes them to an increase in
criminal activity.[29]
In response to the growing problem of cyber crime, the European Commission established
the European Cybercrime Centre (EC3).[30] The EC3 effectively opened on 1 January 2013 and will
be the focal point in the EU's fight against cyber crime, contributing to faster reaction to online
crimes. It will support member states and the EU's institutions in building an operational and
analytical capacity for investigations, as well as cooperation with international partners.[31]
Computer security policies[edit]
Country-specific computer security policies are discussed below.
United States[edit]
See also: Cyber security standards
Cybersecurity Act of 2010[edit]
On July 1, 2009, Senator Jay Rockefeller (D-WV) introduced the "Cybersecurity Act of 2009 - S.
773"[32] in the Senate; the bill, co-written with Senators Evan Bayh (D-IN), Barbara Mikulski (D-MD),
Bill Nelson (D-FL), and Olympia Snowe (R-ME), was referred to the Committee on Commerce,
Science, and Transportation, which approved a revised version of the same bill (the "Cybersecurity
Act of 2010") on March 24, 2010.[33] The bill seeks to increase collaboration between the public and
The bill seeks to increase collaboration between the public and
the private sector on cybersecurity issues, especially those private entities that own infrastructures
that are critical to national security interests (the bill quotes John Brennan, the Assistant to the
President for Homeland Security and Counterterrorism: "our nation's security and economic
prosperity depend on the security, stability, and integrity of communications and information
infrastructure that are largely privately owned and globally operated" and talks about the country's
response to a "cyber-Katrina"),[34] increase public awareness on cybersecurity issues, and foster and
fund cybersecurity research. Some of the most controversial parts of the bill include Paragraph 315,
which grants the President the right to "order the limitation or shutdown of Internet traffic to and from
any compromised Federal Government or United States critical infrastructure information system or
network."[34] The Electronic Frontier Foundation, an international non-profit digital rights advocacy
and legal organization based in the United States, characterized the bill as promoting a "potentially
dangerous approach that favors the dramatic over the sober response".[35]
International Cybercrime Reporting and Cooperation Act[edit]
On March 25, 2010, Representative Yvette Clarke (D-NY) introduced the "International Cybercrime
Reporting and Cooperation Act - H.R.4962"[36] in the House of Representatives; the bill,
co-sponsored by seven other representatives (only one of them a Republican), was referred to
three House committees.[37] The bill seeks to make sure that the administration
The bill seeks to make sure that the administration
keeps Congress informed on information infrastructure, cybercrime, and end-user protection
worldwide. It also "directs the President to give priority for assistance to improve legal, judicial, and
enforcement capabilities with respect to cybercrime to countries with low information and
communications technology levels of development or utilization in their critical infrastructure,
telecommunications systems, and financial industries"[37] as well as to develop an action plan and an
annual compliance assessment for countries of "cyber concern".[37]
Protecting Cyberspace as a National Asset Act of 2010[edit]
On June 19, 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called "Protecting
Cyberspace as a National Asset Act of 2010 - S.3480"[38] which he co-wrote with Senator Susan
Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which
the American media dubbed the "Kill switch bill", would grant the President emergency powers over
the Internet. However, all three co-authors of the bill issued a statement claiming that instead, the bill
"[narrowed] existing broad Presidential authority to take over telecommunications networks".[39]
White House proposes cybersecurity legislation[edit]
On May 12, 2011, the White House sent Congress a proposed cybersecurity law designed to force
companies to do more to fend off cyberattacks, a threat that has been reinforced by recent reports
about vulnerabilities in systems used in power and water utilities.[40]
Executive Order 13636, "Improving Critical Infrastructure Cybersecurity", was signed on February 12,
2013.
Germany[edit]
Berlin starts National Cyber Defense Initiative[edit]
On June 16, 2011, the German Minister for Home Affairs officially opened the new German NCAZ
(National Center for Cyber Defense), Nationales Cyber-Abwehrzentrum, which is located in Bonn.
The NCAZ closely cooperates with the BSI (Federal Office for Information Security), Bundesamt für
Sicherheit in der Informationstechnik; the BKA (Federal Police Organisation), Bundeskriminalamt
(Deutschland); the BND (Federal Intelligence Service), Bundesnachrichtendienst; the MAD (Military
Intelligence Service), Amt für den Militärischen Abschirmdienst; and other national organisations in
Germany taking care of national security aspects. According to the Minister, the primary task of the
new organisation, founded on February 23, 2011, is to detect and prevent attacks against the
national infrastructure; he mentioned incidents like Stuxnet.
South Korea[edit]
Following cyberattacks in the first half of 2013, whereby government, news-media, television station,
and bank websites were compromised, the national government committed to the training of 5,000
new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart
for these attacks, as well as for incidents that occurred in 2009, 2011, and 2012, but Pyongyang
denies the accusations.[41]
Seoul, March 7, 2011 - South Korean police have contacted 35 countries to ask for cooperation in
tracing the origin of a massive cyber attack on the Web sites of key government and financial
institutions, amid a nationwide cyber security alert issued against further threats. The Web sites of
about 30 key South Korean government agencies and financial institutions came under a so-called
distributed denial-of-service (DDoS) attack for two days from Friday, with about 50,000 "zombie"
computers infected with a virus seeking simultaneous access to selected sites and swamping them
with traffic. As soon as the copies of overseas servers are obtained, the cyber investigation unit will
analyse the data to track down the origin of the attacks made from countries, including the United
States, Russia, Italy and Israel, the NPA noted.[42]
In late September 2013, a computer-security competition jointly sponsored by the defense ministry
and the National Intelligence Service was announced. The winners were to be announced on
September 29, 2013, and to share a total prize pool of 80 million won (US$74,000).[41]
The cyber security job market[edit]
Cyber security is a fast-growing[43] field of IT concerned with reducing organizations' risk of hacking
or data breaches. Commercial, government and non-governmental organizations all employ
cybersecurity professionals, but the use of the term "cybersecurity" is more prevalent in government
job descriptions than in non-government job descriptions, in part due to government "cybersecurity"
initiatives (as opposed to corporations' "IT security" initiatives) and the establishment of government
institutions like the US Cyber Command and the UK Defence Cyber Operations Group.[44]
Typical cybersecurity job titles and descriptions include:[45]
Security Analyst
Analyzes and assesses vulnerabilities in the infrastructure (software, hardware, networks),
investigates available tools and countermeasures to remedy the detected vulnerabilities, and
recommends solutions and best practices. Analyzes and assesses damage to the
data/infrastructure as a result of security incidents, examines available recovery tools and
processes, and recommends solutions. Tests for compliance with security policies and
procedures. May assist in the creation, implementation, and/or management of security
solutions.
Security Engineer
Performs security monitoring, security and data/logs analysis, and forensic analysis, to
detect security incidents, and mounts incident response. Investigates and utilizes new
technologies and processes to enhance security capabilities and implement improvements.
May also review code or perform other security engineering methodologies.
Security Architect
Designs a security system or major components of a security system, and may head a
security design team building a new security system.
Security Administrator
Installs and manages organization-wide security systems. May also take on some of the
tasks of a security analyst in smaller organizations.
Chief Information Security Officer
A high-level management position responsible for the entire information security
division/staff. The position may include hands-on technical work.
Security Consultant/Specialist
Broad titles that encompass any one or all of the other roles/titles, tasked with protecting
computers, networks, software, data, and/or information systems against viruses, worms,
spyware, malware, intrusion, unauthorized access, denial-of-service attacks, and
an ever increasing list of attacks by hackers acting as individuals or as part of organized
crime or foreign governments.
Student programs are also available to people interested in beginning a
career in cybersecurity.[46][47]
Terminology[edit]
The following terms used in engineering secure systems are explained below.
Access authorization restricts access to a computer to a group of users
through the use of authentication systems. These systems can protect
either the whole computer, such as through an interactive login screen,
or individual services, such as an FTP server. There are many
methods for identifying and authenticating users, such
as passwords, identification cards, and, more recently, smart
cards and biometric systems.
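A minimal sketch of password-based authentication, as used in such access-authorization systems, stores a salted hash rather than the password itself and compares digests in constant time. The iteration count and example password are illustrative.

```python
import hashlib
import hmac
import os

# Password-authentication sketch: register() derives a salted hash from
# the password; login() recomputes it and compares in constant time.

def register(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                 # this pair, not the password, is stored

def login(password: str, salt: bytes, digest: bytes) -> bool:
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)  # constant-time comparison

salt, digest = register("correct horse battery staple")
assert login("correct horse battery staple", salt, digest)
assert not login("guess123", salt, digest)
```

Storing only salted hashes means a stolen credential database does not directly reveal passwords, and the per-user salt defeats precomputed lookup tables.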
Anti-virus software consists of computer programs that attempt to
identify, thwart and eliminate computer viruses and other malicious
software (malware).
Applications with known security flaws should not be run. Either leave
such an application turned off until it can be patched or otherwise fixed,
or delete it and replace it with some other application. Publicly known
flaws are the
main entry used by worms to automatically break into a system and
then spread to other systems connected to it. The security
website Secunia provides a search tool for unpatched known flaws in
popular products.
Authentication techniques can be used to ensure that communication
end-points are who they say they are.
Automated theorem proving and other verification tools can enable
critical algorithms and code used in secure systems to be
mathematically proven to meet their specifications.
Backups are a way of securing information; they are another copy of all
the important computer files kept in another location. These files are
kept on hard disks, CD-Rs, CD-RWs, tapes and more recently on the
cloud. Suggested locations for backups are a fireproof, waterproof, and
heatproof safe, or a separate, offsite location from that in which the
original files are contained. Some individuals and companies also keep
their backups in safe deposit boxes inside bank vaults. There is also a
fourth option, which involves using one of the file hosting services that
back up files over the Internet for both businesses and individuals,
known as the cloud.
Backups are also important for reasons other than security. Natural
disasters, such as earthquakes, hurricanes, or tornadoes, may
strike the building where the computer is located; the building can
catch fire, or an explosion may occur. A recent backup therefore
needs to be kept at an alternate secure location, chosen so that
the same disaster cannot affect both sites.
Examples of alternate disaster recovery sites being compromised
by the same disaster that affected the primary site include having
had a primary site in World Trade Center I and the recovery site
in 7 World Trade Center, both of which were destroyed in
the 9/11 attack, and having one's primary site and recovery site in
the same coastal region, which leads to both being vulnerable to
hurricane damage (for example, primary site in New Orleans and
recovery site in Jefferson Parish, both of which were hit
by Hurricane Katrina in 2005). The backup media should be moved
between the geographic sites in a secure manner, to prevent it from
being stolen.
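The core of a file backup, copying data to a second location under a name that records when the copy was made, can be sketched in a few lines of standard-library Python. The function name and naming scheme here are illustrative, not a specific backup product.

```python
import shutil
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def backup_file(src, dest_dir):
    """Copy src into dest_dir with a UTC timestamp embedded in the name."""
    src, dest_dir = Path(src), Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 also preserves file metadata
    return dest

# demo in a throwaway directory standing in for the "offsite" location
work = Path(tempfile.mkdtemp())
original = work / "notes.txt"
original.write_text("important data")
copy = backup_file(original, work / "offsite")
restored = copy.read_text()
```

Timestamped names keep multiple generations of a file, so an accidental change can be rolled back to an earlier copy rather than only to the latest one.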
Capability and access control list techniques can be used to ensure
privilege separation and mandatory access control.
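An access control list can be pictured as a table mapping each resource to the principals allowed to act on it, with everything else denied by default. The structure and names below are a hypothetical illustration of that idea.

```python
# Hypothetical ACL: each resource maps principals to their allowed operations.
ACL = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}

def is_allowed(user, resource, operation):
    """Default-deny check: permission must be explicitly granted in the ACL."""
    return operation in ACL.get(resource, {}).get(user, set())

alice_write = is_allowed("alice", "payroll.db", "write")
bob_write = is_allowed("bob", "payroll.db", "write")
eve_read = is_allowed("eve", "payroll.db", "read")
```

The default-deny stance is what makes the scheme safe: a user or resource missing from the table simply gets no access, rather than falling through to some permissive default.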
Chain of trust techniques can be used to attempt to ensure that all
software loaded has been certified as authentic by the system's
designers.
Confidentiality is the nondisclosure of information except to an
authorized person.[48]
Cryptographic techniques can be used to defend data in transit
between systems, reducing the probability that data exchanged
between systems can be intercepted or modified.
Data integrity is the accuracy and consistency of stored data, indicated
by an absence of any alteration in data between two updates of a data
record.[49]
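A standard way to detect alteration between two updates of a record is to store a cryptographic checksum alongside it and recompute the checksum on each read. This sketch uses SHA-256 from Python's standard library; the record contents are hypothetical.

```python
import hashlib

def checksum(data):
    """Return a SHA-256 hex digest used as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

record = b"balance=100"
stored_sum = checksum(record)  # saved alongside the record

# later: recompute and compare to detect any alteration
intact = checksum(b"balance=100") == stored_sum
tampered = checksum(b"balance=999") == stored_sum
```

Any change to the data, even a single byte, produces a completely different digest, so a mismatch reliably signals that the record was altered.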
Cryptographic techniques involve transforming information, scrambling it so it becomes
unreadable during transmission. The intended recipient can unscramble the message;
ideally, eavesdroppers cannot.
Encryption is used to protect the message from the eyes of others.
Cryptographically secure ciphers are designed to make any practical
attempt at breaking them infeasible. Symmetric-key ciphers are
suitable for bulk encryption using shared keys, and public-key
encryption using digital certificates can provide a practical solution for
the problem of securely communicating when no key is shared in
advance.
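The essence of symmetric encryption, the same shared key both scrambles and unscrambles the data, can be shown with a one-time-pad XOR. This is a toy illustration only: real systems use vetted ciphers such as AES, and the key here is a hypothetical shared secret.

```python
import os

def xor_cipher(data, key):
    """XOR each byte with the key; applying it twice restores the input.
    Toy one-time pad for illustration only -- use a vetted cipher in practice."""
    if len(key) < len(data):
        raise ValueError("one-time pad key must be at least as long as the data")
    return bytes(b ^ k for b, k in zip(data, key))

key = os.urandom(32)                     # shared secret, exchanged out of band
ciphertext = xor_cipher(b"meet at noon", key)
plaintext = xor_cipher(ciphertext, key)  # the same operation decrypts
```

The symmetry is the point: encryption and decryption are one operation keyed by the shared secret, which is exactly why the key itself must be distributed securely, the problem public-key encryption solves.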
Endpoint security software helps networks to prevent exfiltration (data
theft) and virus infection at network entry points made vulnerable by the
prevalence of potentially infected portable computing devices, such as
laptops and mobile devices, and external storage devices, such as USB
drives.[50]
Firewalls are an important method for control and security on the
Internet and other networks. A network firewall can be a
communications processor, typically a router, or a dedicated server,
along with firewall software. A firewall serves as a gatekeeper system
that protects a company's intranets and other computer networks from
intrusion by providing a filter and safe transfer point for access to and
from the Internet and other networks. It screens all network traffic for
proper passwords or other security codes and only allows authorized
transmission in and out of the network. Firewalls can deter, but not
completely prevent, unauthorized access (hacking) into computer
networks; they can also provide some protection from online intrusion.
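The filtering a firewall performs can be sketched as an ordered rule table with a default-deny fallback: rules are checked in order, the first match decides, and unmatched traffic is dropped. The rules and ports below are hypothetical examples.

```python
# Minimal packet-filter sketch: first matching rule wins, default deny.
RULES = [
    {"action": "allow", "port": 443, "direction": "in"},  # HTTPS
    {"action": "deny",  "port": 23,  "direction": "in"},  # telnet
]

def filter_packet(port, direction):
    """Return the action for a packet: the first matching rule, else deny."""
    for rule in RULES:
        if rule["port"] == port and rule["direction"] == direction:
            return rule["action"]
    return "deny"  # anything not explicitly allowed is dropped

https = filter_packet(443, "in")
telnet = filter_packet(23, "in")
unknown = filter_packet(8080, "in")
```

Real firewalls match on far more fields (addresses, protocols, connection state), but the ordered-rules-plus-default-deny structure is the same.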
Honey pots are computers that are intentionally left vulnerable to
attack by crackers. They can be used to catch crackers or to fix
vulnerabilities.
Intrusion-detection systems can scan a network for users who are on
the network but should not be there, or who are doing things that they
should not be doing, for example trying many passwords to gain
access to the network.
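One simple detection rule of this kind, flagging an address that racks up repeated failed logins, can be sketched by counting failures per source IP in a log. The log format and threshold below are hypothetical.

```python
from collections import Counter

THRESHOLD = 3  # hypothetical cutoff for flagging an address

def flag_brute_force(log_lines, threshold=THRESHOLD):
    """Count failed logins per source IP and flag addresses at/over threshold."""
    failures = Counter(
        line.split()[-1] for line in log_lines if "FAILED LOGIN" in line
    )
    return {ip for ip, count in failures.items() if count >= threshold}

log = [
    "FAILED LOGIN user=root from 10.0.0.5",
    "FAILED LOGIN user=admin from 10.0.0.5",
    "FAILED LOGIN user=guest from 10.0.0.5",
    "FAILED LOGIN user=bob from 10.0.0.9",
    "LOGIN OK user=alice from 10.0.0.7",
]
suspects = flag_brute_force(log)
```

Production intrusion-detection systems apply many such signatures and anomaly heuristics at once, but each rule reduces to this shape: match events, aggregate, and alert past a threshold.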
A microkernel is the near-minimum amount of software that can provide
the mechanisms to implement an operating system. It is used solely to
provide very low-level, very precisely defined machine code upon which
an operating system can be developed. A simple example is the early
'90s GEMSOS (Gemini Computers), which provided extremely low-level
machine code, such as "segment" management, atop which an
operating system could be built. The theory (in the case of "segments")
was that, rather than have the operating system itself worry about
mandatory access separation by means of military-style labeling, it is
safer if a low-level, independently scrutinized module can be
charged solely with the management of individually labeled segments,
be they memory "segments", file system "segments" or executable
text "segments". If software below the visibility of the operating system
is (as in this case) charged with labeling, there is no theoretically viable
means for a clever hacker to subvert the labeling scheme, since the
operating system per se does not provide mechanisms for interfering
with labeling: the operating system is, essentially, a client (an
"application," arguably) atop the microkernel and, as such, subject to its
restrictions.
Pinging. The ping application can be used by potential crackers to find
out whether an IP address is reachable. If a cracker finds a computer,
they can try a port scan to detect and attack services on that computer.
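The basic test behind a port scan is a TCP connection attempt: if the connection succeeds, something is listening on that port. This standard-library sketch checks one port; the demo binds its own throwaway listener so there is a known-open port to probe.

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Attempt a TCP connection; connect_ex returns 0 when a service answers."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# demo: bind a throwaway listener so a known port is open
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0 lets the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

found = is_port_open("127.0.0.1", open_port)
listener.close()
```

A full scan simply loops this check over a range of ports, which is why defenders both limit exposed services and monitor for the bursts of connection attempts that scans produce.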
Social engineering awareness. Keeping employees aware of the dangers
of social engineering, and having a policy in place to prevent it, can
reduce successful breaches of the network and servers.
Scholars in the field
Ross J. Anderson
Annie Anton
Adam Back
Daniel J. Bernstein
Stefan Brands
L. Jean Camp
Lance Cottrell
Lorrie Cranor
Cynthia Dwork
Deborah Estrin
Joan Feigenbaum
Ian Goldberg
Peter Gutmann
Monica S. Lam
Brian LaMacchia
Kevin Mitnick
Bruce Schneier
Dawn Song
Gene Spafford
See also

Computer security portal
Attack tree
Authentication
Authorization
CAPTCHA
CERT
CertiVox
Cloud computing security
Comparison of antivirus software
Computer insecurity
Computer security compromised by hardware failure
Computer security model
Content security
Countermeasure (computer)
Cryptography
Cyber security standards
Dancing pigs
Data loss prevention products
Data security
Differentiated security
Disk encryption
Exploit (computer security)
Fault tolerance
Firewalls
Human-computer interaction (security)
Identity Based Security
Identity theft
Identity management
Information Leak Prevention
Information security
Internet privacy
ISO/IEC 15408
IT risk
Mobile security
Network security
Network Security Toolkit
Next-Generation Firewall
Open security
OWASP
Penetration test
Physical information security
Physical security
Presumed security
Privacy software
Proactive Cyber Defence
Sandbox (computer security)
Security Architecture
Separation of protection and security
Software Defined Perimeter
Threat (computer)
Vulnerability (computing)
