
Firewall
From Wikipedia, the free encyclopedia



This article is about the network security device. For other uses, see Firewall (disambiguation).
[Images: an illustration of how a firewall works; an example of a user interface for a firewall (Gufw).]

A firewall is a part of a computer system or network that is designed to block unauthorized
access while permitting authorized communications. It is a device or set of devices configured to
permit, deny, encrypt, decrypt, or proxy all inbound and outbound computer traffic between
different security domains based upon a set of rules and other criteria.

Firewalls can be implemented in hardware, in software, or in a combination of both. Firewalls
are frequently used to prevent unauthorized Internet users from accessing private networks
connected to the Internet, especially intranets. All messages entering or leaving the intranet pass
through the firewall, which examines each message and blocks those that do not meet the
specified security criteria.

There are several types of firewall techniques:

1. Packet filter: Looks at each packet entering or leaving the network and accepts or rejects
it based on user-defined rules. Packet filtering is fairly effective and transparent to users,
but it is difficult to configure. In addition, it is susceptible to IP spoofing.
2. Application gateway: Applies security mechanisms to specific applications, such as FTP
and Telnet servers. This is very effective, but it can degrade performance.
3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is
established. Once the connection has been made, packets can flow between the hosts
without further checking.
4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server
effectively hides the true network addresses.

Contents

• 1 Function
• 2 History
o 2.1 First generation - packet filters
o 2.2 Second generation - "stateful" filters
o 2.3 Third generation - application layer
o 2.4 Subsequent developments
• 3 Types
o 3.1 Network layer and packet filters
o 3.2 Example of some basic firewall rules
o 3.3 Application-layer
o 3.4 Proxies
o 3.5 Network address translation
• 4 See also
• 5 References

• 6 External links

Function
A firewall is a dedicated appliance, or software running on a computer, which inspects network
traffic passing through it, and denies or permits passage based on a set of rules.

It is software or hardware that is normally placed between a protected network and an
unprotected network and acts like a gate, ensuring that nothing private goes out and nothing
malicious comes in.

A firewall's basic task is to regulate some of the flow of traffic between computer networks of
different trust levels. Typical examples are the Internet, which is a zone with no trust, and an
internal network, which is a zone of higher trust. A zone with an intermediate trust level, situated
between the Internet and a trusted internal network, is often referred to as a "perimeter network"
or demilitarized zone (DMZ).

A firewall's function within a network is similar to physical firewalls with fire doors in building
construction. In the former case, it is used to prevent network intrusion to the private network. In
the latter case, it is intended to contain and delay structural fire from spreading to adjacent
structures.

Without proper configuration, a firewall can often become worthless. Standard security practices
dictate a "default-deny" firewall ruleset, in which the only network connections allowed are
those that have been explicitly permitted. Unfortunately, such a configuration requires a detailed
understanding of the network applications and endpoints required for the organization's
day-to-day operation. Many businesses lack such understanding, and therefore implement a
"default-allow" ruleset, in which all traffic is allowed unless it has been specifically blocked.
This configuration makes inadvertent network connections and system compromise much more
likely.

History
The term "firewall" originally meant a wall to confine a fire or potential fire within a building,
cf. firewall (construction). Later uses refer to similar structures, such as the metal sheet
separating the engine compartment of a vehicle or aircraft from the passenger compartment.

Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in
terms of its global use and connectivity. The predecessors to firewalls for network security were
the routers used in the late 1980s to separate networks from one another.[1] The view of the
Internet as a relatively small community of compatible users who valued openness for sharing
and collaboration was ended by a number of major internet security breaches, which occurred in
the late 1980s:[1]

• Clifford Stoll's discovery of German spies tampering with his system[1]
• Bill Cheswick's 1992 "Evening with Berferd", in which he set up a simple electronic jail
to observe an attacker[1]
• In 1988 an employee at the NASA Ames Research Center in California sent a memo by
email to his colleagues[2] that read, "We are currently under attack from an Internet
VIRUS! It has hit Berkeley, UC San Diego, Lawrence Livermore, Stanford, and NASA
Ames."
• The Morris Worm spread itself through multiple vulnerabilities in the machines of the
time. Although it was not malicious in intent, the Morris Worm was the first large-scale
attack on Internet security; the online community was neither expecting an attack nor
prepared to deal with one.[3]

First generation - packet filters

The first paper on firewall technology was published in 1988, when engineers from Digital
Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This
fairly basic system was the first generation of what would become a highly evolved and technical
Internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin were continuing
their research in packet filtering and developed a working model for their own company based
upon their original first-generation architecture.

Packet filters act by inspecting the "packets" which represent the basic unit of data transfer
between computers on the Internet. If a packet matches the packet filter's set of rules, the packet
filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to
the source).

This type of packet filtering pays no attention to whether a packet is part of an existing stream of
traffic (it stores no information on connection "state"). Instead, it filters each packet based only
on information contained in the packet itself (most commonly using a combination of the
packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port
number).

TCP and UDP protocols comprise most communication over the Internet, and because TCP and
UDP traffic by convention uses well known ports for particular types of traffic, a "stateless"
packet filter can distinguish between, and thus control, those types of traffic (such as web
browsing, remote printing, email transmission, file transfer), unless the machines on each side of
the packet filter are both using the same non-standard ports.
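
To make the idea concrete, here is a minimal sketch (in Python, not taken from any real firewall) of purely stateless classification: each packet is judged from its own header fields against a table of well-known ports. The Packet fields and the port table are assumptions chosen for the example.

    # Minimal sketch of stateless classification: each packet is judged in
    # isolation from its header fields; no connection state is kept.
    # The WELL_KNOWN_PORTS table and Packet fields are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional

    WELL_KNOWN_PORTS = {80: "web browsing", 25: "email transmission",
                        21: "file transfer", 515: "remote printing"}

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        protocol: str                  # "tcp", "udp", "icmp", ...
        dst_port: Optional[int] = None

    def classify(packet: Packet) -> str:
        """Guess the traffic type from header fields alone."""
        if packet.protocol not in ("tcp", "udp") or packet.dst_port is None:
            return "other"
        # Only works while both ends use the conventional port numbers.
        return WELL_KNOWN_PORTS.get(packet.dst_port, "unknown")

    print(classify(Packet("192.0.2.7", "10.10.10.5", "tcp", 80)))    # web browsing
    print(classify(Packet("192.0.2.7", "10.10.10.5", "tcp", 8081)))  # unknown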

Second generation - "stateful" filters

Main article: Stateful firewall

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan
Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level
firewalls.

Second-generation firewalls additionally consider the placement of each individual packet within
the packet series. This technology is generally referred to as stateful packet inspection, as it
maintains records of all connections passing through the firewall and is able to determine
whether a packet is the start of a new connection, part of an existing connection, or an invalid
packet. Though there is still a set of static rules in such a firewall, the state of a connection can in
itself be one of the criteria which trigger specific rules.

This type of firewall can help prevent attacks which exploit existing connections, or certain
Denial-of-service attacks.
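
A minimal sketch of that idea, assuming a state table keyed by the connection 5-tuple and a made-up policy for new connections; it illustrates the bookkeeping only, not any particular product.

    # Simplified sketch of stateful inspection: a state table keyed by the
    # connection 5-tuple decides whether a packet belongs to a known flow.
    # Key layout and the policy function are illustrative assumptions.
    state_table = set()   # {(src_ip, src_port, dst_ip, dst_port, proto), ...}

    def allowed_new_connection(key):
        # Placeholder for the static ruleset applied to new connections.
        src_ip, src_port, dst_ip, dst_port, proto = key
        return proto == "tcp" and dst_port in (80, 443)

    def handle(src_ip, src_port, dst_ip, dst_port, proto, syn=False):
        key = (src_ip, src_port, dst_ip, dst_port, proto)
        reply = (dst_ip, dst_port, src_ip, src_port, proto)
        if key in state_table or reply in state_table:
            return "pass"                     # part of an existing connection
        if syn and allowed_new_connection(key):
            state_table.add(key)              # record the new connection
            return "pass"
        return "drop"                         # invalid or disallowed packet

    print(handle("10.10.10.5", 40000, "192.0.2.7", 80, "tcp", syn=True))  # pass
    print(handle("192.0.2.7", 80, "10.10.10.5", 40000, "tcp"))            # pass (reply)
    print(handle("192.0.2.9", 40001, "10.10.10.5", 22, "tcp", syn=True))  # drop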

Third generation - application layer

Main article: Application layer firewall

Publications by Gene Spafford of Purdue University, Bill Cheswick at AT&T Laboratories, and
Marcus Ranum described a third-generation firewall known as an application layer firewall, also
known as a proxy-based firewall. Marcus Ranum's work on the technology spearheaded the
creation of the first commercial product, released by DEC under the name DEC SEAL. DEC's
first major sale was on June 13, 1991, to a chemical company based on the East Coast of the
USA.

TIS, under a broader DARPA contract, developed the Firewall Toolkit (FWTK), and made it
freely available under license on October 1, 1993. The purposes for releasing the freely-
available, not for commercial use, FWTK were: to demonstrate, via the software, documentation,
and methods used, how a company with (at the time) 11 years' experience in formal security
methods, and individuals with firewall experience, developed firewall software; to create a
common base of very good firewall software for others to build on (so people did not have to
continue to "roll their own" from scratch); and to "raise the bar" of firewall software being used.

The key benefit of application layer filtering is that it can "understand" certain applications and
protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect whether an
unwanted protocol is being sneaked through on a non-standard port or whether a protocol is
being abused in any harmful way.
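
As a toy illustration of that point (not how any specific firewall implements it), the sketch below inspects the first payload bytes and flags HTTP-like traffic arriving on a non-standard port; the byte signatures and port list are assumptions for the example.

    # Toy illustration of application-layer inspection: look at payload bytes,
    # not just the port number. The signature list is an illustrative assumption.
    HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"CONNECT ")

    def looks_like_http(payload: bytes) -> bool:
        return payload.startswith(HTTP_METHODS) or payload.startswith(b"HTTP/1.")

    def inspect(dst_port: int, payload: bytes) -> str:
        if looks_like_http(payload) and dst_port not in (80, 8080):
            return "alert: HTTP-like traffic on non-standard port %d" % dst_port
        return "ok"

    print(inspect(25, b"GET /exfil HTTP/1.1\r\nHost: example.test\r\n\r\n"))
    print(inspect(80, b"GET / HTTP/1.1\r\nHost: example.test\r\n\r\n"))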

Subsequent developments

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were
refining the concept of a firewall. The product known as "Visas" was the first system to have a
visual integration interface with colours and icons, which could be easily implemented on and
accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In
1994 an Israeli company called Check Point Software Technologies built this into readily
available software known as FireWall-1.

The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion-
prevention systems (IPS).

Currently, the Middlebox Communication Working Group of the Internet Engineering Task
Force (IETF) is working on standardizing protocols for managing firewalls and other
middleboxes.

Another line of development is the integration of user identity into firewall rules. Many
firewalls provide such features by binding user identities to IP or MAC addresses, an approach
that is only approximate and can easily be circumvented. The NuFW firewall provides genuinely
identity-based firewalling by requesting the user's signature for each connection.

Types
There are several classifications of firewalls depending on where the communication is taking
place, where the communication is intercepted and the state that is being traced.

Network layer and packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP
protocol stack, not allowing packets to pass through the firewall unless they match the
established rule set. The firewall administrator may define the rules; or default rules may apply.
The term "packet filter" originated in the context of BSD operating systems.

Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful
firewalls maintain context about active sessions, and use that "state information" to speed packet
processing. Any existing network connection can be described by several properties, including
source and destination IP address, UDP or TCP ports, and the current stage of the connection's
lifetime (including session initiation, handshaking, data transfer, or completion connection). If a
packet does not match an existing connection, it will be evaluated according to the ruleset for
new connections. If a packet matches an existing connection based on comparison with the
firewall's state table, it will be allowed to pass without further processing.

Stateless firewalls require less memory, and can be faster for simple filters that require less time
to filter than to look up a session. They may also be necessary for filtering stateless network
protocols that have no concept of a session. However, they cannot make more complex decisions
based on what stage communications between hosts have reached.

Modern firewalls can filter traffic based on many packet attributes, such as source IP address,
source port, destination IP address or port, and destination service (e.g., WWW or FTP). They
can also filter based on protocols, TTL values, the netblock of the originator, the domain name of
the source, and many other attributes.

Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac
OS X), pf (OpenBSD, and all other BSDs), iptables/ipchains (Linux).

Example of some basic firewall rules

Examples using a subnet address of 10.10.10.x and 255.255.255.0 as the subnet mask for the
local area network (LAN).

It is common to allow a response to a request for information coming from a computer inside the
local network, like NetBIOS.

Direction | Protocol | Source Address | Source Port | Destination Address | Destination Port | Action
In/Out    | TCP/UDP  | Any            | Any         | 10.10.10.0          | >1023            | Allow

Firewall rule that allows all traffic out.

Direction | Protocol | Source Address | Source Port | Destination Address | Destination Port | Action
Out       | TCP/UDP  | 10.10.10.0     | Any         | Any                 | Any              | Allow

Firewall rule for SMTP (default port 25): allows packets governed by this protocol to access the
local SMTP gateway (which in this example has the IP 10.10.10.6). (It is far more common not to
specify the destination address or, if desired, to use the ISP's SMTP service address.)

Direction | Protocol | Source Address | Source Port | Destination Address | Destination Port | Action
Out       | TCP      | Any            | Any         | 10.10.10.6          | 25               | Allow

General Rule for the final firewall entry. If a policy does not explicitly allow a request for
service, that service should be denied by this catch-all rule which should be the last in the list of
rules.

Direction | Protocol | Source Address | Source Port | Destination Address | Destination Port | Action
In/Out    | TCP/UDP  | Any            | Any         | Any                 | Any              | Deny

Other useful rules would be allowing ICMP error messages, restricting all destination ports
except port 80 in order to allow only web browsing, etc.
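
The same rules can be viewed as an ordered list evaluated top to bottom, with the catch-all deny last. The sketch below illustrates that first-match-wins evaluation in Python; the rule encoding (including writing the 255.255.255.0 subnet as 10.10.10.0/24) is an assumption for the example, not any firewall's syntax.

    # Illustrative first-match-wins evaluation of the rules above.
    # "any" matches everything; the encoding is an assumption for this sketch.
    RULES = [
        # (direction, protocol,  src addr,        src port, dst addr,        dst port, action)
        ("in/out", "tcp/udp", "any",           "any", "10.10.10.0/24", ">1023", "allow"),
        ("out",    "tcp/udp", "10.10.10.0/24", "any", "any",           "any",   "allow"),
        ("out",    "tcp",     "any",           "any", "10.10.10.6",    "25",    "allow"),
        ("in/out", "tcp/udp", "any",           "any", "any",           "any",   "deny"),  # catch-all
    ]

    def field_matches(rule_value, packet_value):
        rule_value, packet_value = str(rule_value), str(packet_value)
        if rule_value == "any":
            return True
        if rule_value.startswith(">"):                   # port ranges like ">1023"
            return int(packet_value) > int(rule_value[1:])
        if rule_value.endswith("/24"):                   # crude /24 prefix match
            prefix = rule_value[:-3].rsplit(".", 1)[0] + "."
            return packet_value.startswith(prefix)
        return packet_value in rule_value.split("/")     # "tcp/udp", "in/out", exact

    def decide(direction, proto, src, sport, dst, dport):
        packet = (direction, proto, src, sport, dst, dport)
        for rule in RULES:
            if all(field_matches(r, p) for r, p in zip(rule[:6], packet)):
                return rule[6]
        return "deny"    # unreachable here because of the final catch-all rule

    print(decide("out", "tcp", "10.10.10.20", 40000, "10.10.10.6", 25))   # allow (outbound)
    print(decide("in", "tcp", "203.0.113.9", 40000, "10.10.10.20", 22))   # deny (catch-all)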

Application-layer

Main article: Application layer firewall

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser
traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an
application. They block other packets (usually dropping them without acknowledgment to the
sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching
protected machines.

By inspecting all packets for improper content, firewalls can restrict or prevent outright the
spread of networked computer worms and trojans. In practice, however, this becomes so
complex and so difficult to attempt (given the variety of applications and the diversity of content
each may allow in its packet traffic) that comprehensive firewall design does not generally
attempt this approach.

The XML firewall exemplifies a more recent kind of application-layer firewall.

Proxies

Main article: Proxy server

A proxy device (running either on dedicated hardware or as software on a general-purpose
machine) may act as a firewall by responding to input packets (connection requests, for example)
in the manner of an application, whilst blocking other packets.

Proxies make tampering with an internal system from the external network more difficult and
misuse of one internal system would not necessarily cause a security breach exploitable from
outside the firewall (as long as the application proxy remains intact and properly configured).
Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own
purposes; the proxy then masquerades as that system to other internal machines. While use of
internal address spaces enhances security, crackers may still employ methods such as IP spoofing
to attempt to pass packets to a target network.

Network address translation

Main article: Network address translation

Firewalls often have network address translation (NAT) functionality, and the hosts protected
behind a firewall commonly have addresses in the "private address range", as defined in RFC
1918. Firewalls often have such functionality to hide the true address of protected hosts.
Originally, the NAT function was developed to address the limited number of IPv4 routable
addresses that could be used or assigned to companies or individuals, as well as to reduce the
number, and therefore the cost, of public addresses needed for every computer in an
organization. Hiding the addresses of protected devices has become an increasingly important
defense against network reconnaissance.
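
Conceptually, NAT maintains a translation table mapping (private address, port) pairs to ports on the firewall's single public address. The sketch below illustrates only that bookkeeping; the addresses and the port-allocation policy are made up for the example.

    # Conceptual sketch of a NAT translation table (port address translation).
    # Addresses and the port allocation policy are made up for illustration.
    PUBLIC_IP = "203.0.113.1"
    nat_table = {}            # (private_ip, private_port) -> public_port
    next_port = 50000

    def translate_outbound(private_ip, private_port):
        """Rewrite an outgoing packet's source to the firewall's public address."""
        global next_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(public_port):
        """Map a reply arriving on the public address back to the private host."""
        for (private_ip, private_port), port in nat_table.items():
            if port == public_port:
                return private_ip, private_port
        return None            # no mapping: the packet is dropped

    print(translate_outbound("10.0.0.5", 40000))   # ('203.0.113.1', 50000)
    print(translate_inbound(50000))                # ('10.0.0.5', 40000)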

See also


• Access control list
• Bastion host
• Circuit-level gateway
• Comodo Firewall Pro
• Comparison of firewalls
• Computer security
• End-to-end connectivity
• Firewall pinhole
• List of Linux router or firewall distributions
• network reconnaissance
• Personal firewall
• Golden Shield Project (aka Great Firewall of China)
• Unified threat management
• Screened-subnet firewall
• Mangled packet
• Sandbox (computer security)

References
1. Kenneth Ingham and Stephanie Forrest, "A History and Survey of Network Firewalls".
2. Dr. Talal Alkharobi, "Firewalls".
3. RFC 1135, "The Helminthiasis of the Internet".

External links



• Internet Firewalls: Frequently Asked Questions, compiled by Matt Curtin, Marcus Ranum
and Paul Robertson.
• Evolution of the Firewall Industry - Discusses different architectures and their
differences, how packets are processed, and provides a timeline of the evolution.
• A History and Survey of Network Firewalls - provides an overview of firewalls at the
various ISO levels, with references to the original papers where first firewall work was
reported.
• Software Firewalls: Made of Straw? Part 1 of 2 and Software Firewalls: Made of Straw?
Part 2 of 2 - a technical view on software firewall design and potential weaknesses



Port
From Wikipedia, the free encyclopedia



For other uses, see Port (disambiguation).

[Images: the Port of Dover, UK, the world's busiest passenger port; the port of Piraeus in Greece;
Valparaíso, Chile, the main port in that country; the Port of Kobe, Japan, at twilight.]

A port is a facility for receiving ships and transferring cargo. Ports are usually found at the edge
of an ocean, sea, river, or lake. They often have cargo-handling equipment such as cranes
(operated by longshoremen) and forklifts for use in loading and unloading ships, which may be
provided by private interests or public bodies. Often, canneries or other processing facilities are
located nearby. Harbour pilots and tugboats are often used to maneuver large ships in tight
quarters as they approach and leave the docks. Ports which handle international traffic have
customs facilities.

A prerequisite for a port is a harbor with water of sufficient depth to receive ships whose draft
will allow passage into and out of the harbor.

Ports sometimes fall out of use. Rye, East Sussex was an important English port in the Middle
Ages, but the coastline changed and it is now 2 miles (3.2 km) from the sea, while the ports of
Ravenspurn and Dunwich have been lost to coastal erosion. Also in the United Kingdom,
London, on the River Thames, was once an important international port, but changes in shipping
methods, such as the use of containers and larger ships, have put it at a disadvantage.

Contents

• 1 Port types
• 2 See also
o 2.1 Water port topics
o 2.2 Other types of ports
o 2.3 Lists

• 3 External links

Port types


The terms "port" and "seaport" are used for ports that handle ocean-going vessels, and river
port is used for river traffic, such as barges and other shallow draft vessels. Some ports on a
lake, river, or canal have access to a sea or ocean, and are sometimes called "inland ports".

A fishing port is a type of port or harbor facility particularly suitable for landing and distributing
fish.

Port can also refer to the left side of a craft, whether an airplane or a ship.

A "dry port" is a term sometimes used to describe a yard used to place containers or conventional
bulk cargo, usually connected to a seaport by rail or road.

A warm water port is a port where the water does not freeze in winter. Because they are
available year-round, warm water ports can be of great geopolitical or economic interest, with
the ports of Saint Petersburg, Dalian, and Valdez being notable examples.

A seaport is further categorized as a "cruise port" or a "cargo port". Additionally, "cruise ports"
are also known as a "home port" or a "port of call". The "cargo port" is also further categorized
into a "bulk" or "break bulk port" or as a "container port".

A cruise home port is the port where passengers board to start their cruise and disembark from
the cruise ship at the end of it. It is also where the cruise ship's supplies are loaded for the cruise;
this includes everything from water and fuel to fruits, vegetables, champagne, and any other
supplies needed for the cruise. "Cruise home ports" are very busy places on the day a cruise ship
is in port, as departing passengers and their baggage leave the ship, new passengers board, and
all of the supplies are loaded. Currently, the Cruise Capital of the World is the Port of Miami,
closely followed by Port Everglades and the Port of San Juan, Puerto Rico.

A port of call is an intermediate stop for a ship on its sailing itinerary, which may include half a
dozen ports. At these ports a cargo ship may take on supplies or fuel, as well as unload and load
cargo. For a cruise ship, however, a port of call is a premier stop where the cruise line takes its
passengers to enjoy their vacation.

Cargo ports, on the other hand, are quite different from cruise ports, since each handles very
different cargo which has to be loaded and unloaded by very different mechanical means. A port
may handle one particular type of cargo, or it may handle numerous cargoes such as grains,
liquid fuels, liquid chemicals, wood, automobiles, etc. Such ports are known as "bulk" or "break
bulk" ports. Ports that handle containerized cargo are known as container ports. Most cargo ports
handle all sorts of cargo, but some ports are very specific as to what cargo they handle.
Additionally, individual cargo ports are divided into different operating terminals which handle
the different cargoes and are operated by different companies, also known as terminal operators
or stevedores.

See also


Water port topics

• Bandar (Persian word for "port" or "haven")
• Dock (maritime)
• Harbour
• Marina - port for recreational boating
• Port operator
• Ship transport

Other types of ports

• Airport
• Spaceport
• Port Wine
Lists

• List of seaports
• World's busiest port
• List of world's busiest transshipment ports
• List of world's busiest port regions
• List of busiest container ports
• Sea rescue organisations

External links



• Port Industry Statistics, American Association of Port Authorities
• World Port Rankings 2006, by metric tons and by TEUs, American Association of Port
Authorities (xls format, 26.5kb)
• Information on yachting facilities at 1,613 ports in 191 countries from Noonsite.com
• Social & Economic Benefits of PORTS from "NOAA Socioeconomics" website initiative
• World sea ports search
• PortCities UK


Tunneling protocol
From Wikipedia, the free encyclopedia


Computer networks use a tunneling protocol when one network protocol (the delivery
protocol) encapsulates a different payload protocol. By using tunneling one can (for example)
carry a payload over an incompatible delivery-network, or provide a secure path through an
untrusted network.

Tunneling typically contrasts with a layered protocol model such as those of OSI or TCP/IP. The
tunnel protocol usually (but not always) operates at a higher level in the model than does the
payload protocol, or at the same level. Protocol encapsulation carried out by conventional
layered protocols, in accordance with the OSI model or TCP/IP model (for example: HTTP over
TCP over IP over PPP over a V.92 modem) does not count as tunneling.

To understand a particular protocol stack, network engineers must understand both the payload
and delivery protocol sets.

As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a
protocol running over IP (IP Protocol Number 47), often serves to carry IP packets, with RFC
1918 private addresses, over the Internet using delivery packets with public IP addresses. In this
case, the delivery and payload protocols are compatible, but the payload addresses are
incompatible with those of the delivery network.
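
Encapsulation itself is simple: the payload packet becomes the data of a delivery packet. The sketch below shows the general idea with an invented, GRE-like fixed header; it is not the real GRE wire format.

    # Conceptual sketch of tunneling: wrap one packet (the payload protocol)
    # inside another (the delivery protocol). The 4-byte header here is an
    # invented, GRE-like example, not the actual GRE wire format.
    import struct

    HEADER_FMT = "!HH"        # flags/version, payload protocol type (big-endian)

    def encapsulate(payload_packet: bytes, payload_proto: int) -> bytes:
        header = struct.pack(HEADER_FMT, 0, payload_proto)
        return header + payload_packet           # delivery packet carries the payload

    def decapsulate(delivery_payload: bytes):
        flags, proto = struct.unpack(HEADER_FMT, delivery_payload[:4])
        return proto, delivery_payload[4:]        # recover the inner packet

    inner = b"\x45\x00..."                        # stand-in for an IP packet with a private address
    wrapped = encapsulate(inner, 0x0800)          # 0x0800 = IPv4 payload type
    print(decapsulate(wrapped) == (0x0800, inner))  # True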

In contrast, an IP payload might believe it sees a data link layer delivery when it is carried inside
the Layer 2 Tunneling Protocol (L2TP), which appears to the payload mechanism as a protocol
of the data link layer. L2TP, however, actually runs over the transport layer using User Datagram
Protocol (UDP) over IP. The IP in the delivery protocol could run over any data-link protocol
from IEEE 802.2 over IEEE 802.3 (i.e., standards-based Ethernet) to the Point-to-Point Protocol
(PPP) over a dialup modem link.

Tunneling protocols may use data encryption to transport insecure payload protocols over a
public network (such as the Internet), thereby providing VPN functionality. IPSec has an end-to-
end Transport Mode, but can also operate in a tunneling mode through a trusted security
gateway.


Contents

• 1 SSH tunneling
• 2 Tunneling to circumvent firewall policy
• 3 See also

• 4 External links

SSH tunneling


An SSH tunnel consists of an encrypted tunnel created through an SSH protocol connection.
Users may set up SSH tunnels to tunnel unencrypted traffic over a network through an encrypted
channel. For example, Windows machines can share files using the SMB protocol, a non-
encrypted protocol. If one were to mount a Microsoft Windows file-system remotely through the
Internet, someone snooping on the connection could see transferred files. To mount the Windows
file-system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote
fileserver through an encrypted channel. Even though the SMB protocol itself contains no
encryption, the encrypted SSH channel through which it travels offers security.

To set up an SSH tunnel, one configures an SSH client to forward a specified local port to a port
on the remote machine. Once the SSH tunnel has been established, the user can connect to the
specified local port to access the network service. The local port need not have the same port
number as the remote port.
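
With the standard OpenSSH client this is a single command (-L sets up the local port forwarding, -N runs no remote command). The sketch below merely invokes it from Python; the host names and port numbers are placeholders.

    # Sketch: start an SSH tunnel with the standard OpenSSH client.
    # user@gateway.example.org, port 1139 and fileserver.internal are placeholders.
    import subprocess

    # Forward local port 1139 to port 139 (SMB) on the fileserver, via the SSH server.
    # -N: do not run a remote command, just forward ports.
    tunnel = subprocess.Popen([
        "ssh", "-N",
        "-L", "1139:fileserver.internal:139",
        "user@gateway.example.org",
    ])

    # While the tunnel runs, SMB clients can connect to localhost:1139 and the
    # traffic travels to fileserver.internal:139 inside the encrypted SSH channel.
    tunnel.terminate()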

SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services — so long
as a site allows outgoing connections. For example, an organization may prohibit a user from
accessing Internet web pages (port 80) directly without passing through the organization's proxy
filter (which provides the organization with a means of monitoring and controlling what the user
sees through the web). But users may not wish to have their web traffic monitored or blocked by
the organization's proxy filter. If users can connect to an external SSH server, they can create an
SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To
access the remote web server users would point their browser to http://localhost/.

Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5
proxy. In this case users can configure their applications to use their local SOCKS proxy server.
This gives more flexibility than creating an SSH tunnel to a single port as previously described.
SOCKS can free the user from the limitations of connecting only to a predefined remote port and
server.

Tunneling to circumvent firewall policy


Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall
would normally block, but "wrapped" inside a protocol that the firewall does not block, such as
HTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can
function to get around the intended firewall policy.

Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client
issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP
connection to a particular server:port, and relays data between that server:port and the client
connection. Because this creates a security hole, CONNECT-capable HTTP proxies commonly
restrict access to the CONNECT method. The proxy allows access only to TLS/SSL-based
HTTPS services.
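
The CONNECT exchange itself is short; the sketch below sends one by hand to a hypothetical proxy (proxy.example.org:3128 is a placeholder), after which the same socket can carry the end-to-end TLS session.

    # Sketch of the HTTP CONNECT handshake with a proxy.
    # proxy.example.org:3128 is a placeholder address.
    import socket

    sock = socket.create_connection(("proxy.example.org", 3128))
    sock.sendall(
        b"CONNECT www.example.com:443 HTTP/1.1\r\n"
        b"Host: www.example.com:443\r\n"
        b"\r\n"
    )
    reply = sock.recv(4096)
    if reply.startswith(b"HTTP/1.1 200") or reply.startswith(b"HTTP/1.0 200"):
        # The proxy now relays raw bytes between us and www.example.com:443,
        # so a TLS handshake (e.g. via the ssl module) can proceed over `sock`.
        pass
    sock.close()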

See also


• Tunnel broker
• Virtual private network
• HTTP tunnel (software)
• Pseudo-wire

External links


• SSH Tunnels explained by example
• SSH tunneling used to lower latency/ping times in World of Warcraft.
• HOWTO: Set up a Windows SSH server for VNC tunneling
• SSH port forwarding and tunneling explained in detail.

This article was originally based on material from the Free On-line Dictionary of Computing,
which is licensed under the GFDL.


Internet standard
From Wikipedia, the free encyclopedia


In computer network engineering, an Internet Standard (STD) is a normative specification of a
technology or methodology applicable to the Internet. Internet Standards are created and
published by the Internet Engineering Task Force (IETF).

Contents

• 1 Overview
• 2 Standardization process
o 2.1 Proposed Standard
o 2.2 Draft Standard
o 2.3 Standard
• 3 See also
• 4 References

• 5 External links

Overview
An Internet Standard is a special Request for Comments (RFC) or set of RFCs. An RFC that is to
become a Standard or part of a Standard begins as an Internet Draft, and is later (usually after
several revisions) accepted and published by the RFC Editor as an RFC and labelled a Proposed
Standard. Later, an RFC is labelled a Draft Standard, and finally a Standard. Collectively, these
stages are known as the standards track, and are defined in RFC 2026. The label Historic (sic) is
applied to deprecated standards-track documents or obsolete RFCs that were published before
the standards track was established.

Only the IETF, represented by the Internet Engineering Steering Group (IESG), can approve
standards-track RFCs. The definitive list of Internet Standards is maintained in Internet
Standards document STD 1: Internet Official Protocol Standards.[1]

Standardization process



Becoming a standard is a three-step process within the IETF: Proposed Standard, Draft Standard,
and finally Internet Standard. If an RFC is part of a proposal that is on the standards track, then at
the first stage the standard is proposed, and organizations subsequently decide whether to
implement this Proposed Standard. After three separate implementations, more review and
corrections are made to the RFC, and a Draft Standard is created. At the final stage, the RFC
becomes a Standard.

Proposed Standard

A Proposed Standard (PS) is generally stable, has resolved known design choices, is believed to
be well-understood, has received significant community review, and appears to enjoy enough
community interest to be considered valuable. However, further experience might result in a
change or even retraction of the specification before it advances. Usually, neither implementation
nor operational experience is required.

Draft Standard

A specification from which at least two independent and interoperable implementations from
different code bases have been developed, and for which sufficient successful operational
experience has been obtained, may be elevated to the Draft Standard (DS) level.

A Draft Standard is normally considered to be a final specification, and changes are likely to be
made only to solve specific problems encountered. In most circumstances, it is reasonable for
vendors to deploy implementations of Draft Standards into a disruption sensitive environment.

Standard

A specification for which significant implementation and successful operational experience has
been obtained may be elevated to the Internet Standard (STD) level. An Internet Standard, which
may simply be referred to as a Standard, is characterized by a high degree of technical maturity
and by a generally held belief that the specified protocol or service provides significant benefit to
the Internet community.

Generally, Internet Standards cover interoperability of systems on the Internet through defining
protocols, message formats, schemas, and languages. The most fundamental of the Standards are
the ones defining the Internet Protocol.

All Internet Standards are given a number in the STD series. The first document in this series,
STD 1, describes the remaining documents in the series, and has a list of Proposed Standards.

Each RFC is static; if the document is changed, it is submitted again and assigned a new RFC
number. If an RFC becomes an Internet Standard (STD), it is assigned an STD number but
retains its RFC number. When an Internet Standard is updated, its number stays the same and it
simply refers to a different RFC or set of RFCs. A given Internet Standard, STD n, may be RFCs
x and y at a given time, but later the same standard may be updated to be RFC z instead. For
example, in 2007 RFC 3700 was an Internet Standard—STD 1—and in May 2008 it was
replaced with RFC 5000, so RFC 3700 changed to Historic status, and now STD 1 is RFC 5000.
When STD 1 is updated again, it will simply refer to a newer RFC, but it will still be STD 1.
Note that not all RFCs are standards-track documents, but all Internet Standards and other
standards-track documents are RFCs.[2]

See also


• Standardization

References
1. ^ "Internet Official Protocol Standards (STD 1)" (plain text). RFC Editor. May 2008. ftp://ftp.rfc-
editor.org/in-notes/std/std1.txt. Retrieved on 2008-05-25.
2. ^ Huitema, C.; Postel, J.; Crocker, S. (April 1995). "Not All RFCs are Standards (RFC 1796)".
The Internet Engineering Task Force. http://tools.ietf.org/html/rfc1796. Retrieved on 2008-05-25.
"[E]ach RFC has a status…: Informational, Experimental, or Standards Track (Proposed
Standard, Draft Standard, Internet Standard), or Historic."

The Internet Standards Process is defined in a "Best Current Practice" document BCP 9
(currently RFC 2026).

External links


• RFC 5000 is the current Request For Comments that specifies Internet Official Protocol
Standards. It is, in itself, also an Internet Standard, STD 1.
• List of Official Internet Protocol Standards including “historic”, proposed, draft, obsolete,
and experimental standards, plus all of the "Best Current Practices."
• List of Full Standard RFCs
• Internet Architecture Board
• Internet Engineering Steering Group
• Internet Engineering Task Force
• RFC Editor


Proxy server
From Wikipedia, the free encyclopedia



[Image: Schematic representation of a proxy server, where the computer in the middle acts as the
proxy server between the other two.]

In computer networks, a proxy server is a server (a computer system or an application program)
that acts as a go-between for requests from clients seeking resources from other servers. A client
connects to the proxy server, requesting some service, such as a file, connection, web page, or
other resource, available from a different server. The proxy server evaluates the request
according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the
request is validated by the filter, the proxy provides the resource by connecting to the relevant
server and requesting the service on behalf of the client. A proxy server may optionally alter the
client's request or the server's response, and sometimes it may serve the request without
contacting the specified server. In this case, it 'caches' responses from the remote server, and
returns subsequent requests for the same content directly.

A proxy server has two purposes:

• To keep machines behind it anonymous (mainly for security).[1]
• To speed up access to a resource (via caching). It is commonly used to cache web pages
from a web server.[2]

A proxy server that passes requests and replies unmodified is usually called a gateway or
sometimes tunneling proxy.

A proxy server can be placed in the user's local computer or at various points between the user
and the destination servers or the Internet. A reverse proxy is a proxy used as a front-end to
accelerate and cache in-demand resources (such as a web page).
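
As a bare-bones illustration of that go-between role (with no filtering, caching, or concurrency), the sketch below relays a single client connection to a fixed upstream server; the addresses are placeholders.

    # Bare-bones illustration of a forwarding proxy: accept one client
    # connection and relay bytes to a fixed upstream server and back.
    # 127.0.0.1:8888 and upstream.example.org:80 are placeholder addresses.
    import socket
    import threading

    UPSTREAM = ("upstream.example.org", 80)

    def pipe(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 8888))
    listener.listen(1)

    client, _ = listener.accept()
    server = socket.create_connection(UPSTREAM)

    # Relay in both directions until either side closes.
    threading.Thread(target=pipe, args=(client, server), daemon=True).start()
    pipe(server, client)
    client.close()
    server.close()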

Contents

• 1 Types and functions
o 1.1 Caching proxy server
o 1.2 Web proxy
o 1.3 Content-filtering web proxy
o 1.4 Anonymizing proxy server
o 1.5 Hostile proxy
o 1.6 Intercepting proxy server
o 1.7 Transparent and non-transparent proxy server
o 1.8 Forced proxy
o 1.9 Suffix proxy
o 1.10 Open proxy server
o 1.11 Reverse proxy server
o 1.12 Circumventor
o 1.13 Content filter
• 2 Risks of using anonymous proxy servers
• 3 References
• 4 See also

• 5 External links

Types and functions


Proxy servers implement one or more of the following functions:

Caching proxy server

A caching proxy server accelerates service requests by retrieving content saved from a previous
request made by the same client or even other clients. Caching proxies keep local copies of
frequently requested resources, allowing large organizations to significantly reduce their
upstream bandwidth usage and cost, while significantly increasing performance. Most ISPs and
large businesses have a caching proxy. These machines are built to deliver superb file system
performance (often with RAID and journaling) and also contain hot-rodded versions of TCP.
Caching proxies were the first kind of proxy server.

Some poorly-implemented caching proxies have had downsides (e.g., an inability to use user
authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching
Problems).
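
At its core, the caching idea reduces to a lookup table keyed by URL, as in the toy sketch below (in-memory only, with no expiry or cache-control handling; the URL is a placeholder).

    # Toy illustration of the caching idea: answer repeated requests for the
    # same URL from a local copy instead of refetching it. No expiry and no
    # cache-control handling; example.org is a placeholder.
    from urllib.request import urlopen

    cache = {}   # url -> response body

    def fetch(url: str) -> bytes:
        if url not in cache:                   # cache miss: go upstream
            with urlopen(url) as response:
                cache[url] = response.read()
        return cache[url]                      # cache hit: serve the saved copy

    body = fetch("http://example.org/")        # fetched from the origin server
    body_again = fetch("http://example.org/")  # served from the local cache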

Another important use of the proxy server is to reduce hardware cost. An organization may
have many systems on the same network or under the control of a single server, which makes an
individual connection to the Internet for each system impractical. In such a case, the individual
systems can be connected to one proxy server, and the proxy server connected to the main
server.

Web proxy

A proxy that focuses on WWW traffic is called a "web proxy". The most common use of a web
proxy is to serve as a web cache. Most proxy programs (e.g. Squid) provide a means to deny
access to certain URLs in a blacklist, thus providing content filtering. This is often used in a
corporate, educational or library environment, and anywhere else where content filtering is
desired. Some web proxies reformat web pages for a specific purpose or audience (e.g., cell
phones and PDAs).

AOL dialup customers used to have their requests routed through an extensible proxy that
'thinned' or reduced the detail in JPEG pictures. This sped up performance but caused problems,
either when more resolution was needed or when the thinning program produced incorrect
results. This is why in the early days of the web many web pages would contain a link saying
"AOL Users Click Here" to bypass the web proxy and to avoid the bugs in the thinning software.

Content-filtering web proxy

Further information: Content-control software

A content-filtering web proxy server provides administrative control over the content that may be
relayed through the proxy. It is commonly used in both commercial and non-commercial
organizations (especially schools) to ensure that Internet usage conforms to acceptable use
policy. In some cases users can circumvent the proxy, since there are services designed to proxy
information from a filtered website through a non-filtered site to allow it through the user's proxy.
Some common methods used for content filtering include: URL or DNS blacklists, URL regex
filtering, MIME filtering, or content keyword filtering. Some products have been known to
employ content analysis techniques to look for traits commonly used by certain types of content
providers.

A content filtering proxy will often support user authentication, to control web access. It also
usually produces logs, either to give detailed information about the URLs accessed by specific
users, or to monitor bandwidth usage statistics. It may also communicate with daemon-based
and/or ICAP-based antivirus software to provide security against viruses and other malware by
scanning incoming content in real time before it enters the network.

Anonymizing proxy server

An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize
web surfing. There are different varieties of anonymizers. One of the more common variations is
the open proxy. Because they are typically difficult to track, open proxies are especially useful to
those seeking online anonymity, from political dissidents to computer criminals. Some users are
merely interested in anonymity on principle, to facilitate constitutional human rights of freedom
of speech, for instance. The server receives requests from the anonymizing proxy server, and
thus does not receive information about the end user's address. However, the requests are not
anonymous to the anonymizing proxy server, and so a degree of trust is present between that
server and the user. Many of them are funded through a continued advertising link to the user.

Access control: Some proxy servers implement a logon requirement. In large organizations,
authorized users must log on to gain access to the web. The organization can thereby track usage
to individuals.

Some anonymizing proxy servers may forward data packets with header lines such as
HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the
IP address of the client. Other anonymizing proxy servers, known as elite or high anonymity
proxies, only include the REMOTE_ADDR header with the IP address of the proxy server,
making it appear that the proxy server is the client. A website could still suspect a proxy is being
used if the client sends packets which include a cookie from a previous visit that did not use the
high anonymity proxy server. Clearing cookies, and possibly the cache, would solve this
problem.
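
Server-side, the difference shows up in the request headers; the sketch below shows the kind of check a website might perform, using the HTTP header names corresponding to the variables listed above. The policy itself is only an illustration.

    # Illustration of how a website might notice a (non-elite) proxy from
    # request headers. The header names follow the list above; the policy
    # itself is just an example.
    REVEALING_HEADERS = ("Via", "X-Forwarded-For", "Forwarded")

    def classify_client(headers: dict) -> str:
        present = [h for h in REVEALING_HEADERS if h in headers]
        if present:
            return "behind a proxy; possible client IP: %s" % headers.get(
                "X-Forwarded-For", "unknown")
        return "no proxy headers (direct client, or an elite/high-anonymity proxy)"

    print(classify_client({"Via": "1.1 cache01", "X-Forwarded-For": "198.51.100.7"}))
    print(classify_client({}))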

Hostile proxy

Proxies can also be installed in order to eavesdrop upon the dataflow between client machines
and the web. All accessed pages, as well as all forms submitted, can be captured and analyzed by
the proxy operator. For this reason, passwords to online services (such as webmail and banking)
should always be exchanged over a cryptographically secured connection, such as SSL.

Intercepting proxy server


An intercepting proxy (also known as a "transparent proxy") combines a proxy server with a
gateway. Connections made by client browsers through the gateway are redirected through the
proxy without client-side configuration (or often knowledge).

Intercepting proxies are commonly used in businesses to prevent avoidance of acceptable use
policy, and to ease administrative burden, since no client browser configuration is required.

It is often possible to detect the use of an intercepting proxy server by comparing the external IP
address to the address seen by an external web server, or by examining the HTTP headers on the
server side.

Transparent and non-transparent proxy server

The term "transparent proxy" is most often used incorrectly to mean "intercepting proxy"
(because the client does not need to configure a proxy and cannot directly detect that its requests
are being proxied). Transparent proxies can be implemented using Cisco's WCCP (Web Cache
Control Protocol). This proprietary protocol resides on the router and is configured from the
cache, allowing the cache to determine what ports and traffic is sent to it via transparent
redirection from the router. This redirection can occur in one of two ways: GRE Tunneling (OSI
Layer 3) or MAC rewrites (OSI Layer 2).

However, RFC 2616 (Hypertext Transfer Protocol—HTTP/1.1) offers different definitions:

"A 'transparent proxy' is a proxy that does not modify the request or response beyond
what is required for proxy authentication and identification".
"A 'non-transparent proxy' is a proxy that modifies the request or response in order to
provide some added service to the user agent, such as group annotation services, media
type transformation, protocol reduction, or anonymity filtering".

Forced proxy

The term "forced proxy" is ambiguous. It means both "intercepting proxy" (because it filters all
traffic on the only available gateway to the Internet) and its exact opposite, "non-intercepting
proxy" (because the user is forced to configure a proxy in order to access the Internet).

Forced proxy operation is sometimes necessary due to issues with the interception of TCP
connections and HTTP. For instance, interception of HTTP requests can affect the usability of a
proxy cache, and can greatly affect certain authentication mechanisms. This is primarily because
the client thinks it is talking to a server, and so request headers required by a proxy are unable to
be distinguished from headers that may be required by an upstream server (especially authorization
headers). Also the HTTP specification prohibits caching of responses where the request
contained an authorization header.

Suffix proxy


A suffix proxy server allows a user to access web content by appending the name of the proxy
server to the URL of the requested content (e.g. "en.wikipedia.org.6a.nl").

Suffix proxy servers are easier to use than regular proxy servers. The concept appeared in 2003
in form of the IPv6Gate and in 2004 in form of the Coral Content Distribution Network, but the
term suffix proxy was only coined in October 2008 by "6a.nl"[citation needed].

Open proxy server

Main article: Open proxy

Because proxies might be used for abuse, system administrators have developed a number of
ways to refuse service to open proxies. Many IRC networks automatically test client systems for
known types of open proxy. Likewise, an email server may be configured to automatically test
e-mail senders for open proxies.

Groups of IRC and electronic mail operators run DNSBLs publishing lists of the IP addresses of
known open proxies, such as AHBL, CBL, NJABL, and SORBS.

The ethics of automatically testing clients for open proxies are controversial. Some experts, such
as Vernon Schryver, consider such testing to be equivalent to an attacker portscanning the client
host. [1] Others consider the client to have solicited the scan by connecting to a server whose
terms of service include testing.

Reverse proxy server

Main article: Reverse proxy

A reverse proxy is a proxy server that is installed in the neighborhood of one or more web
servers. All traffic coming from the Internet and with a destination of one of the web servers goes
through the proxy server. There are several reasons for installing reverse proxy servers:

• Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is
often not done by the web server itself, but by a reverse proxy that is equipped with SSL
acceleration hardware. See Secure Sockets Layer. Furthermore, a host can provide a
single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts; removing
the need for a separate SSL Server Certificate for each host, with the downside that all
hosts behind the SSL proxy have to share a common DNS name or IP address for SSL
connections.
• Load balancing: the reverse proxy can distribute the load to several web servers, each
web server serving its own application area. In such a case, the reverse proxy may need to
rewrite the URLs in each web page (translation from externally known URLs to the
internal locations).
• Serve/cache static content: A reverse proxy can offload the web servers by caching static
content like pictures and other static graphical content.
• Compression: the proxy server can optimize and compress the content to speed up the
load time.
• Spoon feeding: reduces resource usage caused by slow clients on the web servers by
caching the content the web server sent and slowly "spoon feeding" it to the client. This
especially benefits dynamically generated pages.
• Security: the proxy server is an additional layer of defense and can protect against some
OS and WebServer specific attacks. However, it does not provide any protection to
attacks against the web application or service itself, which is generally considered the
larger threat.
• Extranet Publishing: a reverse proxy server facing the Internet can be used to
communicate to a firewalled server internal to an organization, providing extranet access
to some functions while keeping the servers behind the firewalls. If used in this way,
security measures should be considered to protect the rest of your infrastructure in case
this server is compromised, as its web application is exposed to attack from the Internet.
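
As a rough sketch of the forwarding role described in the list above, the following minimal
single-backend reverse proxy (GET requests only) listens on port 8000 and relays requests to an
assumed internal web server at 127.0.0.1:8080; the addresses and ports are illustrative, error
handling is omitted, and real deployments use dedicated reverse-proxy software:

    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    from urllib.request import Request, urlopen

    BACKEND = "http://127.0.0.1:8080"   # assumed internal web server

    class ReverseProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Relay the client's request path to the backend, then copy the reply back.
            with urlopen(Request(BACKEND + self.path)) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                for name, value in upstream.getheaders():
                    if name.lower() not in ("transfer-encoding", "connection"):
                        self.send_header(name, value)
                self.end_headers()
                self.wfile.write(body)

    if __name__ == "__main__":
        ThreadingHTTPServer(("", 8000), ReverseProxyHandler).serve_forever()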

[edit] Circumventor

A circumventor is a method of defeating blocking policies implemented using proxy servers.


Ironically, most circumventors are also proxy servers, of varying degrees of sophistication,
which effectively implement "bypass policies".

A circumventor is a web-based page that takes a site that is blocked and "circumvents" it through
to an unblocked web site, allowing the user to view blocked pages. A famous example is elgooG,
which allowed users in China to use Google after it had been blocked there. elgooG differs from
most circumventors in that it circumvents only one block.

A September 2007 report from Citizen Lab recommended Web based proxies Proxify[2],
StupidCensorship[3], and CGIProxy.[4] Alternatively, users could partner with individuals outside
the censored network running Psiphon[5] or Peacefire/Circumventor.[6] A more elaborate approach
suggested was to run free tunneling software such as UltraSurf[7], and FreeGate,[8] or pay services
Anonymizer[9] and Ghost Surf.[10] Also listed were free application tunneling software Gpass[11]
and HTTP Tunnel,[12] and pay application software Relakks[13] and Guardster.[3] Lastly,
anonymous communication networks JAP ANON,[14] Tor,[15] and I2P[16] offer a range of
possibilities for secure publication and browsing.[4]

Other options include Garden and GTunnel by Garden Networks[17].

Students are able to access blocked sites (games, chatrooms, messenger, offensive material,
internet pornography, social networking, etc.) through a circumventor. As fast as the filtering
software blocks circumventors, others spring up. However, in some cases the filter may still
intercept traffic to the circumventor, thus the person who manages the filter can still see the sites
that are being visited.

Circumventors are also used by people who have been blocked from a web site.
Another use of a circumventor is to allow access to country-specific services, so that Internet
users from other countries may also make use of them. An example is country-restricted
reproduction of media and webcasting.

Using a circumventor is usually safe, with the exception that circumventor sites run by an
untrusted third party may operate with hidden intentions, such as collecting personal information.
As a result, users are typically advised not to send personal data such as credit card numbers or
passwords through a circumventor.

An example of one way to circumvent a content-filtering proxy server is to tunnel through to
another proxy server, usually controlled by the user, which has unrestricted access to the internet.
This is often achieved by using a VPN-type tunnel, such as a VPN itself or SSH, through a port left
open by the proxy server to be circumvented. Port 80 is almost always open to allow the use of
HTTP, as is port 443 to allow the use of HTTPS. Because the tunnel is encrypted, tunnelling to a
remote proxy server, provided the remote proxy server is itself secure, is not only difficult to
detect but also difficult to intercept.

In some network configurations, clients attempting to access the proxy server are given different
levels of access privilege based on their computer's location or even the MAC address of the
network card. However, someone with access to a system that has higher access rights can use
that system as a proxy server, which other clients then use to reach the original proxy server,
thereby altering their effective access privileges.

[edit] Content filter

Many work places, schools, and colleges restrict the web sites and online services that are made
available in their buildings. This is done either with a specialized proxy, called a content filter
(both commercial and free products are available), or by using a cache-extension protocol such
as ICAP, that allows plug-in extensions to an open caching architecture.

Requests made to the open internet must first pass through an outbound proxy filter. The web-
filtering company provides a database of URL patterns (regular expressions) with associated
content attributes. This database is updated weekly by site-wide subscription, much like a virus
filter subscription. The administrator instructs the web filter to ban broad classes of content (such
as sports, pornography, online shopping, gambling, or social networking). Requests that match a
banned URL pattern are rejected immediately.
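
A much simplified sketch of this first, URL-level check might look as follows; the pattern
database, category names, and domains here are hypothetical stand-ins for a vendor-supplied
database:

    import re

    BANNED_CATEGORIES = {"gambling", "social networking"}

    # Toy stand-in for the vendor database of URL patterns -> content categories.
    URL_PATTERNS = [
        (re.compile(r"(^|\.)casino-example\.com$"), "gambling"),
        (re.compile(r"(^|\.)social-example\.net$"), "social networking"),
        (re.compile(r"(^|\.)sports-example\.org$"), "sports"),
    ]

    def reject_outright(host: str) -> bool:
        # Reject immediately if the host matches a pattern in a banned category.
        return any(pattern.search(host) and category in BANNED_CATEGORIES
                   for pattern, category in URL_PATTERNS)

    print(reject_outright("www.casino-example.com"))   # True: rejected immediately
    print(reject_outright("www.sports-example.org"))   # False: fetched, then dynamically filtered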

Assuming the requested URL is acceptable, the content is then fetched by the proxy. At this point
a dynamic filter may be applied on the return path. For example, JPEG files could be blocked
based on fleshtone matches, or language filters could dynamically detect unwanted language. If
the content is rejected then an HTTP fetch error is returned and nothing is cached.

Most web filtering companies use an internet-wide crawling robot that assesses the likelihood
that content is of a certain type (e.g., "this content has a 70% chance of being porn, a 40%
chance of being sports, and a 30% chance of being news" could be the outcome for one web
page). The resulting database is then corrected by manual labor based on complaints or known
flaws in the content-matching algorithms.

Web filtering proxies are not able to peer inside secure (HTTPS) transactions. As a result,
users wanting to bypass web filtering will typically search the internet for an open, anonymous
HTTPS transparent proxy and then configure their browser to proxy all requests through the web
filter to that anonymous proxy. Because those requests are encrypted with HTTPS, the web filter
cannot distinguish them from, say, a legitimate access to a financial website. Thus, content filters
are only effective against unsophisticated users.

A special case of web proxies is "CGI proxies". These are web sites that allow a user to access a
site through them. They generally use PHP or CGI to implement the proxy functionality. These
types of proxies are frequently used to gain access to web sites blocked by corporate or school
proxies. Since they also hide the user's own IP address from the web sites they access through the
proxy, they are sometimes also used to gain a degree of anonymity, called "Proxy Avoidance".

[edit] Risks of using anonymous proxy servers



When using a proxy server (for example, an anonymizing HTTP proxy), all data sent to the service
being used (for example, an HTTP server for a website) must pass through the proxy server before
being sent to the service, mostly in unencrypted form. There is therefore a real risk that a
malicious proxy server may record everything sent through it, including unencrypted logins and
passwords.

By chaining proxies which do not reveal data about the original requester, it is possible to
obfuscate activities from the eyes of the user's destination. However, more traces will be left on
the intermediate hops, which could be used or offered up to trace the user's activities. If the
policies and administrators of these other proxies are unknown, the user may fall victim to a false
sense of security just because those details are out of sight and mind.

The bottom line is to be wary when using anonymizing proxy servers: use only proxy servers of
known integrity (e.g., the owner is known and trusted and has a clear privacy policy), and never
use proxy servers of unknown integrity. If there is no choice but to use an unknown proxy server,
do not pass any private information through it (unless it is over an encrypted connection).

In what is more of an inconvenience than a risk, proxy users may find themselves being blocked
from certain Web sites, as numerous forums and Web sites block IP addresses from proxies
known to have spammed or trolled the site.

[edit] References
1. ^ "How-to". Linux.org. http://www.linux.org/docs/ldp/howto/Firewall-HOWTO-11.html#ss11.4.
"The proxy server is, above all, a security device."
2. ^ Thomas, Keir (2006). Beginning Ubuntu Linux: From Novice to Professional. Apress. "A proxy
server helps speed up Internet access by storing frequently accessed pages"
3. ^ Site at www.guardster.com
4. ^ "Everyone's Guide to By-Passing Internet Censorship".
http://www.civisec.org/guides/everyones-guides.

[edit] See also


• Captive portal
• HTTP
• ICAP
• Internet privacy
• Proxy list
• SOCKS
• Transparent SMTP proxy
• Web cache

[edit] External links


• Proxy software and scripts at the Open Directory Project
• Free web-based proxy services at the Open Directory Project
• Free http proxy servers at the Open Directory Project

Retrieved from "http://en.wikipedia.org/wiki/Proxy_server"


Categories: Computer networking | Network performance | Internet architecture | Internet privacy
| Computer security software | Proxy servers

Name resolution
From Wikipedia, the free encyclopedia


In computer science, name resolution (also called name lookup) can have one of several
meanings, discussed below.

Contents
[hide]
• 1 Name resolution in computer languages
o 1.1 Static versus dynamic
• 2 Name resolution in computer networks
• 3 Name resolution in semantics and text extraction
o 3.1 Name resolution in simple text
o 3.2 Name resolution across documents

• 4 See also

[edit] Name resolution in computer languages


Expressions in computer languages can contain identifiers. The semantics of such expressions
depend on the entities that the identifiers refer to. The algorithm that determines what an
identifier in a given context refers to is part of the language definition.

The complexity of these algorithms is influenced by the sophistication of the language. For
example, name resolution in assembly language usually involves only a single simple table
lookup, while name resolution in C++ is extremely complicated as it involves:

• namespaces, which make it possible for an identifier to have different meanings
depending on its associated namespace;
• scopes, which make it possible for an identifier to have different meanings at different
scope levels, and which involve various scope overriding and hiding rules. At the most
basic level, name resolution usually attempts to find the binding in the smallest enclosing
scope, so that, for example, local variables supersede global variables; this is called
shadowing (see the sketch after this list).
• visibility rules, which determine whether identifiers from specific namespaces or scopes
are visible from the current context;
• overloading, which makes it possible for an identifier to have different meanings
depending on how it is used, even in a single namespace or scope;
• accessibility, which determines whether identifiers from an otherwise visible scope are
actually accessible and participate in the name resolution process.
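
A minimal Python illustration of the shadowing rule mentioned in the list above:

    x = "global"            # module-scope binding

    def demo():
        x = "local"         # this binding shadows the global one inside the function
        print(x)            # name resolution finds the innermost enclosing binding

    demo()                  # prints "local"
    print(x)                # the outer binding is unaffected: prints "global"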

[edit] Static versus dynamic

In programming languages, name resolution can be performed either at compile time or at
runtime. The former is called static name resolution, the latter is called dynamic name
resolution.

Examples of programming languages that use static name resolution include C, C++, Java, and
Pascal. Examples of programming languages that use dynamic name resolution include Lisp,
Perl, Python, Tcl, PHP, and REBOL.
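
A small Python example of dynamic name resolution: the name used inside the function is only
resolved when the function is called, so it may be defined afterwards.

    def report():
        # "threshold" is looked up at call time, not when the function is defined.
        print(threshold)

    threshold = 42
    report()    # prints 42, because the name now resolves to the module-level binding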

[edit] Name resolution in computer networks


In computer networks, name resolution is used to find a lower level address (such as an IP
address) that corresponds to a given higher level address (such as a hostname). Commands that
allow name resolution are: nslookup and host. See Domain Name System, OSI Model.
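
Beyond the nslookup and host commands, most programming languages expose the system resolver
directly; a minimal Python sketch (it needs network access, and the reverse lookup may fail if
the address has no reverse record):

    import socket

    # Forward lookup: hostname -> IPv4 address, via the system resolver.
    print(socket.gethostbyname("en.wikipedia.org"))

    # Reverse lookup: IP address -> (hostname, aliases, addresses);
    # raises socket.herror if the address has no reverse (PTR) record.
    print(socket.gethostbyaddr("208.77.188.166"))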

[edit] Name resolution in semantics and text extraction


Also referred to as entity resolution, name resolution in this context refers to the ability of text
mining software to determine which actual person, actor, or object a particular reference refers
to by examining natural language text.

[edit] Name resolution in simple text

For example, in the text mining field, software frequently needs to interpret the following text:

John gave Edward the book. He then stood up and called to John to come back into the room.

In these sentences, the software must determine whether the pronoun "he" refers to "John", or
"Edward" from the first sentence. The software must also determine whether the "John" referred
to in the second sentence is the same as the "John" in the first sentence, or a third person whose
name also happens to be "John". Such examples apply to almost all languages, and not just
English.

[edit] Name resolution across documents

Frequently, this type of name resolution is also used across documents, for example to determine
whether the "George Bush" referenced in an old newspaper article as President of the United
States (George H. W. Bush) is the same person as the "George Bush" mentioned in a separate
news article years later about a man who is running for President (George W. Bush). Because
many people may have the same name, analysts and software must take into account
substantially more information than just a name in order to determine whether two identical
references ("George Bush") actually refer to the same specific entity or person.

Name/entity resolution in text extraction and semantics is a notoriously difficult problem, in part
because in many cases there is not sufficient information to make an accurate determination.
Numerous partial solutions exist that rely on specific contextual clues found in the data, but there
is no currently known general solution.

For examples of software that might provide name resolution benefits, see also:

• AeroText
• AlchemyAPI
• Attensity
• Autonomy

[edit] See also


• Identity resolution
• namespace (programming)
• Scope (programming)
• Named entity recognition
• Naming collision

Retrieved from "http://en.wikipedia.org/wiki/Name_resolution"


Categories: Computer libraries | Compiler theory


Socket
From Wikipedia, the free encyclopedia

Look up socket in Wiktionary, the free dictionary.

Socket can refer to:

In mechanics:

• Socket wrench, a type of wrench that uses separate, removable sockets to fit different
sizes of nuts and bolts
• Socket head screw, a screw (or bolt) with a cylindrical head containing a socket into
which the hexagonal ends of an Allen wrench will fit
• Socket termination, a termination used at the ends of wire rope
• An opening in any fitting that matches the outside diameter of a pipe or tube, with a
further recessed through opening matching the inside diameter of the same pipe or tube
In biology:

• Eye socket, a region in the skull where the eyes are positioned
• Tooth socket, a cavity containing a tooth, in those bones that bear teeth
• Dry socket, a painful opening as a result of the blood not clotting after a tooth is pulled
• Ball and socket joint

In computing:

• Internet socket, an end-point in the IP networking protocol


• CPU socket, the connector on a computer's motherboard for the CPU
• Unix domain socket, an end-point in local inter-process communication
• An end-point of a bi-directional communication link in the Berkeley sockets API

Electrical and electronic connectors:

• Electrical outlet, an electrical device connected to a power source onto which another
device can be plugged or screwed in
• Antenna socket, a female antenna connector for a television cable
• Jack (connector), one of several types of electronic connectors
• CPU socket, a physical and electrical specification of how to connect a CPU to a
motherboard

Socket may also refer to:

• Socket: Time Dominator, a video game created by Vic Tokai on the Sega Genesis
• Socket (film), a gay-themed science-fiction indie film

This disambiguation page lists articles associated with the same title. If an internal link led you
here, you may wish to change the link to point directly to the intended article.
Retrieved from "http://en.wikipedia.org/wiki/Socket"
Categories: Disambiguation pages

IP address
From Wikipedia, the free encyclopedia


An Internet Protocol (IP) address is a numerical identification and logical address that is
assigned to devices participating in a computer network utilizing the Internet Protocol for
communication between its nodes.[1] Although IP addresses are stored as binary numbers, they
are usually displayed in human-readable notations, such as 208.77.188.166 (for IPv4), and
2001:db8:0:1234:0:567:1:1 (for IPv6). The role of the IP address has been characterized as
follows: "A name indicates what we seek. An address indicates where it is. A route indicates how
to get there."[2]

The original designers of TCP/IP defined an IP address as a 32-bit number[1] and this system,
now named Internet Protocol Version 4 (IPv4), is still in use today. However, due to the
enormous growth of the Internet and the resulting depletion of the address space, a new
addressing system (IPv6), using 128 bits for the address, was developed in 1995[3] and last
standardized by RFC 2460 in 1998.[4]

The Internet Protocol also has the task of routing data packets between networks, and IP
addresses specify the locations of the source and destination nodes in the topology of the routing
system. For this purpose, some of the bits in an IP address are used to designate a subnetwork.
The number of these bits is indicated in CIDR notation, appended to the IP address, e.g.,
208.77.188.166/24.

With the development of private networks and the threat of IPv4 address exhaustion, a group of
private address spaces was set aside by RFC 1918. These private addresses may be used by
anyone on private networks. They are often used with network address translators to connect to
the global public Internet.
The Internet Assigned Numbers Authority (IANA) manages the IP address space allocations
globally. IANA works in cooperation with five Regional Internet Registries (RIRs) to allocate IP
address blocks to Local Internet Registries (Internet service providers) and other entities.

Contents
[hide]

• 1 IP versions
o 1.1 IP version 4 addresses
 1.1.1 IPv4 networks
 1.1.2 IPv4 private addresses
o 1.2 IPv4 address depletion
o 1.3 IP version 6 addresses
 1.3.1 IPv6 private addresses
• 2 IP subnetworks
• 3 Static and dynamic IP addresses
o 3.1 Method of assignment
o 3.2 Uses of dynamic addressing
 3.2.1 Sticky dynamic IP
address
o 3.3 Address autoconfiguration
o 3.4 Uses of static addressing
• 4 Modifications to IP addressing
o 4.1 IP blocking and firewalls
o 4.2 IP address translation
• 5 See also
• 6 References
• 7 External links

o 7.1 RFCs

[edit] IP versions
The Internet Protocol (IP) has two versions currently in use (see IP version history for details).
Each version has its own definition of an IP address. Because of its prevalence, the generic term
IP address typically still refers to the addresses defined by IPv4.

An illustration of an IP address (version 4), in both dot-decimal notation and binary.

[edit] IP version 4 addresses

Main article: IPv4#Addressing

IPv4 uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 (2^32)
possible unique addresses. IPv4 reserves some addresses for special purposes such as private
networks (~18 million addresses) or multicast addresses (~270 million addresses). This reduces
the number of addresses that can be allocated to end users and, as the number of addresses
available is consumed, IPv4 address exhaustion is inevitable. This foreseeable shortage was the
primary motivation for developing IPv6, which is in various deployment stages around the world
and is the only strategy for IPv4 replacement and continued Internet expansion.

IPv4 addresses are usually represented in dot-decimal notation (four numbers, each ranging from
0 to 255, separated by dots, e.g. 208.77.188.166). Each part represents 8 bits of the address, and
is therefore called an octet. In less common cases of technical writing, IPv4 addresses may be
presented in hexadecimal, octal, or binary representations. When converting, each octet is
usually treated as a separate number.
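
A short Python sketch of the octet arithmetic, using the example address above:

    addr = "208.77.188.166"
    octets = [int(part) for part in addr.split(".")]      # [208, 77, 188, 166]

    # Each octet contributes 8 bits; together they form one 32-bit number.
    as_int = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    print(f"{as_int:032b}")   # 11010000010011011011110010100110
    print(f"{as_int:08X}")    # D04DBCA6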

[edit] IPv4 networks

In the early stages of development of the Internet protocol,[1] network administrators interpreted
an IP address as a structure of network number and host number. The highest order octet (most
significant eight bits) was designated the network number and the rest of the bits were called the
rest field or host identifier and were used for host numbering within a network. This method soon
proved inadequate as additional networks developed that were independent from the existing
networks already designated by a network number. In 1981, the Internet addressing specification
was revised with the introduction of classful network architecture. [2]

Classful network design allowed for a larger number of individual network assignments. The
first three bits of the most significant octet of an IP address were defined as the class of the
address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on
the class derived, the network identification was based on octet boundary segments of the entire
address. Each class used successively additional octets in the network identifier, thus reducing
the possible number of hosts in the higher order classes (B and C). The following table gives an
overview of this system.

Class | First octet in binary | Range of first octet | Network ID | Host ID | Possible number of networks | Possible number of hosts
A     | 0XXXXXXX              | 0 - 127              | a          | b.c.d   | 2^7 = 128                   | 2^24 - 2 = 16,777,214
B     | 10XXXXXX              | 128 - 191            | a.b        | c.d     | 2^14 = 16,384               | 2^16 - 2 = 65,534
C     | 110XXXXX              | 192 - 223            | a.b.c      | d       | 2^21 = 2,097,152            | 2^8 - 2 = 254


The articles 'subnetwork' and 'classful network' explain the details of this design.

Although classful network design was a successful developmental stage, it proved unscalable in
the rapid expansion of the Internet and was abandoned when Classless Inter-Domain Routing
(CIDR) was created for the allocation of IP address blocks and new rules of routing protocol
packets using IPv4 addresses. CIDR is based on variable-length subnet masking (VLSM) to
allow allocation and routing on arbitrary-length prefixes.

Today, remnants of classful network concepts function only in a limited scope as the default
configuration parameters of some network software and hardware components (e.g. netmask),
and in the technical jargon used in network administrators' discussions.

[edit] IPv4 private addresses


Main article: Private network

Early network design, when global end-to-end connectivity was envisioned for communications
with all Internet hosts, intended that IP addresses be uniquely assigned to a particular computer
or device. However, it was found that this was not always necessary as private networks
developed and public address space needed to be conserved (IPv4 address exhaustion).

Computers not connected to the Internet, such as factory machines that communicate only with
each other via TCP/IP, need not have globally-unique IP addresses. Three ranges of IPv4
addresses for private networks, one range for each class (A, B, C), were reserved in RFC 1918.
These addresses are not routed on the Internet and thus their use need not be coordinated with an
IP address registry.

Today, when needed, such private networks typically connect to the Internet through network
address translation (NAT).

IANA-reserved private IPv4 network ranges

Block                              | Start       | End             | No. of addresses
24-bit block (/8 prefix, 1 x A)    | 10.0.0.0    | 10.255.255.255  | 16,777,216
20-bit block (/12 prefix, 16 x B)  | 172.16.0.0  | 172.31.255.255  | 1,048,576
16-bit block (/16 prefix, 256 x C) | 192.168.0.0 | 192.168.255.255 | 65,536

Any user may use any of the reserved blocks. Typically, a network administrator will divide a
block into subnets; for example, many home routers automatically use a default address range of
192.168.0.0 - 192.168.0.255 (192.168.0.0/24).
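
Python's standard ipaddress module already knows these reserved blocks, so a quick check is
possible; a small sketch:

    import ipaddress

    for text in ("10.1.2.3", "172.16.0.7", "192.168.0.10", "8.8.8.8"):
        addr = ipaddress.ip_address(text)
        # is_private is True for the RFC 1918 blocks listed above (among other reserved ranges).
        print(text, "private" if addr.is_private else "public")
    # 10.1.2.3 private, 172.16.0.7 private, 192.168.0.10 private, 8.8.8.8 public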

[edit] IPv4 address depletion

Main article: IPv4 address exhaustion

The IP version 4 address space is rapidly nearing exhaustion of available, officially assignable
address blocks.

[edit] IP version 6 addresses

Main article: IPv6#Addressing

An illustration of an IP address (version 6), in hexadecimal and binary.

The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the
Internet Engineering Task Force (IETF) to explore new technologies to expand the Internet's
addressing capability. The permanent solution was deemed to be a redesign of the Internet
Protocol itself. This next generation of the Internet Protocol, aimed to replace IPv4 on the
Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995.[3][4] The address size
was increased from 32 to 128 bits (16 octets), which, even with a generous assignment of
network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address
space provides the potential for a maximum of 2^128, or about 3.403 × 10^38, unique addresses.

The new design is not based on the goal to provide a sufficient quantity of addresses alone, but
rather to allow efficient aggregation of subnet routing prefixes to occur at routing nodes. As a
result, routing table sizes are smaller, and the smallest possible individual allocation is a subnet
for 2^64 hosts, which is the square of the size of the entire IPv4 Internet. At these levels,
actual address utilization rates will be small on any IPv6 network segment. The new design also
provides the opportunity to separate the addressing infrastructure of a network segment--that is
the local administration of the segment's available space--from the addressing prefix used to
route external traffic for a network. IPv6 has facilities that automatically change the routing
prefix of entire networks should the global connectivity or the routing policy change without
requiring internal redesign or renumbering.

The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and,
where appropriate, to be aggregated for efficient routing. With a large address space, there is not
the need to have complex address conservation methods as used in classless inter-domain routing
(CIDR).
All modern desktop and enterprise server operating systems include native support for the IPv6
protocol, but it is not yet widely deployed in other devices, such as home networking routers,
voice over Internet Protocol (VoIP) and multimedia equipment, and network peripherals.

Example of an IPv6 address:

2001:0db8:85a3:08d3:1319:8a2e:0370:7334
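
In practice the leading zeros of each group are usually dropped when an IPv6 address is written;
a short sketch using Python's ipaddress module and the example address above:

    import ipaddress

    addr = ipaddress.IPv6Address("2001:0db8:85a3:08d3:1319:8a2e:0370:7334")
    print(addr.compressed)   # 2001:db8:85a3:8d3:1319:8a2e:370:7334   (short form)
    print(addr.exploded)     # 2001:0db8:85a3:08d3:1319:8a2e:0370:7334 (full form)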

[edit] IPv6 private addresses

Just as IPv4 reserves addresses for private or internal networks, there are blocks of addresses set
aside in IPv6 for private addresses. In IPv6, these are referred to as unique local addresses
(ULA). RFC 4193 sets aside the routing prefix fc00::/7 for this block, which is divided into two
/8 blocks with different implied policies (cf. IPv6). The addresses include a 40-bit pseudorandom
number that minimizes the risk of address collisions if sites merge or packets are misrouted.

Early designs (RFC 3513) used a different block for this purpose (fec0::), dubbed site-local
addresses. However, the definition of what constituted sites remained unclear and the poorly
defined addressing policy created ambiguities for routing. The address range specification was
abandoned and must no longer be used in new systems.

Addresses starting with fe80: — called link-local addresses — are assigned only in the local link
area. The addresses are usually generated automatically by the operating system's IP layer for
each network interface. This provides instant automatic network connectivity for any IPv6 host
and means that if several hosts connect to a common hub or switch, they have an instant
communication path via their link-local IPv6 address. This feature is used extensively, and
invisibly to most users, in the lower layers of IPv6 network administration (cf. Neighbor
Discovery Protocol).

None of the private address prefixes may be routed in the public Internet.

[edit] IP subnetworks
Main article: Subnetwork

The technique of subnetting can operate in both IPv4 and IPv6 networks. The IP address is
divided into two parts: the network address and the host identifier. The subnet mask (in IPv4
only) or the CIDR prefix determines how the IP address is divided into network and host parts.

The term subnet mask is only used within IPv4. Both IP versions however use the Classless
Inter-Domain Routing (CIDR) concept and notation. In this, the IP address is followed by a slash
and the number (in decimal) of bits used for the network part, also called the routing prefix. For
example, an IPv4 address and its subnet mask may be 192.0.2.1 and 255.255.255.0, respectively.
The CIDR notation for the same IP address and subnet is 192.0.2.1/24, because the first 24 bits
of the IP address indicate the network and subnet.
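
The division described above can be reproduced with Python's ipaddress module, using the same
example values:

    import ipaddress

    iface = ipaddress.ip_interface("192.0.2.1/24")
    print(iface.ip)                      # 192.0.2.1       (host address)
    print(iface.netmask)                 # 255.255.255.0   (IPv4 subnet mask for /24)
    print(iface.network)                 # 192.0.2.0/24    (network part: first 24 bits)
    print(iface.network.num_addresses)   # 256 addresses in this subnet
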
[edit] Static and dynamic IP addresses
When a computer is configured to use the same IP address each time it powers up, this is known
as a Static IP address. In contrast, in situations when the computer's IP address is assigned
automatically, it is known as a Dynamic IP address.

[edit] Method of assignment

Static IP addresses are manually assigned to a computer by an administrator. The exact procedure
varies according to platform. This contrasts with dynamic IP addresses, which are assigned either
by the computer interface or host software itself, as in Zeroconf, or assigned by a server using
Dynamic Host Configuration Protocol (DHCP). Even though IP addresses assigned using DHCP
may stay the same for long periods of time, they can generally change. In some cases, a network
administrator may implement dynamically assigned static IP addresses. In this case, a DHCP
server is used, but it is specifically configured to always assign the same IP address to a
particular computer. This allows static IP addresses to be configured centrally, without having to
specifically configure each computer on the network in a manual procedure.

In the absence or failure of static or stateful (DHCP) address configurations, an operating system
may assign an IP address to a network interface using state-less autoconfiguration methods, such
as Zeroconf.

[edit] Uses of dynamic addressing

Dynamic IP addresses are most frequently assigned on LANs and broadband networks by
Dynamic Host Configuration Protocol (DHCP) servers. They are used because this avoids the
administrative burden of assigning specific static addresses to each device on a network.
Dynamic assignment also allows many devices to share limited address space on a network if
only some of them will be online at a particular time. In most current desktop operating systems,
dynamic IP configuration is enabled by default, so that a user does not need to manually enter
any settings to connect to a network with a DHCP server. DHCP is not the only technology used
to assign dynamic IP addresses. Dialup and some broadband networks use the dynamic address
features of the Point-to-Point Protocol.

[edit] Sticky dynamic IP address

A sticky dynamic IP address or sticky IP is an informal term used by cable and DSL Internet
access subscribers to describe a dynamically assigned IP address that does not change often. The
addresses are usually assigned with the DHCP protocol. Since the modems are usually powered-
on for extended periods of time, the address leases are usually set to long periods and simply
renewed upon expiration. If a modem is turned off and powered up again before the next
expiration of the address lease, it will most likely receive the same IP address.

[edit] Address autoconfiguration


RFC 3330 defines an address block, 169.254.0.0/16, for special use in link-local addressing
for IPv4 networks. In IPv6, every interface, whether using static or dynamic address
assignment, also automatically receives a link-local address in the fe80::/10 subnet.

These addresses are only valid on the link, such as a local network segment or point-to-point
connection, that a host is connected to. These addresses are not routable and like private
addresses cannot be the source or destination of packets traversing the Internet.

When the link-local IPv4 address block was reserved, no standards existed for mechanisms of
address autoconfiguration. Filling the void, Microsoft created an implementation called
Automatic Private IP Addressing (APIPA). Due to Microsoft's market power, APIPA has been
deployed on millions of machines and has, thus, become a de facto standard in the industry.
Many years later, the IETF defined a formal standard for this functionality, RFC 3927, entitled
Dynamic Configuration of IPv4 Link-Local Addresses.

[edit] Uses of static addressing

Some infrastructure situations have to use static addressing, such as when finding the Domain
Name System host that will translate domain names to IP addresses. Static addresses are also
convenient, but not absolutely necessary, to locate servers inside an enterprise. An address
obtained from a DNS server comes with a time to live, or caching time, after which it should be
looked up to confirm that it has not changed. Even static IP addresses do change as a result of
network administration (RFC 2072).

[edit] Modifications to IP addressing


[edit] IP blocking and firewalls

Main articles: IP blocking and Firewall

Firewalls are common on today's Internet. For increased network security, they control access to
private networks based on the public IP of the client. Whether using a blacklist or a whitelist, the
IP address that is blocked is the perceived public IP address of the client, meaning that if the
client is using a proxy server or NAT, blocking one IP address might block many individual
people.

[edit] IP address translation

Main article: Network Address Translation

Multiple client devices can appear to share IP addresses: either because they are part of a shared
hosting web server environment or because an IPv4 network address translator (NAT) or proxy
server acts as an intermediary agent on behalf of its customers, in which case the real originating
IP addresses might be hidden from the server receiving a request. A common practice is to have a
NAT hide a large number of IP addresses in a private network. Only the "outside" interface(s) of
the NAT need to have Internet-routable addresses[5].

Most commonly, the NAT device maps TCP or UDP port numbers on the outside to individual
private addresses on the inside. Just as a telephone number may have site-specific extensions, the
port numbers are site-specific extensions to an IP address.

In small home networks, NAT functions usually take place in a residential gateway device,
typically one marketed as a "router". In this scenario, the computers connected to the router
would have 'private' IP addresses and the router would have a 'public' address to communicate
with the Internet. This type of router allows several computers to share one public IP address.

[edit] See also


• Classful network
• Geolocation
• Geolocation software
• Hierarchical name space
• hostname: a human-readable alpha-numeric designation that may map to an
IP address
• Internet
• IP address spoofing
• IP blocking
• IP Multicast
• IP2Location, a geolocation system using IP addresses.
• List of assigned /8 IP address blocks
• MAC address
• Ping
• Private network
• Provider Aggregatable Address Space
• Provider Independent Address Space
• Regional Internet Registry
o African Network Information Center
o American Registry for Internet Numbers
o Asia-Pacific Network Information Centre
o Latin American and Caribbean Internet Addresses Registry
o RIPE Network Coordination Centre
• Subnet address
• Virtual IP address

[edit] References
• Comer, Douglas (2000). Internetworking with TCP/IP:Principles, Protocols, and
Architectures --4th ed.. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-
018380-6. http://www.cs.purdue.edu/homes/dec/netbooks.html.
1. ^ a b c RFC 760, "DOD Standard Internet Protocol". DARPA Request For
Comments. Internet Engineering Task Force. January 1980.
http://www.ietf.org/rfc/rfc0760.txt. Retrieved on 2008-07-08.
2. ^ a b RFC 791, "Internet Protocol". DARPA Request For Comments. Internet
Engineering Task Force. September 1981. 6. http://www.ietf.org/rfc/rfc791.txt.
Retrieved on 2008-07-08.
3. ^ a b RFC 1883, "Internet Protocol, Version 6 (IPv6) Specification". Request
For Comments. The Internet Society. December 1995.
http://www.ietf.org/rfc/rfc1883.txt. Retrieved on 2008-07-08.
4. ^ a b RFC 2460, Internet Protocol, Version 6 (IPv6) Specification, S. Deering, R.
Hinden, The Internet Society (December 1998)
5. ^ Comer pg.394

[edit] External links


• Articles on CircleID about IP addressing
• How to get a static IP address - clear instructions for all the major platforms
• IP at the Open Directory Project — including sites for identifying one's IP
address

• Understanding IP Addressing: Everything You Ever Wanted To Know

[edit] RFCs

• IPv4 addresses: RFC 791, RFC 1519, RFC 1918, RFC 2071, RFC 2072
• IPv6 addresses: RFC 4291, RFC 4192

Retrieved from "http://en.wikipedia.org/wiki/IP_address"


Categories: Network addressing | Internet Protocol


Protocol
From Wikipedia, the free encyclopedia

Look up protocol in Wiktionary, the free dictionary.

Contents
[hide]

• 1 Standards in information automation


• 2 Procedures for human behavior
• 3 Other

• 4 See also

Protocol may also refer to:

[edit] Standards in information automation

• Communications protocol
• Protocol (computing), a set of instructions for transferring data
o Internet Protocol
• Protocol (object-oriented programming)
• Cryptographic protocol

[edit] Procedures for human behavior


• Protocol (diplomacy)
• Protocol (politics), a formal agreement between nation states (cf: treaty).
• Protocol, a.k.a. etiquette
• Clinical protocol, a.k.a. guideline (medical)
• Research methods:
o Protocol (natural sciences)
o Clinical trial protocol

[edit] Other

• Protocol (film)
• Protocol (band), British

[edit] See also


• List of network protocols
• The Protocols of the Elders of Zion

This disambiguation page lists articles associated with the same title. If an internal link led you
here, you may wish to change the link to point directly to the intended article.
Retrieved from "http://en.wikipedia.org/wiki/Protocol"
Categories: Disambiguation pages

Internet Protocol Suite


From Wikipedia, the free encyclopedia


The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications
protocols used for the Internet and other similar networks. It is named from two of the most
important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP),
which were the first two networking protocols defined in this standard. Today's IP networking
represents a synthesis of several developments that began to evolve in the 1960s and 1970s,
namely the Internet and LANs (Local Area Networks), which emerged in the mid- to late-1980s,
together with the advent of the World Wide Web in the early 1990s.

The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each
layer solves a set of problems involving the transmission of data, and provides a well-defined
service to the upper layer protocols based on using services from some lower layers. Upper
layers are logically closer to the user and deal with more abstract data, relying on lower layer
protocols to translate data into forms that can eventually be physically transmitted.

The TCP/IP model consists of four layers (RFC 1122).[1][2] From lowest to highest, these are the
Link Layer, the Internet Layer, the Transport Layer, and the Application Layer.

The Internet Protocol Suite

Application Layer: BGP · DHCP · DNS · FTP · GTP · HTTP · IMAP · IRC · Megaco · MGCP · NNTP ·
NTP · POP · RIP · RPC · RTP · RTSP · SDP · SIP · SMTP · SNMP · SOAP · SSH · Telnet · TLS/SSL · XMPP

Transport Layer: TCP · UDP · DCCP · SCTP · RSVP · ECN

Internet Layer: IP (IPv4, IPv6) · ICMP · ICMPv6 · IGMP · IPsec

Link Layer: ARP · RARP · NDP · OSPF · Tunnels (L2TP) · PPP · Media Access Control (Ethernet,
MPLS, DSL, ISDN, FDDI) · Device Drivers

Contents
[hide]

• 1 History
• 2 Layers in the Internet Protocol Suite
o 2.1 The concept of layers
o 2.2 Layer names and number of layers in the literature
• 3 Implementations
• 4 See also
• 5 References
• 6 Further reading

• 7 External links

[edit] History
The Internet Protocol Suite resulted from work done by Defense Advanced Research Projects
Agency (DARPA) in the early 1970s. After building the pioneering ARPANET in 1969, DARPA
started work on a number of other data transmission technologies. In 1972, Robert E. Kahn was
hired at the DARPA Information Processing Technology Office, where he worked on both
satellite packet networks and ground-based radio packet networks, and recognized the value of
being able to communicate across them. In the spring of 1973, Vinton Cerf, the developer of the
existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-
architecture interconnection models with the goal of designing the next protocol generation for
the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the
differences between network protocols were hidden by using a common internetwork protocol,
and, instead of the network being responsible for reliability, as in the ARPANET, the hosts
became responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the
CYCLADES network, with important influences on this design.

With the role of the network reduced to the bare minimum, it became possible to join almost any
networks together, no matter what their characteristics were, thereby solving Kahn's initial
problem. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work,
will run over "two tin cans and a string."

A computer called a router (a name changed from gateway to avoid confusion with other types
of gateways) is provided with an interface to each network, and forwards packets back and forth
between them. Requirements for routers are defined in (Request for Comments 1812).[3]

The idea was worked out in more detailed form by Cerf's networking research group at Stanford
in the 1973–74 period, resulting in the first TCP specification (Request for Comments 675) [4].
(The early networking work at Xerox PARC, which produced the PARC Universal Packet
protocol suite, much of which existed around the same period of time, was also a significant
technical influence; people moved between the two.)

DARPA then contracted with BBN Technologies, Stanford University, and the University
College London to develop operational versions of the protocol on different hardware platforms.
Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of
1978, and then stability with TCP/IP v4 — the standard protocol still in use on the Internet today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and
University College London (UCL). In November, 1977, a three-network TCP/IP test was
conducted between sites in the US, UK, and Norway. Several other TCP/IP prototypes were
developed at multiple research centres between 1978 and 1983. The migration of the ARPANET
to TCP/IP was officially completed on January 1, 1983 when the new protocols were
permanently activated.[5]

In March 1982, the US Department of Defense declared TCP/IP as the standard for all military
computer networking.[6] In 1985, the Internet Architecture Board held a three day workshop on
TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the
protocol and leading to its increasing commercial use.

Kahn and Cerf were honored with the Presidential Medal of Freedom on November 9, 2005 for
their contribution to American culture.

[edit] Layers in the Internet Protocol Suite


[edit] The concept of layers

The TCP/IP suite uses encapsulation to provide abstraction of protocols and services. Such
encapsulation usually is aligned with the division of the protocol suite into layers of general
functionality. In general, an application (the highest level of the model) uses a set of protocols to
send its data down the layers, being further encapsulated at each level.

This may be illustrated by an example network scenario, in which two Internet host computers
communicate across local network boundaries constituted by their internetworking gateways
(routers).

[Figures: the TCP/IP stack operating on two hosts connected via two routers, showing the
corresponding layers used at each hop; and encapsulation of application data descending through
the protocol stack.]

The functional groups of protocols and methods are the Application Layer, the Transport Layer,
the Internet Layer, and the Link Layer (RFC 1122). This model was not intended to be a rigid
reference model into which new protocols have to fit in order to be accepted as a standard.

The following table provides some examples of the protocols grouped in their respective layers.

Application: DNS, TFTP, TLS/SSL, FTP, Gopher, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SMPP,
SNMP, SSH, Telnet, Echo, RTP, PNRP, rlogin, ENRP. (Routing protocols like BGP and RIP, which run
over TCP/UDP, may also be considered part of the Internet Layer.)

Transport: TCP, UDP, DCCP, SCTP, IL, RUDP, RSVP

Internet: IP (IPv4, IPv6), ICMP, IGMP, and ICMPv6. (OSPF for IPv4 was initially considered an
IP-layer protocol since it runs per IP subnet, but it has been placed in the Link Layer since
RFC 2740.)

Link: ARP, RARP, OSPF (IPv4/IPv6), IS-IS, NDP

[edit] Layer names and number of layers in the literature


The following table shows the layer names and the number of layers of networking models
presented in RFCs and textbooks in widespread use in today's university computer networking
courses.

Kurose[7], Forouzan[8] - five layers ("five-layer Internet model" or "TCP/IP protocol suite"):
Application, Transport, Network, Data link, Physical

Comer[9], Kozierok[10] - four + one layers ("TCP/IP 5-layer reference model"):
Application, Transport, Internet, Network interface, (Hardware)

Stallings[11] - five layers ("TCP/IP model"):
Application, Host-to-host or transport, Internet, Network access, Physical

Tanenbaum[12] - four layers ("TCP/IP reference model"):
Application, Transport, Internet, Host-to-network

RFC 1122 - four layers ("Internet model"):
Application, Transport, Internet, Link

Cisco Academy[13] - four layers ("Internet model"):
Application, Transport, Internetwork, Network interface

Arpanet Reference Model 1982 (RFC 871) - three layers:
Application/Process, Host-to-host, Network interface

These textbooks are secondary sources that may contravene the intent of RFC 1122 and other
IETF primary sources[14].

Different authors have interpreted the RFCs differently regarding the question whether the Link
Layer (and the TCP/IP model) covers Physical Layer issues, or if a hardware layer is assumed
below the Link Layer. Some authors have tried to use other names for the Link Layer, such as
network interface layer, in order to avoid confusion with the Data Link Layer of the seven-layer
OSI model. Others have attempted to map the Internet Protocol model onto the OSI Model. The
mapping often results in a model with five layers where the Link Layer is split into a Data Link
Layer on top of a Physical Layer. In literature with a bottom-up approach to Internet
communication[8][9][11], in which hardware issues are emphasized, those are often discussed in
terms of Physical Layer and Data Link Layer.

The Internet Layer is usually directly mapped into the OSI Model's Network Layer, a more
general concept of network functionality. The Transport Layer of the TCP/IP model, sometimes
also described as the host-to-host layer, is mapped to OSI Layer 4 (Transport Layer), sometimes
also including aspects of OSI Layer 5 (Session Layer) functionality. OSI's Application Layer,
Presentation Layer, and the remaining functionality of the Session Layer are collapsed into
TCP/IP's Application Layer. The argument is that these OSI layers do not usually exist as
separate processes and protocols in Internet applications.[citation needed]

However, the Internet protocol stack has never been altered by the Internet Engineering Task
Force from the four layers defined in RFC 1122. The IETF makes no effort to follow the OSI
model although RFCs sometimes refer to it. The IETF has repeatedly stated[citation needed] that
Internet protocol and architecture development is not intended to be OSI-compliant.

RFC 3439, addressing Internet architecture, contains a section entitled: "Layering Considered
Harmful".[14]

[edit] Implementations
Most operating systems in use today, including all consumer-targeted systems, include a TCP/IP
implementation.

Unique implementations include Lightweight TCP/IP, an open source stack designed for
embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio
systems and personal computers connected via serial lines.

[edit] See also


• List of TCP and UDP port numbers

[edit] References
1. ^ RFC 1122, Requirements for Internet Hosts -- Communication Layers, R. Braden (ed.), October
1989
2. ^ RFC 1123, Requirements for Internet Hosts -- Application and Support, R. Braden (ed.),
October 1989
3. ^ F. Baker (June 1995). "Requirements for IP Routers". http://www.isi.edu/in-notes/rfc1812.txt.
4. ^ V.Cerf et al. (December 1974). "Specification of Internet Transmission Control Protocol".
http://www.ietf.org/rfc/rfc0675.txt.
5. ^ Internet History
6. ^ Ronda Hauben. "From the ARPANET to the Internet". TCP Digest (UUCP).
http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt. Retrieved on 2007-07-05.
7. ^ James F. Kurose, Keith W. Ross, Computer Networking: A Top-Down Approach, 2008, ISBN
0321497708
8. ^ a b Behrouz A. Forouzan, Data Communications and Networking
9. ^ a b Douglas E. Comer, Internetworking with TCP/IP: Principles, Protocols and Architecture,
Pearson Prentice Hall 2005, ISBN 0131876716
10.^ Charles M. Kozierok, "The TCP/IP Guide", No Starch Press 2005
11.^ a b William Stallings, Data and Computer Communications, Prentice Hall 2006, ISBN
0132433109
12.^ Andrew S. Tanenbaum, Computer Networks, Prentice Hall 2002, ISBN 0130661023
13.^ Mark Dye, Mark A. Dye, Wendell, Network Fundamentals: CCNA Exploration Companion
Guide, 2007, ISBN 1587132087
14.^ a b R. Bush; D. Meyer (December 2002), Some Internet Architectural Guidelines and
Philosophy, Internet Engineering Task Force, http://www.isi.edu/in-notes/rfc3439.txt, retrieved on
2007-11-20

[edit] Further reading


• Douglas E. Comer. Internetworking with TCP/IP - Principles, Protocols and Architecture.
ISBN 86-7991-142-9
• Joseph G. Davies and Thomas F. Lee. Microsoft Windows Server 2003 TCP/IP Protocols
and Services. ISBN 0-7356-1291-9
• Forouzan, Behrouz A. (2003). TCP/IP Protocol Suite (2nd ed.). McGraw-Hill. ISBN 0-
07-246060-1.
• Craig Hunt TCP/IP Network Administration. O'Reilly (1998) ISBN 1-56592-322-7
• Maufer, Thomas A. (1999). IP Fundamentals. Prentice Hall. ISBN 0-13-975483-0.
• Ian McLean. Windows(R) 2000 TCP/IP Black Book. ISBN 1-57610-687-X
• Ajit Mungale Pro .NET 1.1 Network Programming. ISBN 1-59059-345-6
• W. Richard Stevens. TCP/IP Illustrated, Volume 1: The Protocols. ISBN 0-201-63346-9
• W. Richard Stevens and Gary R. Wright. TCP/IP Illustrated, Volume 2: The
Implementation. ISBN 0-201-63354-X
• W. Richard Stevens. TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP,
and the UNIX Domain Protocols. ISBN 0-201-63495-3
• Andrew S. Tanenbaum. Computer Networks. ISBN 0-13-066102-3
• David D. Clark, "The Design Philosophy of the DARPA Internet Protocols", Computer
Communications Review 18:4, August 1988, pp. 106–114

[edit] External links


• Internet History -- Pages on Robert Kahn, Vinton Cerf, and TCP/IP (reviewed by Cerf
and Kahn).
• RFC 675 - Specification of Internet Transmission Control Program, December 1974
Version
• TCP/IP State Transition Diagram (PDF)
• RFC 1180 A TCP/IP Tutorial - from the Internet Engineering Task Force (January 1991)
• TCP/IP FAQ
• The TCP/IP Guide - A comprehensive look at the protocols and the procedures/processes
involved
• A Study of the ARPANET TCP/IP Digest
• TCP/IP Sequence Diagrams
• The Internet in Practice
• TCP/IP - Directory & Informational Resource
• Daryl's TCP/IP Primer - Intro to TCP/IP LAN administration, conversational style
• Introduction to TCP/IP
• TCP/IP commands from command prompt
• cIPS — Robust TCP/IP stack for embedded devices without an Operating System

Retrieved from "http://en.wikipedia.org/wiki/Internet_Protocol_Suite"


Categories: Internet protocols | TCP/IP | Internet history


Web mapping
From Wikipedia, the free encyclopedia


"Web mapping is the process of designing, implementing, generating and delivering maps on
the World Wide Web. While web mapping primarily deals with technological issues, web
cartography additionally studies theoretic aspects: the use of web maps, the evaluation and
optimization of techniques and workflows, the usability of web maps, social aspects, and more.
Web GIS is similar to web mapping but with an emphasis on analysis, processing of project
specific geodata and exploratory aspects. Often the terms web GIS and web mapping are used
synonymously, even if they don't mean exactly the same. In fact, the border between web maps
and web GIS is blurry. Web maps are often a presentation media in web GIS and web maps are
increasingly gaining analytical capabilities. A special case of web maps are mobile maps,
displayed on mobile computing devices, such as mobile phones, smart phones, PDAs, GPS and
other devices. If the maps on these devices are displayed by a mobile web browser or web user
agent, they can be regarded as mobile web maps. If the mobile web maps also display context
and location sensitive information, such as points of interest, the term Location-based services
is frequently used."[1]

"The use of the web as a dissemination medium for maps can be regarded as a major
advancement in cartography and opens many new opportunities, such as realtime maps, cheaper
dissemination, more frequent and cheaper updates of data and software, personalized map
content, distributed data sources and sharing of geographic information. It also implicates many
challenges due to technical restrictions (low display resolution and limited bandwidth, in
particular with mobile computing devices, many of which are physically small, and use slow
wireless Internet connections), copyright[2] and security issues, reliability issues and technical
complexity. While the first web maps were primarily static, due to technical restrictions, today's
web maps can be fully interactive and integrate multiple media. This means that both web
mapping and web cartography also have to deal with interactivity, usability and multimedia
issues."[3]

A more general term is neogeography.

Contents
[hide]

• 1 Development and implementation


• 2 Types of web maps
o 2.1 Static web maps
o 2.2 Dynamically created web maps
o 2.3 Distributed web maps
o 2.4 Animated web maps
o 2.5 Realtime web maps
o 2.6 Personalized web maps
o 2.7 Customisable web maps
o 2.8 Interactive web maps
o 2.9 Analytic web maps
o 2.10 Online atlases
o 2.11 Collaborative web maps
• 3 Advantages of web maps
• 4 Disadvantages of web maps and problematic issues
• 5 History of web mapping
• 6 Web mapping technologies
o 6.1 Server side technologies
o 6.2 Client side technologies
• 7 See also
• 8 Notes and references
• 9 Further reading

• 10 External links

[edit] Development and implementation


The advent of web mapping can be regarded as a major new trend in cartography. Previously,
cartography was restricted to a few companies, institutes and mapping agencies, requiring
expensive and complex hardware and software as well as skilled cartographers and geomatics
engineers. With web mapping, freely available mapping technologies and geodata potentially
allow every skilled person to produce web maps, with expensive geodata and technical
complexity (data harmonization, missing standards) being two of the remaining barriers
preventing web mapping from fully going mainstream. The cheap and easy transfer of geodata
across the Internet allows the integration of distributed data sources, opening opportunities that
go beyond the possibilities of disjoint data storage. Anyone with minimal know-how and
infrastructure can become a geodata provider. These facts can be regarded as both an advantage
and a disadvantage. While web mapping allows everyone to produce maps and considerably enlarges the
audience, it also puts geodata in the hands of untrained people who may violate
cartographic and geographic principles and introduce flaws during the preparation, analysis and
presentation of geographic and cartographic data. Educating the general public about geographic
analysis and cartographic methods and principles should therefore be a priority for the
cartography community.[neutrality disputed]

[edit] Types of web maps


A first classification of web maps was made by Kraak.[4] He distinguished static and
dynamic web maps, and further distinguished interactive and view-only web maps. However,
in the light of the increased number of different web map types today, this classification needs
some revision. Today there are additional possibilities regarding distributed data sources,
collaborative maps, personalized maps, etc.

The following sections cover potential types of web maps. While they are arranged, in principle, in an
order of increasing sophistication, the allocation within the order is not strict. Many maps fall
into more than one category, and it is not always clear that a personalized web map is more
complex or sophisticated than an interactive web map. Individual web map types are discussed
below.
[edit] Static web maps

A USGS DRG – a static map

Static web maps are view-only, with no animation or interactivity. They are created only once,
often manually, and are updated infrequently. Typical graphics formats for static web maps are PNG,
JPEG, GIF, or TIFF (e.g., DRG) for raster files and SVG, PDF or SWF for vector files. Often, these
maps are scanned paper maps that were not designed as screen maps. Paper maps have a
much higher resolution and information density than typical computer displays of the same
physical size, and might be unreadable when displayed on screens at the wrong resolution.[4]

[edit] Dynamically created web maps

These maps are created on demand each time the user reloads the web page, often from dynamic
data sources such as databases. The web server generates the map using a web map server or
self-written software.

[edit] Distributed web maps

These maps are created from distributed data sources. The WMS protocol offers a standardized
method to access maps on other servers. WMS servers can collect these different sources,
reproject the map layers, if necessary, and send them back as a combined image containing all
requested map layers. One server may offer a topographic base map, while other servers may
offer thematic layers.
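
For illustration, a client obtains such a combined image with a WMS GetMap request whose parameters name the requested layers, styles, spatial reference system, bounding box, image size and format. The sketch below builds a WMS 1.1.1 request with Python's standard library; the endpoint URL and layer names are hypothetical placeholders, not a real service.

# Minimal sketch of a WMS 1.1.1 GetMap request. The endpoint and the layer
# names are hypothetical placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen

WMS_BASE = "http://example.org/wms"          # hypothetical WMS endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "topo_base,rivers",            # layer names depend on the server
    "STYLES": "",                            # default styles
    "SRS": "EPSG:4326",                      # spatial reference system
    "BBOX": "5.9,45.8,10.5,47.8",            # minx,miny,maxx,maxy
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
}

url = WMS_BASE + "?" + urlencode(params)
with urlopen(url) as response:               # the server returns one combined image
    with open("map.png", "wb") as out:
        out.write(response.read())
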
[edit] Animated web maps

Animated maps show changes in the map over time by animating one of the graphical or
temporal variables. Various data and multimedia formats and technologies allow the display of
animated web maps: SVG, Adobe Flash, Java, QuickTime, etc., with varying degrees of
interaction. Examples of animated web maps are weather maps and maps displaying dynamic
natural or other phenomena (such as water currents, wind patterns, traffic flow, trade flow,
communication patterns, etc.).

[edit] Realtime web maps

Realtime maps show the situation of a phenomenon in close to realtime (only a few seconds or
minutes delay). Data is collected by sensors and the maps are generated or updated at regular
intervals or immediately on demand. Examples are weather maps, traffic maps or vehicle
monitoring systems.

[edit] Personalized web maps

Personalized web maps allow the map user to apply their own data filtering, content selection,
and personal styling and map symbolization. The OGC (Open Geospatial
Consortium) provides the SLD standard (Styled Layer Descriptor), which may be sent to a WMS
server to apply individual styles. This requires that the content and data structure of
the remote WMS server be properly documented.

[edit] Customisable web maps

Web maps in this category are usually more complex web mapping systems that offer APIs for
reuse in other people's web pages and products. Examples of such systems with an API for reuse
are the OpenLayers framework, Yahoo! Maps and Google Maps.

[edit] Interactive web maps

Interactivity is one of the major advantages of screen based maps and web maps. It helps to
compensate for the disadvantages of screen and web maps. Interactivity helps to explore maps,
change map parameters, navigate and interact with the map, reveal additional information, link to
other resources, and much more. Technically, it is achieved through the combination of events,
scripting and DOM manipulations. See section on Client Side Technologies.

[edit] Analytic web maps

These web maps offer GIS analysis, either with geodata provided, or with geodata uploaded by
the map user. As already mentioned, the borderline between analytic web maps and web GIS is
blurry. Often, parts of the analysis are carried out by a serverside GIS and the client displays the
result of the analysis. As web clients gain more and more capabilities, this task sharing may
gradually shift.
[edit] Online atlases

Atlas projects often went through a renaissance when they made the transition to the web. In the
past, atlas projects often suffered from expensive map production, small
circulation and a limited audience. Updates were expensive to produce and took a long time to
reach the public. Many atlas projects, after moving to the web, can now reach a wider
audience, produce maps more cheaply, provide a larger number of maps and map types, and integrate with
and benefit from other web resources. Some atlases even ceased their printed editions after going
online, sometimes offering print-on-demand features from the online edition. Some atlases
(primarily from North America) also offer raw data downloads of the underlying geospatial data
sources.

[edit] Collaborative web maps

Main article: Collaborative mapping

Collaborative maps are still new, immature and complex to implement, but show a lot of
potential. The method parallels the Wikipedia project where various people collaborate to create
and improve maps on the web. Technically, an application allowing simultaneous editing across
the web would have to ensure that geometric features being edited by one person are locked, so
they can't be edited by other persons at the same time. Also, a minimal quality check would have
to be made, before data goes public. Some collaborative map projects:

• OpenStreetMap
• WikiMapia
• meta:Maps – survey of Wikimedia map proposals on Wikipedia:Meta

[edit] Advantages of web maps

A surface weather analysis for the United States on October 21, 2006.
• Web maps can easily deliver up to date information. If maps are generated automatically
from databases, they can display information in almost realtime. They don't need to be
printed, mastered and distributed. Examples:
o A map displaying election results, as soon as the election results become
available.
o A map displaying the traffic situation near realtime by using traffic data collected
by sensor networks.
o A map showing the current locations of mass transit vehicles such as buses or
trains, allowing patrons to minimize their waiting time at stops or stations, or be
aware of delays in service.
o Weather maps, such as NEXRAD.
• Software and hardware infrastructure for web maps is cheap. Web server hardware is
cheaply available and many open source tools exist for producing web maps.
• Product updates can easily be distributed. Because web maps distribute both logic and
data with each request or loading, product updates can happen every time the web user
reloads the application. In traditional cartography, when dealing with printed maps or
interactive maps distributed on offline media (CD, DVD, etc.), a map update caused
serious efforts, triggering a reprint or remastering as well as a redistribution of the media.
With web maps, data and product updates are easier, cheaper, and faster, and can occur
more often.
• They work across browsers and operating systems. If web maps are implemented based
on open standards, the underlying operating system and browser do not matter.
• Web maps can combine distributed data sources. Using open standards and documented
APIs one can integrate (mash up) different data sources, if the projection system, map
scale and data quality match. The use of centralized data sources removes the burden for
individual organizations to maintain copies of the same data sets. The down side is that
one has to rely on and trust the external data sources.
• Web maps allow for personalization. By using user profiles, personal filters and personal
styling and symbolization, users can configure and design their own maps, if the web
mapping system supports personalization. Accessibility issues can be treated in the same
way. If users can store their favourite colors and patterns they can avoid color
combinations they can't easily distinguish (e.g. due to color blindness).
• Web maps enable collaborative mapping. Similar to the Wikipedia project, web mapping
technologies, such as DHTML/Ajax, SVG, Java, Adobe Flash, etc. enable distributed
data acquisition and collaborative efforts. Examples for such projects are the
OpenStreetMap project or the Google Earth community. As with other open projects,
quality assurance is very important, however!
• Web maps support hyperlinking to other information on the web. Just like any other web
page or a wiki, web maps can act like an index to other information on the web. Any
sensitive area in a map, a label text, etc. can provide hyperlinks to additional information.
As an example a map showing public transport options can directly link to the
corresponding section in the online train time table.
• It is easy to integrate multimedia in and with web maps. Current web browsers support
the playback of video, audio and animation (SVG, SWF, Quicktime, and other
multimedia frameworks).
[edit] Disadvantages of web maps and problematic issues
• Reliability issues – the reliability of the internet and web server infrastructure is not yet
good enough. Especially if a web map relies on external, distributed data sources, the original
author often cannot guarantee the availability of the information.
• Geodata is expensive – Unlike in the US, where geodata collected by governmental
institutions is usually available for free or cheap, geodata is usually very expensive in
Europe or other parts of the world.
• Bandwidth issues – Web maps usually need a relatively high bandwidth.
• Limited screen space – Like other screen-based maps, web maps have the problem
of limited screen space. This is a particular problem for mobile web maps and location-based
services, where maps have to be displayed on very small screens with resolutions as
low as 100×100 pixels. Technological advances may help to overcome these
limitations.
• Quality and accuracy issues – Many web maps are of poor quality, in symbolization,
content and data accuracy.
• Complex to develop – Despite the increasing availability of free and commercial tools to
create web mapping and web GIS applications, it is still a complex task to create
interactive web maps. Many technologies, modules, services and data sources have to be
mastered and integrated.
• Immature development tools – Compared to the development of standalone applications
with integrated development tools, the development and debugging environments of a
conglomerate of different web technologies is still awkward and uncomfortable.
• Copyright issues – Many people are still reluctant to publish geodata, especially given that
geodata is expensive in some parts of the world. They fear copyright infringement by
other people using their data without proper requests for permission.
• Privacy issues – With detailed information available and the combination of distributed
data sources, it is possible to find out and combine a lot of private and personal
information about individuals. The properties and estates of individuals are now
accessible to anyone, throughout the world, through high-resolution aerial and satellite
images.

[edit] History of web mapping

Event types

• Cartography-related events
• Technical events directly related to web mapping
• General technical events

• Events relating to Web standards


This section contains some of the milestones of web mapping, online mapping services and
atlases. Because web mapping depends on enabling technologies of the web, this section also
includes a few milestones of the web.[5]

• 1989–09: Birth of the WWW, WWW invented at CERN for the exchange of research
documents.[6]
• 1990–12: First Web Browser and Web Server, Tim Berners-Lee wrote first web browser[7]
and web server.
• 1991–04: HTTP 0.9[8] protocol, Initial design of the HTTP protocol for communication
between browser and server.
• 1991–06: ViolaWWW 0.8 Browser, The first popular web browser. Written for X11 on
Unix.
• 1991–08: WWW project announced in public newsgroup, This is regarded as the debut
date of the Web. Announced in newsgroup alt.hypertext.
• 1992–06: HTTP 1.0[8] protocol, Version 1.0 of the HTTP protocol. Introduces the POST
method and persistent connections.
• 1993–04: CERN announced web as free, CERN announced that access to the web will be
free for all.[9] The web gained critical mass.
• 1993–06: HTML 1.0.[10] The first version of HTML,[11] published by T. Berners-Lee and
Dan Connolly.
• 1993–07: Xerox PARC Map Viewer, The first mapserver based on CGI/Perl, allowed
reprojection styling and definition of map extent.
• 1994–06: The National Atlas of Canada, The first version of the National Atlas of
Canada was released. Can be regarded as the first online atlas.
• 1994–10: Netscape Browser 0.9 (Mosaic), The first version of the highly popular browser
Netscape Navigator.
• 1995–03: Java 1.0, The first public version of Java.
• 1995–11: HTML 2.0,[10] Introduced forms, file upload, internationalization and client-side
image maps.
• 1995–12: Javascript 1.0, Introduced first script based interactivity.
• 1995: MapGuide, First introduced as Argus MapGuide.
• 1996–01: JDK 1.0, First version of the Sun JDK.
• 1996–02: Mapquest, The first popular online Address Matching and Routing Service
with mapping output.
• 1996–06: MultiMap, The UK-based MultiMap website launched offering online
mapping, routing and location based services. Grew into one of the most popular UK web
sites.
• 1996–11: Geomedia WebMap 1.0, First version of Geomedia WebMap, already supports
vector graphics through the use of ActiveCGM.[12]
• 1996-fall: MapGuide, Autodesk acquired Argus Technologies and introduced Autodesk
MapGuide 2.0.
• 1996–12: Macromedia Flash 1.0, First version of the Macromedia Flash plugin.
• 1997–01: HTML 3.2,[10] Introduced tables, applets, script elements, multimedia elements,
flowtext around images, etc.

National Atlas of the United States logo


• 1997–03:Norwegian company Mapnet launches application for www.epi.no with active
POI layer for real estate listings.
• 1997–06: US Online National Atlas Initiative, The USGS received the mandate to
coordinate and create the online National Atlas of the United States of America [2].
• 1997–07: UMN MapServer 1.0, Developed as Part of the NASA ForNet Project. Grew
out of the need to deliver remote sensing data across the web for foresters.
• 1997–12: HTML 4.0,[10] Introduced styling with CSS, absolute and relative positioning of
elements, frames, object element, etc.
• 1998–06: Terraserver USA, A Web Map Service serving aerial images (mainly b+w) and
USGS DRGs was released. One of the first popular WMS. This service is a joint effort of
USGS, Microsoft and HP.
• 1998–07: UMN MapServer 2.0, Added reprojection support (PROJ.4).
• 1998–08: MapObjects Internet Map Server, ESRI's entry into the web mapping
business.
• 1999–03: HTTP 1.1[8] protocol, Version 1.1 of the HTTP protocol. Introduces the request
pipelining for multiple connections between server and client. This version is still in use
as of 2007.
• 1999–08: National Atlas of Canada, 6th edition, This new version was launched at the
ICA 1999 conference in Ottawa. Introduced many new features and topics. Is being
improved gradually, since then, and kept up-to-date with technical advancements.
• 2000–02: ArcIMS 3.0, The first public release of ESRI's ArcIMS.
• 2000–06: ESRI Geography Network, ESRI founded Geography Network to distribute
data and web map services.
• 2000–06: UMN MapServer 3.0, Developed as part of the NASA TerraSIP Project. This
is also the first public, open source release of UMN Mapserver. Added raster support and
support for TrueType fonts (FreeType).
• 2000–08: Flash Player 5, This introduced ActionScript 1.0 (ECMAScript compatible).
• 2001–06: MapScript [3] 1.0 for UMN MapServer, Adds a lot of flexibility to UMN
MapServer solutions.
• 2001–09: SVG 1.0[13] W3C Recommendation, SVG (Scalable Vector Graphics) 1.0
became a W3C Recommendation.
• 2001–09: Tirolatlas, A highly interactive online atlas, the first to be based on the SVG
standard.
• 2002–06: UMN MapServer 3.5, Added support for PostGIS and ArcSDE. Version 3.6
adds initial OGC WMS support.
• 2002–07: ArcIMS 4.0, Version 4 of the ArcIMS web map server.
• 2003–01: SVG 1.1[13] W3C Recommendation, SVG 1.1 became a W3C
Recommendation. This introduced the mobile profiles SVG Tiny[13] and SVG Basic.[13]

Screenshot from NASA World Wind

• 2003–06: NASA World Wind, NASA World Wind Released. An open virtual globe that
loads data from distributed resources across the internet. Terrain and buildings can be
viewed 3 dimensionally. The (XML based) markup language allows users to integrate
their own personal content. This virtual globe needs special software and doesn't run in a
web browser.
• 2003–07: UMN MapServer 4.0, Adds 24bit raster output support and support for PDF
and SWF.
• 2003–09: Flash Player 7, This introduced ActionScript 2.0 (ECMAScript 2.0 compatible
(improved object orientation)). Also initial Video Playback support.
• 2004-07: OpenStreetMap was founded by Steve Coast. OSM is a web based
collaborative project to create a world map under a free license.
• 2005–01: Nikolas Schiller creates the interactive "Inaugural Map"[14] of downtown
Washington, DC
• 2005–02: Google Maps, The first version of Google Maps. Based on raster tiles
organized in a quad tree scheme, with data loading done via XMLHttpRequests. This
mapping application became highly popular on the web, also because it allowed other
people to integrate Google Maps services into their own websites.
• 2005–04: UMN MapServer 4.6, Adds support for SVG.

• 2005–06: Google Earth, The first version of Google Earth was released building on the
virtual globe metaphor. Terrain and buildings can be viewed 3 dimensionally. The KML
(XML based) markup language allows users to integrate their own personal content. This
virtual globe needs special software and doesn't run in a web browser.
• 2005–11: Firefox 1.5, First Firefox release with native SVG support. Supports Scripting
but no animation.
• 2006-05: Wikimapia Launched
• 2006–06: Opera 9, Opera releases version 9 with extensive SVG support (including
scripting and animation).
• 2006–08: SVG 1.2[13] Mobile Candidate Recommendation, This SVG Mobile Profile
introduces improved multimedia support and many features required to build online Rich
Internet Applications.

[edit] Web mapping technologies


A wide range of technologies can be used to implement web mapping projects; almost any
programming environment, programming language and server-side framework can be used. In any case,
both server-side and client-side technologies have to be used. The following is a list of potential
and popular server-side and client-side technologies utilized for web mapping.

[edit] Server side technologies

• Web server – The web server is responsible for handling HTTP requests from web browsers
and other user agents. In the simplest case it serves static files, such as HTML pages or
static image files. Web servers also handle authentication, content negotiation, server-side
includes and URL rewriting, and forward requests to dynamic resources, such as CGI
applications or server-side scripting languages. The functionality of a web server can
usually be enhanced using modules or extensions. The most popular web server is
Apache, followed by Microsoft Internet Information Server and others.
o CGI (common gateway interface) applications are executables running on the
webserver under the environment and user permissions of the webserver user.
They may be written in any programming language (compiled) or scripting
language (e.g. perl). A CGI application implements the common gateway
interface protocol, processes the information sent by the client, does whatever the
application should do and sends the result back in a web-readable form to the
client. As an example a web browser may send a request to a CGI application for
getting a web map with a certain map extent, styling and map layer combination.
The result is an image format, e.g. JPEG, PNG or SVG. For better performance one can
use FastCGI, which keeps the application loaded in memory after the web server starts,
eliminating the need to spawn a separate process each time a request is made. (A minimal
CGI sketch appears after this list.)
o Alternatively, one can use scripting languages built into the webserver as a
module, such as PHP, Perl, Python, ASP, Ruby, etc. If built into the web server as
a module, the scripting engine is already loaded and doesn't have to be loaded
each time a request is being made.
• Web application servers are middleware which connects various software components
with the web server and a programming language. As an example, a web application
server can enable the communication between the API of a GIS and the webserver, a
spatial database or other proprietary applications. Typical web application servers are
written in Java, C, C++, C# or scripting languages. Web application servers are also
useful when developing complex realtime web mapping applications or web GIS.
• Spatial databases are usually object relational databases enhanced with geographic data
types, methods and properties. They are necessary whenever a web mapping application
has to deal with dynamic data (that changes frequently) or with huge amounts of
geographic data. Spatial databases allow spatial queries, subselects, reprojections,
geometry manipulations and offer various import and export formats. A popular example
for an open source spatial database is PostGIS. MySQL also implements some spatial
features, although not as mature as PostGIS. Commercial alternatives are Oracle Spatial
or spatial extensions of Microsoft SQL Server and IBM DB2. The OGC Simple Features
for SQL Specification is a standard geometry data model and operator set for spatial
databases. Most spatial databases implement this OGC standard.
• WMS servers are specialized web mapping servers implemented as a CGI application,
Java servlet or other web application. They either work as a standalone web server
or, more commonly, in collaboration with existing web servers or web application
servers. WMS servers can generate maps on request, using parameters such as map layer
order, styling/symbolization, map extent, data format, projection, etc. The OGC
defined the WMS standard, which specifies the map requests and the returned data
formats. Typical image formats for the map result are PNG, JPEG, GIF or SVG. There
are open source WMS Servers such as UMN Mapserver and Mapnik. Commercial
alternatives exist from most commercial GIS vendors, such as ESRI ArcIMS, Intergraph
Geomedia WebMap and others.
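
As a concrete example of the CGI case above, the following is a minimal sketch of a CGI script that reads a map extent from the query string and returns a trivial SVG "map". The parameter names (bbox, width, height) and the output are assumptions made for this illustration; a real map server would render geodata for the requested extent.

#!/usr/bin/env python3
# Minimal CGI sketch: parse a map extent from the query string and return a
# trivial SVG "map". Parameter names are assumptions made for this example.
import cgi

def main():
    form = cgi.FieldStorage()
    bbox = form.getfirst("bbox", "0,0,100,100")        # minx,miny,maxx,maxy
    width = int(form.getfirst("width", "400"))
    height = int(form.getfirst("height", "200"))
    minx, miny, maxx, maxy = (float(v) for v in bbox.split(","))

    # Instead of rendering real geodata, draw only the requested extent so
    # that the script stays self-contained.
    svg = ('<svg xmlns="http://www.w3.org/2000/svg" '
           'width="%d" height="%d" viewBox="%f %f %f %f">'
           '<rect x="%f" y="%f" width="%f" height="%f" fill="none" stroke="black"/>'
           '</svg>' % (width, height, minx, miny, maxx - minx, maxy - miny,
                       minx, miny, maxx - minx, maxy - miny))

    # CGI output: HTTP headers, a blank line, then the response body.
    print("Content-Type: image/svg+xml")
    print()
    print(svg)

if __name__ == "__main__":
    main()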

[edit] Client side technologies


• Web browser – In the simplest setup, only a web browser is required. All modern web
browsers support the display of HTML and raster images (JPEG, PNG and GIF format).
Some solutions require additional plugins (see below).
o ECMAScript support – ECMAScript is the standardized version of JavaScript.
It is necessary to implement client side interaction, refactoring of the DOM of a
webpage and for doing network requests. ECMAScript is currently part of any
modern web browser.
o Events support – Various events are necessary to implement interactive client
side maps. Events can trigger script execution or SMIL operations. We distinguish
between:
 Mouse events (mousedown, mouseup, mouseover, mousemove, click)
 Keyboard events (keydown, keypress, keyup)
 State events (load, unload, abort, error)
 Mutation events (reacts on modifications of the DOM tree, e.g.
DOMNodeInserted)
 SMIL animation events (reacts on different states in SMIL animation,
beginEvent, endEvent, repeatEvent)
 UI events (focusin, focusout, activate)
 SVG specific events (SVGZoom, SVGScroll, SVGResize)
o Network requests – This is necessary to load additional data and content into a
web page. Most modern browsers provide the XMLHttpRequest object which
allows for GET and POST HTTP requests and provides some feedback on the data
loading state. The data received can be processed by ECMAScript and can be
included into the current DOM tree of the web page / web map. SVG user agents
alternatively provide the getURL() and postURL() methods for network requests.
It is recommended to test for the existence of a network request method and
provide alternatives if one method isn't present. As an example, a wrapper
function could handle the network requests and test whether XMLHttpRequests or
getURL() or alternative methods are available and choose the best one available.
These network requests are also known under the term Ajax.
o DOM support – The Document Object Model provides a language independent
API for the manipulation of the document tree of the webpage. It exposes
properties of the individual nodes of the document tree, allows to insert new
nodes, delete nodes, reorder nodes and change existing nodes. DOM support is
included in any modern web browser. DOM support together with scripting is also
known as DHTML or Dynamic HTML. Google Maps and many other web
mapping sites use a combination of DHTML, Ajax, SVG and VML.
o SVG support or SVG image support – SVG is the abbreviation of "Scalable
Vector Graphics" and integrates vector graphics, raster graphics and text. SVG
also supports animation, internationalization, interactivity, scripting and XML
based extension mechanisms. SVG is a huge step forward when it comes to
delivering high quality, interactive maps. At the time of writing (2007–01), SVG
is natively supported in Mozilla/Firefox >version 1.5, Opera >version 9 and the
developer version of Safari/Webkit. Internet Explorer users still need the Adobe
SVG viewer plugin provided by Adobe. For a German book on web mapping with
SVG see[15] and for an English paper on SVG mapping see.[16]
o Java support – some browsers still provide old versions of the Java virtual
machine. An alternative is the use of the Sun Java Plugin. Java is a full featured
programming language that can be used to create very sophisticated and
interactive web maps. The Java2D and Java3D libraries provide 2d and 3d vector
graphics support. The creation of Java-based web maps requires a lot of
programming know-how. Adrian Herzog[17] discusses the use of Java applets for
the presentation of interactive choropleth and cartogram maps.
o Web browser plugins
 Adobe Acrobat – provides vector graphics and high quality printing
support. Allows toggling of map layers, hyper links, multimedia
embedding, some basic interactivity and scripting (ECMAScript).
 Adobe Flash – provides vector graphics, animation and multimedia
support. Allows the creation of sophisticated interactive maps, as with
Java and SVG. Features a programming language (ActionScript) which is
similar to ECMAScript. Supports Audio and Video.
 Apple Quicktime – Adds support for additional image formats, video,
audio and Quicktime VR (Panorama Images). Only available to Mac OS X
and Windows.
 Adobe SVG viewer – provides SVG 1.0 support for web browsers; it is only
required for Internet Explorer users, because that browser does not yet natively
support SVG. The Adobe SVG viewer is no longer being developed and only
fills the gap until Internet Explorer gains native SVG support.
 Sun Java plugin provides support for newer and advanced Java Features.

[edit] See also


Atlas portal

• Comparison of web map services


• List of online map services

[edit] Notes and references


1. ^ Andreas Neumann Encyclopedia of GIS pg 1261
2. ^ See Trap street for examples of how map vendors trap copyright violators, by introducing
deliberate errors into their maps.
3. ^ Andreas Neumann in Encyclopedia of GIS, Springer, 2007. pg 1262
4. ^ a b Kraak, Menno Jan (2001): Settings and needs for web cartography, in: Kraak and Allan
Brown (eds), Web Cartography, Francis and Taylor, New York, p. 3–4. see also webpage [1].
Accessed 2007-01-04.
5. ^ For much more detail, see History of the World Wide Web and related topics under History of
computer hardware.
6. ^ More details are in: History of the World Wide Web#1980–91: Development of the WWW.
7. ^ For a list of early Web browsers, see: List of web browsers#Historically important browsers.
8. ^ a b c For the version history of HTTP, see: HTTP#HTTP versions.
9. ^ For more details on CERN's decision to give away early web technology, see: History of the
World Wide Web#Web organization.
10.^ a b c d For the version history of HTML, see: HTML#Version history of the standard.
11.^ See HTML#History of HTML.
12.^ ActiveCGM is evidently an ActiveX control that displays CGM files. References needed.
13.^ a b c d e See: SVG format#Development history
14.^ David Montgomery (Mar. 14, 2007). "Here Be Dragons" (HTML). News. Washington Post.
http://www.washingtonpost.com/wp-dyn/content/article/2007/03/13/AR2007031301854.html.
Retrieved on 2007-03-14.
15.^ Überschär, Nicole and André M. Winter (2006): Visualisieren von Geodaten mit SVG im
Internet, Band 1: Scalable Vector Graphics – Einführung, clientseitige Interaktionen und
Dynamik, Wichmann Verlag, Heidelberg, ISBN 3-87907-431-3.
16.^ Neumann, Andreas and André M. Winter (2003): Webmapping with Scalable Vector Graphics
(SVG): Delivering the promise of high quality and interactive web maps, in: Peterson, M. (ed.),
Maps and the Internet, Elsevier, p. 197–220.
17.^ Herzog, Adrian (2003):Developing Cartographic Applets for the Internet, in: Peterson, M. (ed.)
Maps and the Internet, Elsevier, p. 117–130.

[edit] Further reading


• Kraak, Menno-Jan and Allan Brown (2001): Web Cartography – Developments and
prospects, Taylor & Francis, New York, ISBN 0-7484-0869-X.
• Mitchell, Tyler (2005): Web Mapping Illustrated, O'Reilly, Sebastopol, 350 pages, ISBN 0-
596-00865-1. This book discusses various open source web mapping projects and
provides hints and tricks as well as examples.
• Peterson, Michael P. (ed.) (2003): Maps and the Internet, Elsevier, ISBN 0-08-044201-3.
• Rambaldi G, Chambers R., McCall M, And Fox J. 2006. Practical ethics for PGIS
practitioners, facilitators, technology intermediaries and researchers. PLA 54:106-113,
IIED, London, UK

[edit] External links


• vMap-Portal (german)
• UMN MapServer documentation and tutorials
• Webmapping with SVG, Postgis and UMN MapServer tutorials

Retrieved from "http://en.wikipedia.org/wiki/Web_mapping"


Categories: Web mapping


Web server
From Wikipedia, the free encyclopedia


The inside and front of a Dell PowerEdge web server

The term web server or webserver can mean one of two things:

1. A computer program that is responsible for accepting HTTP requests from clients (user
agents such as web browsers), and serving them HTTP responses along with optional
data contents, which usually are web pages such as HTML documents and linked objects
(images, etc.).
2. A computer that runs a computer program as described above.

Contents
[hide]

• 1 Common features
• 2 Origin of returned content
• 3 Path translation
• 4 Load limits
o 4.1 Overload causes
o 4.2 Overload symptoms
o 4.3 Anti-overload techniques
• 5 Historical notes
• 6 Market structure
• 7 See also

• 8 External links

[edit] Common features

A standard 19" Rack of servers as seen from the front.

Although web server programs differ in detail, they all share some basic common features.

1. HTTP: every web server program operates by accepting HTTP requests from the client,
and providing an HTTP response to the client. The HTTP response usually consists of an
HTML or XHTML document, but can also be a raw file, an image, or some other type of
document (defined by MIME types). If an error is found in the client's request, or occurs while
trying to serve it, the web server has to send an error response, which may include a
custom HTML or text message to better explain the problem to end users (see the sketch
after this list).
2. Logging: usually web servers also have the capability of logging detailed
information about client requests and server responses to log files; this allows the
webmaster to collect statistics by running log analyzers on these files.
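
A minimal sketch of these two basic features, using only Python's standard library; it illustrates the behaviour described above and is not the implementation of any particular server product.

# Minimal sketch of the two basic features above: answering HTTP requests
# (including 404 error responses for missing files) and logging them.
import logging
from http.server import HTTPServer, SimpleHTTPRequestHandler

logging.basicConfig(filename="access.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class LoggingHandler(SimpleHTTPRequestHandler):
    # SimpleHTTPRequestHandler serves files from the current directory and
    # sends an error response (e.g. 404) when a file cannot be found.
    def log_message(self, fmt, *args):
        # Redirect the handler's default stderr logging into a log file,
        # which a log analyzer could later process.
        logging.info("%s %s", self.client_address[0], fmt % args)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), LoggingHandler).serve_forever()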

In practice many web servers also implement the following features:

1. Authentication: an optional authorization request (requesting a user name and password)
before allowing access to some or all kinds of resources.
2. Handling of static content (file content recorded in the server's filesystem(s)) and dynamic
content by supporting one or more related interfaces (SSI, CGI, SCGI, FastCGI,
JSP, ColdFusion, PHP, ASP, WhizBase, ASP.NET, server APIs such as NSAPI, ISAPI,
etc.).
3. HTTPS support (by SSL or TLS) to allow secure (encrypted) connections to the server
on the standard port 443 instead of usual port 80.
4. Content compression (e.g. by gzip encoding) to reduce the size of responses and lower
bandwidth usage (see the sketch after this list).
5. Virtual hosting to serve many web sites using one IP address.
6. Large file support to be able to serve files whose size is greater than 2 GB on a 32-bit OS.
7. Bandwidth throttling to limit the speed of responses in order to not saturate the network
and to be able to serve more clients.
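
To make the content-compression feature (item 4 above) concrete, the sketch below gzip-encodes a response body only when the client's Accept-Encoding header allows it. It is a simplified illustration, not the code path of any particular server.

# Simplified sketch of content compression (item 4 above): compress the body
# with gzip only if the client advertises support for it.
import gzip

def maybe_compress(body, accept_encoding):
    """Return (body, extra_headers) for an HTTP response."""
    if "gzip" in accept_encoding.lower():
        compressed = gzip.compress(body)
        if len(compressed) < len(body):      # only worthwhile if it shrinks
            return compressed, {"Content-Encoding": "gzip"}
    return body, {}

body, headers = maybe_compress(b"<html>" + b"x" * 10000 + b"</html>",
                               "gzip, deflate")
print(headers, len(body))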

[edit] Origin of returned content


The origin of the content sent by server is called:

• static if it comes from an existing file lying on a filesystem;


• dynamic if it is dynamically generated by some other program or script or application
programming interface (API) called by the web server.

Serving static content is usually much faster (from 2 to 100 times) than serving dynamic
content, especially if the latter involves data pulled from a database.

[edit] Path translation


Web servers are able to map the path component of a Uniform Resource Locator (URL) into:

• a local file system resource (for static requests);


• an internal or external program name (for dynamic requests).

For a static request the URL path specified by the client is relative to the Web server's root
directory.

Consider the following URL as it would be requested by a client:

http://www.example.com/path/file.html

The client's web browser will translate it into a connection to www.example.com with the
following HTTP 1.1 request:

GET /path/file.html HTTP/1.1


Host: www.example.com

The web server on www.example.com will append the given path to the path of its root directory.
On Unix machines, this is commonly /var/www. The result is the local file system resource:

/var/www/path/file.html
The web server will then read the file, if it exists, and send a response to the client's web
browser. The response will describe the content of the file and contain the file itself.
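
A minimal sketch of this translation step, assuming /var/www as the root directory as in the example above. The normalisation step is included because a naive join would let ".." path segments escape the root directory.

# Minimal sketch of path translation: map the path component of a URL onto
# the server's root directory (/var/www, as in the example above).
import os
import posixpath
from urllib.parse import urlparse, unquote

DOCUMENT_ROOT = "/var/www"

def translate_path(url):
    path = unquote(urlparse(url).path)            # "/path/file.html"
    path = posixpath.normpath(path).lstrip("/")   # collapse "." and ".." parts
    full = os.path.normpath(os.path.join(DOCUMENT_ROOT, path))
    # Refuse anything that still escapes the document root.
    if full != DOCUMENT_ROOT and not full.startswith(DOCUMENT_ROOT + os.sep):
        raise ValueError("request outside document root")
    return full

print(translate_path("http://www.example.com/path/file.html"))
# -> /var/www/path/file.html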

[edit] Load limits


A web server (program) has defined load limits, because it can handle only a limited number of
concurrent client connections (usually between 2 and 60,000, by default between 500 and 1,000)
per IP address (and TCP port) and it can serve only a certain maximum number of requests per
second depending on:

• its own settings;


• the HTTP request type;
• content origin (static or dynamic);
• the fact that the served content is or is not cached;
• the hardware and software limits of the OS where it is working.

When a web server is near to or over its limits, it becomes overloaded and thus unresponsive.
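
As a rough, back-of-envelope illustration (added here, not from the article), Little's law relates these quantities: the number of simultaneously open connections is approximately the request rate multiplied by the average time needed to serve one request, which is why slow dynamic content exhausts connection limits much sooner than fast static content.

# Back-of-envelope sketch using Little's law (an added illustration):
# concurrent connections ~= requests per second x seconds per request.
def concurrent_connections(requests_per_second, seconds_per_request):
    return requests_per_second * seconds_per_request

print(concurrent_connections(200, 0.05))   # fast static content: ~10 connections
print(concurrent_connections(200, 2.0))    # slow dynamic content: ~400 connections,
                                           # close to typical default limits of 500-1,000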

[edit] Overload causes

At any time web servers can be overloaded because of:

• Too much legitimate web traffic (e.g. thousands or even millions of clients hitting the
web site in a short interval of time, as in the Slashdot effect);
• DDoS (Distributed Denial of Service) attacks;
• Computer worms, which sometimes cause abnormal traffic because of millions of infected
computers (not coordinated among themselves);
• XSS viruses, which can cause high traffic because of millions of infected browsers and/or web
servers;
• Web robot (crawler) traffic that is not filtered/limited on large web sites with very few
resources (bandwidth, etc.);
• Internet (network) slowdowns, so that client requests are served more slowly and the
number of connections increases so much that server limits are reached;
• Partial unavailability of web servers (computers); this can happen because of required or
urgent maintenance or upgrades, hardware or software failures, back-end (e.g. database) failures, etc.; in
these cases the remaining web servers get too much traffic and become overloaded.

[edit] Overload symptoms

The symptoms of an overloaded web server are:

• requests are served with (possibly long) delays (from 1 second to a few hundred
seconds);
• 500, 502, 503 or 504 HTTP errors are returned to clients (sometimes an unrelated 404
or even 408 error may also be returned);
• TCP connections are refused or reset (interrupted) before any content is sent to clients;
• in very rare cases, only partial contents are sent (but this behavior may well be considered
a bug, even if it usually depends on unavailable system resources).

[edit] Anti-overload techniques

To partially overcome the above load limits and to prevent overload, most popular web sites use
common techniques such as:

• managing network traffic, by using:


o Firewalls to block unwanted traffic coming from bad IP sources or having bad
patterns;
o HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP
patterns;
o Bandwidth management and traffic shaping, in order to smooth down peaks in
network usage;
• deploying web cache techniques;
• using different domain names to serve different (static and dynamic) content by separate
Web servers, e.g.:
o http://images.example.com
o http://www.example.com
• using different domain names and/or computers to separate big files from small and
medium sized files; the idea is to be able to fully cache small and medium sized files and
to efficiently serve big or huge (over 10 - 1000 MB) files by using different settings;
• using many Web servers (programs) per computer, each one bound to its own network
card and IP address;
• using many Web servers (computers) that are grouped together so that they act or are seen
as one big Web server, see also: Load balancer;
• adding more hardware resources (i.e. RAM, disks) to each computer;
• tuning OS parameters for hardware capabilities and usage;
• using more efficient computer programs for web servers, etc.;
• using other workarounds, especially if dynamic content is involved.

[edit] Historical notes

The world's first web server.


In 1989 Tim Berners-Lee proposed to his employer CERN (European Organization for Nuclear
Research) a new project, which had the goal of easing the exchange of information between
scientists by using a hypertext system. As a result of the implementation of this project, in 1990
Berners-Lee wrote two programs:

• a browser called WorldWideWeb;


• the world's first web server, later known as CERN HTTPd, which ran on NeXTSTEP.

Between 1991 and 1994 the simplicity and effectiveness of early technologies used to surf and
exchange data through the World Wide Web helped to port them to many different operating
systems and spread their use among lots of different social groups of people, first in scientific
organizations, then in universities and finally in industry.

In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium to regulate the
further development of the many technologies involved (HTTP, HTML, etc.) through a
standardization process.

The following years are recent history which has seen an exponential growth of the number of
web sites and servers.

[edit] Market structure


Given below is a list of top Web server software vendors published in a Netcraft survey in
January 2009.

Market share of major web servers

Vendor Product Web Sites Hosted Percent

Apache Apache 96,531,033 52.05%

Microsoft IIS 61,023,474 32.90%

Google GWS 9,864,303 5.32%

nginx nginx 3,462,551 1.87%

lighttpd lighttpd 2,989,416 1.61%


Oversee Oversee 1,847,039 1.00%

Others - 9,756,650 5.26%

Total - 185,474,466 100.00%

See Category:Web server software for a longer list of HTTP server programs.

[edit] See also


• Application server
• Comparison of web server software
• Comparison of lightweight web servers
• HTTP compression
• Open source web application
• SSI, CGI, SCGI, FastCGI, PHP, Java Servlet, JavaServer Pages, ASP, ASP .NET, Server
API
• Virtual hosting
• Web hosting service
• Web service

[edit] External links


• Debugging Apache Web Server Problems
• RFC 2616, the Request for Comments document that defines the HTTP 1.1 protocol.
• C64WEB.COM - Commodore 64 running as a webserver using Contiki
Retrieved from "http://en.wikipedia.org/wiki/Web_server"
Categories: Servers | Web server software | Website management | Web development


Internet Resource Management


From Wikipedia, the free encyclopedia


Internet resource management has traditionally been the domain of Internet technicians, who manage
the addressing structure of the Internet to enable the explosive growth of Internet use and to ensure
enough addressing space for that growth. There is, however, a different version of this concept,
used by Sequel Technology upon its release of "Internet Resource Manager". That concept has to
do with managing all network resources available to an enterprise, obtaining a view of exactly
which resources are being used, and providing a tool to manage those resources in tandem with
Acceptable Use Policies to reduce the total cost of Internet ownership.

This Sequel concept of "Internet Resource Management" has since been moved further from the highly
technical meaning towards a management process taken up at the enterprise level by
visionGateway. In this concept, "Internet Resources" are all those network and Internet
templates, tools, software systems, gateways and websites that an enterprise owns or manages, or
those systems and sites that the enterprise may have access to through a network.

Contents
[hide]

• 1 Need for Internet Resource Management


• 2 Software Tools
• 3 Competing Philosophies

• 4 External References

[edit] Need for Internet Resource Management


Internet Resource Management (IRM) practices are "business" practices as distinct from
"technology" practices. The need for business management of Internet resources arises
because so many business activities are now reliant on the Internet being available:
business managers now require capabilities and tools for the best
management of campaigns, online activities, purchasing and more.

Every day, the Internet resources of a business are used by employees, administrators, third
parties, customers and the general public. Managers want to know that employees are able to reach
any server across the Internet that has the information or services they need to get their jobs
done. More and more services, such as accounting packages, Customer Relationship Management
tools, spreadsheets and document editors, are obtained from servers outside the local area network of
the employee.

In addition, however, employees with Internet access also have available the biggest
entertainment, video playing and music acquisition tools, shopping facilities, banking and more. A
study of 10,000 Internet users reported that 44.7% of workers use the Internet for an average of
2.9 hours a day on personal web/Internet activity.

[edit] Software Tools


Currently there is a raft of programs available to businesses, each of which focuses control over
one or two facets of management. Businesses use software filtering products that maintain a
substantial database of sites in many 'categories'. Users can determine which categories they
want 'blocked'. Internet requests are passed through a server, and a user policy defines the
sites available to employees.
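
As a simplified sketch of the category-based filtering just described, the code below checks a requested hostname against a small category database and a per-role policy. The categories, hostnames and policy structure are invented for this illustration and do not describe any of the products named below.

# Simplified sketch of category-based URL filtering as described above.
# The category database, hostnames and policy below are invented examples.
from urllib.parse import urlparse

CATEGORY_DB = {                 # hypothetical site -> category database
    "news.example.com": "news",
    "video.example.net": "entertainment",
    "crm.example.org": "business",
}

POLICY = {                      # hypothetical blocked categories per role
    "employee": {"entertainment"},
    "administrator": set(),
}

def is_allowed(role, url):
    host = urlparse(url).hostname or ""
    category = CATEGORY_DB.get(host, "uncategorised")
    return category not in POLICY.get(role, set())

print(is_allowed("employee", "http://video.example.net/clip"))   # False
print(is_allowed("employee", "http://crm.example.org/"))         # True
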
Software packages that offer businesses a means to manage one or another aspect of Internet
resource usage include: VitalSuite performance management, NetIQ, Watchfire, SurfControl,
InetSoft, Clearswift, Elron Software Webfilter, Fine Ground AppScope, Cordiant Performance
Management, Boost Works Boost Edge, Redline, Accrue, Seerun, Maxamine, Packeteer and
INTERScepter by visionGateway.

What needs to be managed includes the following (a minimal policy-record sketch follows this list):

• the role a connected user from the business plays when online;
• the range of services that the role requires, for example whether its Internet connection should expose only the software packages to which the company subscribes;
• how the employee connects to the Internet, for example from a desktop computer, laptop, PDA or mobile phone;
• the connectivity the employee needs, for example special online support for handling large documents, perhaps special packet switching, or additional security levels;
• the range of online assets to be made available to the employee, usually managed by category, such as a banking category through which all online banks would be visible, or a catalogue of small to medium businesses.
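
As a rough illustration of how the items above might be captured, the following Python sketch records them as a per-role policy structure. All field names and example values are assumptions made for this sketch; they are not drawn from any of the products named above.

# Illustrative sketch only: one way to record the per-role items listed above
# (role, permitted services, devices, connectivity options, asset categories).

from dataclasses import dataclass, field


@dataclass
class InternetAccessPolicy:
    role: str                                               # the role the connected user plays online
    permitted_services: set = field(default_factory=set)    # e.g. subscribed SaaS packages
    devices: set = field(default_factory=set)               # desktop, laptop, PDA, mobile phone
    connectivity_options: set = field(default_factory=set)  # e.g. large-document support, extra security
    asset_categories: set = field(default_factory=set)      # e.g. "banking", "smb-catalogue"


# Example record for a hypothetical accounts-payable role
accounts_payable = InternetAccessPolicy(
    role="accounts-payable",
    permitted_services={"accounting-saas", "crm-saas"},
    devices={"desktop", "laptop"},
    connectivity_options={"additional-security"},
    asset_categories={"banking"},
)
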

While the software packages named above could give a management team the tools needed to manage Internet connectivity within an organization, significant difficulties would arise where one tool overlaps with another. Alongside these cross-over issues between packages, there are also significant gaps for which no product offering exists.

[edit] Competing Philosophies


There are three competing philosophies of Internet Resource Management:

• spying on employees without their knowledge and cracking down on an offender whenever he or she does anything wrong online (the underlying philosophy of eTelemetry's METRON appliance);

• filtering out (censoring) all the sites that would not be useful to an employee doing business on a workplace computer (Secure Computing's Gateway Security, SurfControl Enterprise Protection Suite, and Websense Enterprise);

• a self-management approach in which each employee is given an account that tallies total time on the Internet and activity by protocol against the policies required by the company (visionGateway's INTERScepter); a minimal sketch of this tallying idea follows below.
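
The following Python sketch illustrates the self-management idea in the last bullet: an account per employee that tallies total online time and activity by protocol for later review against company policy. It is a toy illustration under assumed names, not a description of visionGateway's actual implementation.

# Toy sketch of a per-employee usage account that tallies online time by
# protocol, as in the self-management approach above. Names are assumptions.

from collections import defaultdict


class UsageAccount:
    def __init__(self, employee: str):
        self.employee = employee
        self.seconds_by_protocol = defaultdict(float)

    def record(self, protocol: str, seconds: float) -> None:
        """Add an observed session, e.g. taken from proxy or gateway logs."""
        self.seconds_by_protocol[protocol] += seconds

    def total_hours(self) -> float:
        return sum(self.seconds_by_protocol.values()) / 3600.0

    def summary(self) -> str:
        lines = [f"Internet usage for {self.employee}: {self.total_hours():.1f} h total"]
        for protocol, secs in sorted(self.seconds_by_protocol.items()):
            lines.append(f"  {protocol}: {secs / 3600.0:.1f} h")
        return "\n".join(lines)


account = UsageAccount("j.smith")
account.record("http", 5400)   # 1.5 h of web browsing
account.record("smtp", 1800)   # 0.5 h of mail traffic
print(account.summary())
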

Spying and censoring both have serious drawbacks. Spying creates a rift between employers and employees and does nothing for morale in an organization; after the first two or three employees are "caught out", the deterrent effect of spying diminishes while employee ingenuity in "getting around" the monitoring grows.
Although censoring tools enjoy the majority market share, censoring has major drawbacks of its own. The central filtering database can become a bottleneck while employees are working, and a database of filtered sites is almost impossible to keep up to date: thousands of new sites appear every day, far more than staff can review for the blacklist. Machine classification of sites is not an exact science, and many legitimate sites are wrongly filtered out.

A self-management approach has fewer drawbacks; however, management would need to play an active part, reviewing each employee's goals for the month and regularly revisiting with each staff member how they are progressing against the company's goals for Internet use.

[edit] External References


• eTelemetry Inc.
• Secure Computing
• SurfControl Inc.
• visionGateway Inc.
• Websense Inc.

Hardware and software requirements


These guidelines describe the minimum system requirements for each Tivoli Compliance Insight Manager component. Requirements might vary depending on your company environment and how you plan to deploy the product.

Important:
Be sure that your system meets all the requirements listed and is running only supported versions of all
software. For best results, start with a system with only the Windows® operating system, Internet
Information Server (IIS), and the latest service pack and security patches installed. Otherwise, Tivoli
Compliance Insight Manager might not work correctly.
General

• Minimum screen resolution for working with the Tivoli Compliance Insight Manager and
its components is 1024x768.
• The Server, Management Console, and Actuator components of Tivoli Compliance Insight
Manager use a number of network ports for communication. The default base port
number is 5992; this port is used for Point-of-Presence <-> Server communication. This
communication requires that a two-way connection can be made on the base port
between the Tivoli Compliance Insight Manager Server and the Point-of-Presence. The
base port + 1 (5993 by default) is used by the Management Console to connect to the
server (but always locally on a server or a Point-of-Presence). Other local ports are
assigned dynamically in the range 49152-65535 for the Tivoli Compliance Insight
Manager server's internal communication. Ports that are already in use in this range are
detected and skipped. If the default base port (5992) and the next sequential port (5993) are not available in your environment, or on a particular system, use the Add Machine wizard to specify a different base port number. (A minimal port-availability check is sketched at the end of this section.)

Note:

Ensure that the base port and the base port + 1 are available locally on the Tivoli
Compliance Insight Manager Server and Point-of-Presence. Verify that any network
devices, such as routers and firewalls, that are located between the Tivoli Compliance
Insight Manager Server and the Point-of-Presence permit two-way network traffic over
the base port.

Monitored Windows event sources require TCP 139 with a one-way connection.

Monitored UNIX SSH event sources require TCP 22 (by default) with a one-way connection.

DB2 is used between the Enterprise Server and Standard Server. Ports 50000 and
50001 are used by default, but you can specify different ports during installation.

• In all cases, connect the systems hosting the Tivoli Compliance Insight Manager
components to the Server through a TCP/IP network.
• The Tivoli Compliance Insight Manager Setup Program installs Java™ 1.4.2 in the
C:\Program Files directory if the program does not find a valid version installed. If a
custom JRE installation path is desired, Java 1.4.2 must be installed. For best results,
use the version provided with Tivoli Compliance Insight Manager 8.5. The Java
installation package can be found in the NT\Support\Java folder on the CD labeled IBM
Tivoli Compliance Insight Manager for Windows 2003 CD 3 of 3. Select the custom install
option and select the Support for Additional Languages option.

Important:

Other versions of Java are not supported. If you use a version of Java other than the one
provided, unpredictable results might occur.
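
The following Python sketch is a minimal pre-installation check of the port layout described in the General requirements above: it verifies that the base port (5992) and base port + 1 (5993) are free locally, and that example event-source ports (TCP 22 for UNIX SSH sources, TCP 139 for Windows sources) are reachable. The host names are placeholders; substitute the systems in your own environment.

# Minimal pre-installation sketch, assuming the default port layout described
# above: base port 5992 and base port + 1 (5993) must be free locally, and the
# collection paths need one-way TCP reachability (22 for UNIX SSH sources,
# 139 for Windows event sources). Host names below are placeholders.

import socket

BASE_PORT = 5992


def port_is_free_locally(port: int) -> bool:
    """True if nothing is already listening on the given local TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.bind(("", port))
            return True
        except OSError:
            return False


def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for port in (BASE_PORT, BASE_PORT + 1):
        print(f"local port {port}:", "free" if port_is_free_locally(port) else "in use")
    # Placeholder host names; substitute the monitored systems in your environment.
    print("unix01 ssh (22):", can_reach("unix01.example.com", 22))
    print("win01 netbios (139):", can_reach("win01.example.com", 139))

If either local port reports "in use", specify a different base port with the Add Machine wizard, as described above.
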

Tivoli Compliance Insight Manager Standard Server and Enterprise Server


Hardware requirements

• Dedicated physical server (no virtualization)


• Processor and RAM requirements:

Minimum Enterprise Server requirements

o Quad Core Intel® Xeon™ 3.0 GHz processor
o 6 GB RAM

Minimum Standard Server requirements

o Dual Core Intel Xeon 3.0 GHz processor
o 4 GB RAM + 0.5 GB per scheduled General Event Model (GEM) database (a worked example follows below)
• Minimum of 200 GB free disk space.

For detailed information about determining the required memory and disk space, see
Determining disk space and memory requirements.
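
As a worked example of the Standard Server memory rule above (4 GB plus 0.5 GB per scheduled GEM database), the short Python snippet below computes the minimum RAM for a hypothetical deployment; the database count is an assumption chosen for illustration.

# Worked example of the Standard Server memory rule quoted above:
# 4 GB base RAM plus 0.5 GB per scheduled General Event Model (GEM) database.
# The number of scheduled databases below is an illustrative assumption.

BASE_RAM_GB = 4.0
RAM_PER_GEM_DB_GB = 0.5


def standard_server_ram_gb(scheduled_gem_databases: int) -> float:
    """Return the minimum RAM in GB for a Standard Server."""
    return BASE_RAM_GB + RAM_PER_GEM_DB_GB * scheduled_gem_databases


# A hypothetical deployment with 8 scheduled GEM databases:
# 4 GB + 8 * 0.5 GB = 8 GB minimum RAM.
print(standard_server_ram_gb(8))   # 8.0
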

Software requirements

• Microsoft Windows 2003 Enterprise Server (SP1, SP2); 32-bit version


o NetBIOS enabled
o TCP/IP network connection configured to all other systems hosting Tivoli
Compliance Insight Manager components
o NTFS file system
• Microsoft Internet Information Server (IIS) 6 for Windows Server 2003 (required for Web
applications)

Tivoli Compliance Insight Manager Management Console

• Microsoft Windows Server 2003 with Service Pack 1


• Internet Explorer 6.0

Tivoli Compliance Insight Manager Web Applications


Hardware requirements
Tivoli Compliance Insight Manager Standard Server or Enterprise Server installed and configured
Software requirements
Internet Explorer 6.0, with:

• Style sheet supported and enabled


• JavaScript™ supported and enabled
• Java applets supported and enabled
• Cookies enabled

Tivoli Compliance Insight Manager Actuator


Supported operating systems

• AIX 5L™ 5.1, 5L 5.2, and 5L 5.3


• Sun Solaris 7 - 10
• HP-UX 10.20 and 11i (11.11)
• Windows NT® 4.0 with Service Pack 6, Windows 2000 with Service Pack 2, Windows XP
Professional with Service Pack 2, or Windows Server 2003 with Service Pack 1
o NetBIOS enabled
o NTFS file system

Disk space requirements

The amount of disk space required for the log files on the audited systems depends on the
amount of activity, the log settings, and the IBM Tivoli Compliance Insight Manager
collect schedule. For guidelines for calculating disk space see Determining disk space
and memory requirements.

Software prerequisites

• Install the Standard Server and the Management Console before installing the Actuator.
For details, see Installing a Standard Server.
• The Actuator must have access to the Tivoli Compliance Insight Manager servers through
a TCP/IP network.
• The Server, Management Console, and Actuator components use the same network ports described under General above: the base port (5992 by default) for Point-of-Presence <-> Server communication, the base port + 1 (5993 by default) for the Management Console, and dynamically assigned local ports in the range 49152-65535 for the server's internal communication. If the default base port and the next sequential port are not available in your environment, or on a particular system, use the Add Machine wizard to specify a different base port number.

Ensure that the base port and the base port + 1 are available locally on the Tivoli Compliance Insight Manager Server and Point-of-Presence, and verify that any network devices, such as routers and firewalls, located between the Server and the Point-of-Presence permit two-way network traffic over the base port.

• To work with the Tivoli Compliance Insight Manager and its components, use a minimum
screen resolution of 1024x768.
