Firewall
From Wikipedia, the free encyclopedia
1. Packet filter: Looks at each packet entering or leaving the network and accepts or rejects
it based on user-defined rules. Packet filtering is fairly effective and transparent to users,
but it is difficult to configure. In addition, it is susceptible to IP spoofing.
2. Application gateway: Applies security mechanisms to specific applications, such as FTP
and Telnet servers. This is very effective, but can impose a performance degradation.
3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is
established. Once the connection has been made, packets can flow between the hosts
without further checking.
4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server
effectively hides the true network addresses.
Contents
• 1 Function
• 2 History
o 2.1 First generation - packet filters
o 2.2 Second generation - "stateful" filters
o 2.3 Third generation - application layer
o 2.4 Subsequent developments
• 3 Types
o 3.1 Network layer and packet filters
o 3.2 Example of some basic firewall rules
o 3.3 Application-layer
o 3.4 Proxies
o 3.5 Network address translation
• 4 See also
• 5 References
• 6 External links
Function
A firewall is a dedicated appliance, or software running on a computer, which inspects network
traffic passing through it, and denies or permits passage based on a set of rules.
It is software or hardware, normally placed between a protected network and an unprotected
network, that acts like a gate to ensure that nothing private goes out and nothing malicious
comes in.
A firewall's basic task is to regulate some of the flow of traffic between computer networks of
different trust levels. Typical examples are the Internet which is a zone with no trust and an
internal network which is a zone of higher trust. A zone with an intermediate trust level, situated
between the Internet and a trusted internal network, is often referred to as a "perimeter network"
or Demilitarized zone (DMZ).
A firewall's function within a network is similar to that of physical firewalls with fire doors in
building construction: the former is used to prevent network intrusion into the private network,
while the latter is intended to contain and delay a structural fire from spreading to adjacent
structures.
Without proper configuration, a firewall can often become worthless. Standard security practices
dictate a "default-deny" firewall ruleset, in which the only connections permitted are those that
have been explicitly allowed. Unfortunately, such a configuration
requires detailed understanding of the network applications and endpoints required for the
organization's day-to-day operation. Many businesses lack such understanding, and therefore
implement a "default-allow" ruleset, in which all traffic is allowed unless it has been specifically
blocked. This configuration makes inadvertent network connections and system compromise
much more likely.
History
The term "firewall" originally meant a wall to confine a fire or potential fire within a building,
cf. firewall (construction). Later uses refer to similar structures, such as the metal sheet
separating the engine compartment of a vehicle or aircraft from the passenger compartment.
Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in
terms of its global use and connectivity. The predecessors to firewalls for network security were
the routers used in the late 1980s to separate networks from one another.[1] The view of the
Internet as a relatively small community of compatible users who valued openness for sharing
and collaboration was ended by a number of major internet security breaches, which occurred in
the late 1980s:[1]
“We are currently under attack from an Internet VIRUS! It has hit Berkeley, UC San Diego,
Lawrence Livermore, Stanford, and NASA Ames.”
• The Morris Worm spread itself through multiple vulnerabilities in the machines of the
time. Although it was not malicious in intent, the Morris Worm was the first large scale
attack on Internet security; the online community was neither expecting an attack nor
prepared to deal with one.[3]
First generation - packet filters
The first paper on firewall technology was published in 1988, when engineers from Digital
Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This
fairly basic system was the first generation of what would become a highly evolved and technical
internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their
research in packet filtering and developed a working model for their own company based upon
the original first generation architecture.
Packet filters act by inspecting the "packets" which represent the basic unit of data transfer
between computers on the Internet. If a packet matches the packet filter's set of rules, the packet
filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to
the source).
This type of packet filtering pays no attention to whether a packet is part of an existing stream of
traffic (it stores no information on connection "state"). Instead, it filters each packet based only
on information contained in the packet itself (most commonly using a combination of the
packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port
number).
TCP and UDP protocols comprise most communication over the Internet, and because TCP and
UDP traffic by convention uses well known ports for particular types of traffic, a "stateless"
packet filter can distinguish between, and thus control, those types of traffic (such as web
browsing, remote printing, email transmission, file transfer), unless the machines on each side of
the packet filter are both using the same non-standard ports.
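To make the port-based distinction concrete, here is a small sketch (not taken from any real firewall; the port-to-service table follows common IANA conventions) of how a stateless filter might classify traffic by destination port alone:

```python
# A stateless filter sees only what is in each packet; the well-known
# destination port is the usual clue to the traffic type. The mapping
# fails, as noted above, when both ends agree on a non-standard port.

WELL_KNOWN = {
    21: "ftp",       # file transfer
    25: "smtp",      # email transmission
    80: "http",      # web browsing
    515: "printer",  # remote printing
}

def classify(proto, dport):
    """Guess the traffic type from the protocol and destination port."""
    if proto not in ("tcp", "udp"):
        return "other"
    return WELL_KNOWN.get(dport, "unknown")
```

A web server deliberately run on port 8080 would be classified as "unknown", which is exactly the blind spot described above.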
Second generation - "stateful" filters
From 1989 to 1990, three colleagues at AT&T Bell Laboratories, Dave Presotto, Janardan
Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-
level firewalls.
Second-generation firewalls additionally consider the placement of each individual packet within
the packet series. This technology is generally referred to as stateful packet inspection, as the
firewall maintains records of all connections passing through it and is able to determine whether
a packet is the start of a new connection, part of an existing connection, or an invalid packet.
Though such a firewall still uses a set of static rules, the state of a connection can itself be one of
the criteria which trigger specific rules.
This type of firewall can help prevent attacks which exploit existing connections, or certain
Denial-of-service attacks.
Third generation - application layer
Publications by Gene Spafford of Purdue University, Bill Cheswick at AT&T Laboratories, and
Marcus Ranum described a third generation of firewall known as an application layer firewall,
also known as a proxy-based firewall. Marcus Ranum's work on the technology spearheaded the
creation of the first commercial product, which DEC released as the DEC SEAL product. DEC's
first major sale was on June 13, 1991, to a chemical company based on the East Coast of the
USA.
TIS, under a broader DARPA contract, developed the Firewall Toolkit (FWTK) and made it
freely available under license on October 1, 1993. The purposes for releasing the freely
available, not-for-commercial-use FWTK were: to demonstrate, via the software, documentation,
and methods used, how a company with (at the time) 11 years' experience in formal security
methods, and individuals with firewall experience, developed firewall software; to create a
common base of very good firewall software for others to build on (so people did not have to
continue to "roll their own" from scratch); and to "raise the bar" for the firewall software in use.
The key benefit of application layer filtering is that it can "understand" certain applications and
protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect whether an
unwanted protocol is being sneaked through on a non-standard port or whether a protocol is
being abused in any harmful way.
Subsequent developments
In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were
refining the concept of a firewall. The product known as "Visas" was the first system to have a
visual integration interface with colours and icons, and could easily be implemented on and
accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In
1994 an Israeli company called Check Point Software Technologies built this into readily
available software known as FireWall-1.
The deep packet inspection functionality of modern firewalls can be shared with intrusion-
prevention systems (IPS).
Currently, the Middlebox Communication Working Group of the Internet Engineering Task
Force (IETF) is working on standardizing protocols for managing firewalls and other
middleboxes.
Another axis of development is the integration of user identity into firewall rules. Many
firewalls provide such features by binding user identities to IP or MAC addresses, which is only
approximate and can easily be circumvented. The NuFW firewall provides real identity-based
firewalling by requesting the user's signature for each connection.
Types
Firewalls can be classified by where the communication is taking place, where the
communication is intercepted, and the state that is being tracked.
Network layer and packet filters
Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP
protocol stack, not allowing packets to pass through the firewall unless they match the
established rule set. The firewall administrator may define the rules, or default rules may apply.
The term "packet filter" originated in the context of BSD operating systems.
Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful
firewalls maintain context about active sessions, and use that "state information" to speed packet
processing. Any existing network connection can be described by several properties, including
source and destination IP address, UDP or TCP ports, and the current stage of the connection's
lifetime (such as session initiation, handshaking, data transfer, or completion). If a
packet does not match an existing connection, it will be evaluated according to the ruleset for
new connections. If a packet matches an existing connection based on comparison with the
firewall's state table, it will be allowed to pass without further processing.
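The lookup order just described (state table first, then the ruleset for new connections) can be sketched as follows; the packet layout and the rule callback are illustrative, not any vendor's API:

```python
# Sketch of stateful filtering: packets that belong to a recorded
# connection pass without consulting the ruleset; only packets that
# open a new connection are evaluated against the rules.

class StatefulFirewall:
    def __init__(self, allow_new):
        self.state = set()          # 5-tuples of established connections
        self.allow_new = allow_new  # predicate applied to new connections

    @staticmethod
    def _key(p):
        # A connection is identified by its address/port/protocol 5-tuple.
        return (p["proto"], p["src"], p["sport"], p["dst"], p["dport"])

    def handle(self, p):
        key = self._key(p)
        # Reply packets match the reversed tuple of a recorded connection.
        reverse = (p["proto"], p["dst"], p["dport"], p["src"], p["sport"])
        if key in self.state or reverse in self.state:
            return "accept"         # part of an existing connection
        if self.allow_new(p):
            self.state.add(key)     # record the newly opened connection
            return "accept"
        return "drop"               # disallowed or invalid packet
```

Note that a server's reply is accepted because of the state table, not the ruleset; a stateless filter would need an explicit extra rule for it.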
Stateless firewalls require less memory, and can be faster for simple filters that require less time
to filter than to look up a session. They may also be necessary for filtering stateless network
protocols that have no concept of a session. However, they cannot make more complex decisions
based on what stage communications between hosts have reached.
Modern firewalls can filter traffic based on many packet attributes like source IP address, source
port, destination IP address or port, destination service like WWW or FTP. They can filter based
on protocols, TTL values, netblock of originator, domain name of the source, and many other
attributes.
Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac
OS X), pf (OpenBSD, and all other BSDs), iptables/ipchains (Linux).
Example of some basic firewall rules
The following examples use a subnet address of 10.10.10.x and 255.255.255.0 as the subnet
mask for the local area network (LAN).
It is common to allow a response to a request for information coming from a computer inside the
local network, like NetBIOS.
A firewall rule for SMTP (default port 25) allows packets governed by this protocol to access
the local SMTP gateway (which in this example has the IP 10.10.10.6). It is far more common
not to specify the destination address or, if desired, to use the ISP's SMTP service address.
A general rule serves as the final firewall entry: if a policy does not explicitly allow a request for
a service, that service should be denied by this catch-all rule, which should be the last in the list
of rules.
Other useful rules would be allowing ICMP error messages, restricting all destination ports
except port 80 in order to allow only web browsing, etc.
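The rules described in this section can be sketched as an ordered, first-match list ending in the catch-all deny; the field layout is illustrative and not any firewall's syntax:

```python
# Ordered ruleset for the 10.10.10.x LAN example: SMTP to the local
# gateway, outbound web browsing, then a catch-all deny as the final
# entry. The first matching rule wins; "*" matches anything.

RULES = [
    # (action,  proto,  destination,  dest port)
    ("allow", "tcp", "10.10.10.6", 25),   # SMTP to the local gateway
    ("allow", "tcp", "*",          80),   # web browsing
    ("deny",  "*",   "*",          "*"),  # catch-all: last in the list
]

def evaluate(proto, dst, dport):
    for action, r_proto, r_dst, r_port in RULES:
        if (r_proto in ("*", proto)
                and r_dst in ("*", dst)
                and r_port in ("*", dport)):
            return action
    return "deny"  # unreachable with the catch-all, but keeps the default explicit
```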
Application-layer
Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser
traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an
application. They block other packets (usually dropping them without acknowledgment to the
sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching
protected machines.
By inspecting all packets for improper content, firewalls can restrict or outright prevent the
spread of networked computer worms and trojans. In practice, however, this becomes so
complex and so difficult to attempt (given the variety of applications and the diversity of content
each may allow in its packet traffic) that comprehensive firewall design does not generally
attempt this approach.
Proxies
Proxies make tampering with an internal system from the external network more difficult and
misuse of one internal system would not necessarily cause a security breach exploitable from
outside the firewall (as long as the application proxy remains intact and properly configured).
Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own
purposes; the proxy then masquerades as that system to other internal machines. While use of
internal address spaces enhances security, crackers may still employ methods such as IP spoofing
to attempt to pass packets to a target network.
Network address translation
Firewalls often have network address translation (NAT) functionality, and the hosts protected
behind a firewall commonly have addresses in the "private address range", as defined in RFC
1918. Firewalls often have such functionality to hide the true address of protected hosts.
Originally, the NAT function was developed to address the limited number of IPv4 routable
addresses that could be used or assigned to companies or individuals as well as reduce both the
amount and therefore cost of obtaining enough public addresses for every computer in an
organization. Hiding the addresses of protected devices has become an increasingly important
defense against network reconnaissance.
References
1. ^ Kenneth Ingham and Stephanie Forrest, A History and Survey of Network Firewalls
2. ^ Dr. Talal Alkharobi, Firewalls
3. ^ RFC 1135, The Helminthiasis of the Internet
External links
• Internet Firewalls: Frequently Asked Questions, compiled by Matt Curtin, Marcus Ranum
and Paul Robertson.
• Evolution of the Firewall Industry - Discusses different architectures and their
differences, how packets are processed, and provides a timeline of the evolution.
• A History and Survey of Network Firewalls - provides an overview of firewalls at the
various ISO levels, with references to the original papers where first firewall work was
reported.
• Software Firewalls: Made of Straw? Part 1 of 2 and Software Firewalls: Made of Straw?
Part 2 of 2 - a technical view on software firewall design and potential weaknesses
Port
From Wikipedia, the free encyclopedia
A prerequisite for a port is a harbor with water of sufficient depth to receive ships whose draft
will allow passage into and out of the harbor.
Ports sometimes fall out of use. Rye, East Sussex was an important English port in the Middle
Ages, but the coastline changed and it is now 2 miles (3.2 km) from the sea, while the ports of
Ravenspurn and Dunwich have been lost to coastal erosion. Also in the United Kingdom,
London, on the River Thames, was once an important international port, but changes in shipping
methods, such as the use of containers and larger ships, put it at a disadvantage.
Contents
• 1 Port types
• 2 See also
o 2.1 Water port topics
o 2.2 Other types of ports
o 2.3 Lists
• 3 External links
Port types
A fishing port is a type of port or harbor facility particularly suitable for landing and distributing
fish.
Port can also refer to the left side of a craft, whether an airplane or a ship.
A "dry port" is a term sometimes used to describe a yard used to place containers or conventional
bulk cargo, usually connected to a seaport by rail or road.
A warm water port is a port where the water does not freeze in winter. Because they are
available year-round, warm water ports can be of great geopolitical or economic interest, with
the ports of Saint Petersburg, Dalian, and Valdez being notable examples.
A seaport is further categorized as a "cruise port" or a "cargo port". Additionally, "cruise ports"
are also known as a "home port" or a "port of call". The "cargo port" is also further categorized
into a "bulk" or "break bulk port" or as a "container port".
A cruise home port is the port where passengers board to start their cruise and disembark at the
end of it. It is also where the cruise ship's supplies are loaded for the cruise, everything from
water and fuel to fruit, vegetables, champagne, and any other supplies needed. Cruise home
ports are very busy places on the day a cruise ship is in port, as departing passengers and their
baggage disembark, new passengers board, and all the supplies are loaded. Currently, the cruise
capital of the world is the Port of Miami, closely followed by Port Everglades and the Port of
San Juan, Puerto Rico.
A port of call is an intermediate stop for a ship on its sailing itinerary, which may include half a
dozen ports. At these ports a cargo ship may take on supplies or fuel, as well as unload and load
cargo. For a cruise ship, a port of call is a premier stop where the cruise line takes its passengers
to enjoy their vacation.
Cargo ports, on the other hand, differ greatly from cruise ports, since each handles very different
cargo that has to be loaded and unloaded by very different mechanical means. A port may handle
one particular type of cargo or numerous cargoes, such as grains, liquid fuels, liquid chemicals,
wood, automobiles, etc. Ports handling such cargo are known as "bulk" or "break bulk" ports,
while those that handle containerized cargo are known as container ports. Most cargo ports
handle all sorts of cargo, but some are very specific as to what they handle. Additionally,
individual cargo ports are divided into different operating terminals, which handle the different
cargoes and are operated by different companies, also known as terminal operators or stevedores.
See also
Other types of ports
• Airport
• Spaceport
• Port wine
Lists
• List of seaports
• World's busiest port
• List of world's busiest transshipment ports
• List of world's busiest port regions
• List of busiest container ports
• Sea rescue organisations
Tunneling protocol
From Wikipedia, the free encyclopedia
Computer networks use a tunneling protocol when one network protocol (the delivery
protocol) encapsulates a different payload protocol. By using tunneling one can (for example)
carry a payload over an incompatible delivery-network, or provide a secure path through an
untrusted network.
Tunneling typically contrasts with a layered protocol model such as those of OSI or TCP/IP. The
tunnel protocol usually (but not always) operates at a higher level in the model than does the
payload protocol, or at the same level. Protocol encapsulation carried out by conventional
layered protocols, in accordance with the OSI model or TCP/IP model (for example: HTTP over
TCP over IP over PPP over a V.92 modem) does not count as tunneling.
To understand a particular protocol stack, network engineers must understand both the payload
and delivery protocol sets.
As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a
protocol running over IP (IP Protocol Number 47), often serves to carry IP packets, with RFC
1918 private addresses, over the Internet using delivery packets with public IP addresses. In this
case, the delivery and payload protocols are compatible, but the payload addresses are
incompatible with those of the delivery network.
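The encapsulation itself can be sketched abstractly: the whole payload packet, private addresses included, rides inside a delivery packet whose outer header carries publicly routable addresses. The field names below are illustrative; the real GRE header (RFC 2784) is more involved.

```python
GRE = 47  # IP protocol number assigned to GRE

def encapsulate(payload, outer_src, outer_dst):
    """Wrap the payload packet whole inside a delivery packet."""
    return {"src": outer_src, "dst": outer_dst, "proto": GRE,
            "payload": payload}

def decapsulate(delivery):
    """Recover the inner packet at the far end of the tunnel."""
    assert delivery["proto"] == GRE
    return delivery["payload"]

# Inner packet keeps its RFC 1918 addresses; the outer header is routable.
inner = {"src": "10.0.1.5", "dst": "10.0.2.9", "proto": 6}
outer = encapsulate(inner, "192.0.2.1", "198.51.100.1")
```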
In contrast, an IP payload might believe it sees a data link layer delivery when it is carried inside
the Layer 2 Tunneling Protocol (L2TP), which appears to the payload mechanism as a protocol
of the data link layer. L2TP, however, actually runs over the transport layer using User Datagram
Protocol (UDP) over IP. The IP in the delivery protocol could run over any data-link protocol
from IEEE 802.2 over IEEE 802.3 (i.e., standards-based Ethernet) to the Point-to-Point Protocol
(PPP) over a dialup modem link.
Tunneling protocols may use data encryption to transport insecure payload protocols over a
public network (such as the Internet), thereby providing VPN functionality. IPSec has an end-to-
end Transport Mode, but can also operate in a tunneling mode through a trusted security
gateway.
Contents
• 1 SSH tunneling
• 2 Tunneling to circumvent firewall policy
• 3 See also
• 4 External links
SSH tunneling
To set up an SSH tunnel, one configures an SSH client to forward a specified local port to a port
on the remote machine. Once the SSH tunnel has been established, the user can connect to the
specified local port to access the network service. The local port need not have the same port
number as the remote port.
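The forwarding idea itself can be sketched as a plain TCP relay, which is what the SSH client does for a forwarded port, minus the encrypted tunnel; hosts and ports below are illustrative:

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes src -> dst until EOF, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(local_port, remote_host, remote_port):
    """Listen on 127.0.0.1:local_port (0 picks a free port) and relay
    each accepted connection to (remote_host, remote_port).
    Returns the actual listening port."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", local_port))
    srv.listen(5)

    def accept_loop():
        while True:
            client, _ = srv.accept()
            remote = socket.create_connection((remote_host, remote_port))
            # Relay in both directions, one thread per direction.
            threading.Thread(target=_pipe, args=(client, remote),
                             daemon=True).start()
            threading.Thread(target=_pipe, args=(remote, client),
                             daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]
```

In real SSH port forwarding the client-to-server leg runs inside the encrypted SSH channel rather than as a bare TCP connection.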
Tunneling to circumvent firewall policy
SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services, so long
as a site allows outgoing connections. For example, an organization may prohibit a user from
accessing Internet web pages (port 80) directly without passing through the organization's proxy
filter (which provides the organization with a means of monitoring and controlling what the user
sees through the web). But users may not wish to have their web traffic monitored or blocked by
the organization's proxy filter. If users can connect to an external SSH server, they can create an
SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To
access the remote web server users would point their browser to http://localhost/.
Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5
proxy. In this case users can configure their applications to use their local SOCKS proxy server.
This gives more flexibility than creating an SSH tunnel to a single port as previously described.
SOCKS can free the user from the limitations of connecting only to a predefined remote port and
server.
An HTTP-based tunneling method uses the HTTP CONNECT method/command. A client
issues the HTTP CONNECT command to an HTTP proxy, which then makes a TCP connection
to a particular server:port and relays data between that server:port and the client connection.
Because this creates a security hole, CONNECT-capable HTTP proxies commonly restrict
access to the CONNECT method, allowing access only to TLS/SSL-based HTTPS services.
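The client's side of the exchange can be sketched as follows; the header layout follows the HTTP/1.1 specification, and the host and port values are illustrative:

```python
def connect_request(host, port):
    """Build the CONNECT request a client sends to an HTTP proxy to ask
    for a raw TCP connection to host:port. After a 200 response, the
    proxy relays bytes between the client and that server verbatim."""
    return (f"CONNECT {host}:{port} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n"
            f"\r\n")
```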
This article was originally based on material from the Free On-line Dictionary of Computing,
which is licensed under the GFDL.
Internet standard
From Wikipedia, the free encyclopedia
Contents
• 1 Overview
• 2 Standardization process
o 2.1 Proposed Standard
o 2.2 Draft Standard
o 2.3 Standard
• 3 See also
• 4 References
• 5 External links
Overview
An Internet Standard is a special Request for Comments (RFC) or set of RFCs. An RFC that is to
become a Standard or part of a Standard begins as an Internet Draft, and is later (usually after
several revisions) accepted and published by the RFC Editor as an RFC and labelled a Proposed
Standard. Later, an RFC is labelled a Draft Standard, and finally a Standard. Collectively, these
stages are known as the standards track, and are defined in RFC 2026. The label Historic (sic) is
applied to deprecated standards-track documents or obsolete RFCs that were published before
the standards track was established.
Only the IETF, represented by the Internet Engineering Steering Group (IESG), can approve
standards-track RFCs. The definitive list of Internet Standards is maintained in Internet
Standards document STD 1: Internet Official Protocol Standards.[1]
Standardization process
Becoming a standard is a three-step process within the IETF: Proposed Standard, Draft
Standard, and finally Internet Standard. If an RFC is part of a proposal that is on the standards
track, then at the first stage the standard is proposed, and organizations subsequently decide
whether to implement this Proposed Standard. After separate, interoperable implementations
appear and further review and corrections are made to the RFC, a Draft Standard is created. At
the final stage, the RFC becomes a Standard.
Proposed Standard
A Proposed Standard (PS) is generally stable, has resolved known design choices, is believed to
be well-understood, has received significant community review, and appears to enjoy enough
community interest to be considered valuable. However, further experience might result in a
change or even retraction of the specification before it advances. Usually, neither implementation
nor operational experience is required.
Draft Standard
A specification from which at least two independent and interoperable implementations from
different code bases have been developed, and for which sufficient successful operational
experience has been obtained, may be elevated to the Draft Standard (DS) level.
A Draft Standard is normally considered to be a final specification, and changes are likely to be
made only to solve specific problems encountered. In most circumstances, it is reasonable for
vendors to deploy implementations of Draft Standards into a disruption-sensitive environment.
Standard
A specification for which significant implementation and successful operational experience has
been obtained may be elevated to the Internet Standard (STD) level. An Internet Standard, which
may simply be referred to as a Standard, is characterized by a high degree of technical maturity
and by a generally held belief that the specified protocol or service provides significant benefit to
the Internet community.
Generally, Internet Standards cover interoperability of systems on the Internet by defining
protocols, message formats, schemas, and languages. The most fundamental of the Standards
are the ones defining the Internet Protocol.
All Internet Standards are given a number in the STD series. The first document in this series,
STD 1, describes the remaining documents in the series, and has a list of Proposed Standards.
Each RFC is static; if the document is changed, it is submitted again and assigned a new RFC
number. If an RFC becomes an Internet Standard (STD), it is assigned an STD number but
retains its RFC number. When an Internet Standard is updated, its number stays the same and it
simply refers to a different RFC or set of RFCs. A given Internet Standard, STD n, may be RFCs
x and y at a given time, but later the same standard may be updated to be RFC z instead. For
example, in 2007 RFC 3700 was an Internet Standard—STD 1—and in May 2008 it was
replaced with RFC 5000, so RFC 3700 changed to Historic status, and now STD 1 is RFC 5000.
When STD 1 is updated again, it will simply refer to a newer RFC, but it will still be STD 1.
Note that not all RFCs are standards-track documents, but all Internet Standards and other
standards-track documents are RFCs.[2]
References
1. ^ "Internet Official Protocol Standards (STD 1)" (plain text). RFC Editor. May 2008. ftp://ftp.rfc-
editor.org/in-notes/std/std1.txt. Retrieved on 2008-05-25.
2. ^ Huitema, C.; Postel, J.; Crocker, S. (April 1995). "Not All RFCs are Standards (RFC 1796)".
The Internet Engineering Task Force. http://tools.ietf.org/html/rfc1796. Retrieved on 2008-05-25.
"[E]ach RFC has a status…: Informational, Experimental, or Standards Track (Proposed
Standard, Draft Standard, Internet Standard), or Historic."
The Internet Standards Process is defined in a "Best Current Practice" document BCP 9
(currently RFC 2026).
Proxy server
From Wikipedia, the free encyclopedia
A proxy server that passes requests and replies unmodified is usually called a gateway or
sometimes tunneling proxy.
A proxy server can be placed in the user's local computer or at various points between the user
and the destination servers or the Internet. A reverse proxy is a proxy used as a front-end to
accelerate and cache in-demand resources (such as a web page).
A caching proxy server accelerates service requests by retrieving content saved from a previous
request made by the same client or even other clients. Caching proxies keep local copies of
frequently requested resources, allowing large organizations to significantly reduce their
upstream bandwidth usage and cost, while significantly increasing performance. Most ISPs and
large businesses have a caching proxy. These machines are built to deliver superb file system
performance (often with RAID and journaling) and often run highly tuned implementations of
TCP. Caching proxies were the first kind of proxy server.
Some poorly implemented caching proxies have had downsides (e.g., an inability to use user
authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching
Problems).
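The core caching behaviour can be sketched in a few lines; `origin_get` is a hypothetical stand-in for the upstream fetch, not a real API:

```python
# Caching-proxy sketch: serve repeat requests from the local copy and
# contact the origin only on a miss, saving upstream bandwidth.

class CachingProxy:
    def __init__(self, origin_get):
        self.origin_get = origin_get  # stand-in for the upstream fetch
        self.cache = {}               # local copies, keyed by URL
        self.misses = 0

    def get(self, url):
        if url not in self.cache:
            self.misses += 1
            self.cache[url] = self.origin_get(url)  # fetch from origin
        return self.cache[url]
```

A real caching proxy must also honour expiry and validation headers, which is where many of the RFC 3143 problems arise.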
Another important use of the proxy server is to reduce hardware cost. An organization may
have many systems on the same network or under control of a single server, making an
individual Internet connection for each system impractical. In such a case, the individual
systems can be connected to one proxy server, and the proxy server connected to the main
server.
A proxy that focuses on WWW traffic is called a "web proxy". The most common use of a web
proxy is to serve as a web cache. Most proxy programs (e.g. Squid) provide a means to deny
access to certain URLs in a blacklist, thus providing content filtering. This is often used in a
corporate, educational or library environment, and anywhere else where content filtering is
desired. Some web proxies reformat web pages for a specific purpose or audience (e.g., cell
phones and PDAs).
AOL dialup customers used to have their requests routed through an extensible proxy that
'thinned' or reduced the detail in JPEG pictures. This sped up performance but caused problems,
either when more resolution was needed or when the thinning program produced incorrect
results. This is why in the early days of the web many web pages would contain a link saying
"AOL Users Click Here" to bypass the web proxy and to avoid the bugs in the thinning software.
A content-filtering web proxy server provides administrative control over the content that may be
relayed through the proxy. It is commonly used in both commercial and non-commercial
organizations (especially schools) to ensure that Internet usage conforms to acceptable use
policy. In some cases users can circumvent the proxy, since there are services designed to relay
information from a filtered website through a non-filtered site, allowing it through the user's proxy.
Some common methods used for content filtering include: URL or DNS blacklists, URL regex
filtering, MIME filtering, or content keyword filtering. Some products have been known to
employ content analysis techniques to look for traits commonly used by certain types of content
providers.
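The filtering methods listed above can be combined in a simple decision function. A minimal sketch follows; the rule sets here are made up for illustration, while real products ship large curated databases.

```python
import re

# Hypothetical rule sets (illustrative only, not from any real product).
URL_BLACKLIST = [
    re.compile(r"(^|\.)example-gambling\.com$"),  # DNS blacklist entry
    re.compile(r"casino", re.IGNORECASE),         # URL regex filter
]
BANNED_KEYWORDS = {"poker", "betting"}            # content keyword filter

def is_blocked(host, body=""):
    # URL/DNS blacklist and regex filtering on the host name ...
    if any(rx.search(host) for rx in URL_BLACKLIST):
        return True
    # ... plus simple keyword filtering on the fetched content.
    words = set(re.findall(r"[a-z]+", body.lower()))
    return bool(words & BANNED_KEYWORDS)
```

For example, `is_blocked("www.example-gambling.com")` is true on the host name alone, while an innocuous host is blocked only if its fetched body matches a banned keyword.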
A content filtering proxy will often support user authentication, to control web access. It also
usually produces logs, either to give detailed information about the URLs accessed by specific
users, or to monitor bandwidth usage statistics. It may also communicate with daemon-based and/or
ICAP-based antivirus software to provide protection against viruses and other malware by scanning
incoming content in real time before it enters the network.
An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize
web surfing. There are different varieties of anonymizers. One of the more common variations is
the open proxy. Because they are typically difficult to track, open proxies are especially useful to
those seeking online anonymity, from political dissidents to computer criminals. Some users are
merely interested in anonymity on principle, to facilitate constitutional human rights of freedom
of speech, for instance. The server receives requests from the anonymizing proxy server, and
thus does not receive information about the end user's address. However, the requests are not
anonymous to the anonymizing proxy server, and so a degree of trust is present between that
server and the user. Many of them are funded through a continued advertising link to the user.
Access control: Some proxy servers implement a logon requirement. In large organizations,
authorized users must log on to gain access to the web. The organization can thereby track usage
to individuals.
Some anonymizing proxy servers may forward data packets with header lines such as
HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the
IP address of the client. Other anonymizing proxy servers, known as elite or high anonymity
proxies, only include the REMOTE_ADDR header with the IP address of the proxy server,
making it appear that the proxy server is the client. A website could still suspect a proxy is being
used if the client sends packets which include a cookie from a previous visit that did not use the
high anonymity proxy server. Clearing cookies, and possibly the cache, would solve this
problem.
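The header names above suggest a rough classification of how much a proxy reveals. The mapping below is a simplified sketch (real proxies vary widely in header handling), not a definitive taxonomy.

```python
def anonymity_level(headers):
    """Rough classification of what a proxied request's headers reveal.
    Accepts either HTTP-style ("X-Forwarded-For") or CGI-style
    ("HTTP_X_FORWARDED_FOR") header names."""
    names = {n.upper().replace("HTTP_", "").replace("_", "-") for n in headers}
    if "X-FORWARDED-FOR" in names or "FORWARDED" in names:
        return "transparent"   # these headers may carry the client's IP
    if "VIA" in names:
        return "anonymous"     # proxy use is visible, client IP hidden
    return "elite"             # indistinguishable from a direct client
```

A request carrying `X-Forwarded-For` would classify as "transparent", one with only `Via` as "anonymous", and one with neither as "elite".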
Proxies can also be installed in order to eavesdrop upon the dataflow between client machines
and the web. All accessed pages, as well as all forms submitted, can be captured and analyzed by
the proxy operator. For this reason, passwords to online services (such as webmail and banking)
should always be exchanged over a cryptographically secured connection, such as SSL.
Intercepting proxies are commonly used in businesses to prevent avoidance of acceptable use
policy, and to ease administrative burden, since no client browser configuration is required.
It is often possible to detect the use of an intercepting proxy server by comparing the external IP
address to the address seen by an external web server, or by examining the HTTP headers on the
server side.
The term "transparent proxy" is most often used incorrectly to mean "intercepting proxy"
(because the client does not need to configure a proxy and cannot directly detect that its requests
are being proxied). Transparent proxies can be implemented using Cisco's WCCP (Web Cache
Communication Protocol). This proprietary protocol resides on the router and is configured from the
cache, allowing the cache to determine which ports and what traffic are sent to it via transparent
redirection from the router. This redirection can occur in one of two ways: GRE tunneling (OSI
Layer 3) or MAC rewrites (OSI Layer 2).
"A 'transparent proxy' is a proxy that does not modify the request or response beyond
what is required for proxy authentication and identification".
"A 'non-transparent proxy' is a proxy that modifies the request or response in order to
provide some added service to the user agent, such as group annotation services, media
type transformation, protocol reduction, or anonymity filtering".
The term "forced proxy" is ambiguous. It means both "intercepting proxy" (because it filters all
traffic on the only available gateway to the Internet) and its exact opposite, "non-intercepting
proxy" (because the user is forced to configure a proxy in order to access the Internet).
Forced proxy operation is sometimes necessary due to issues with the interception of TCP
connections and HTTP. For instance, interception of HTTP requests can affect the usability of a
proxy cache, and can greatly affect certain authentication mechanisms. This is primarily because
the client thinks it is talking to a server, so request headers required by a proxy cannot be
distinguished from headers that may be required by an upstream server (especially authorization
headers). Also, the HTTP specification prohibits caching of responses where the request
contained an authorization header.
Suffix proxy servers are easier to use than regular proxy servers. The concept appeared in 2003
in the form of IPv6Gate and in 2004 in the form of the Coral Content Distribution Network, but the
term suffix proxy was only coined in October 2008 by "6a.nl"[citation needed].
Because proxies might be used for abuse, system administrators have developed a number of
ways to refuse service to open proxies. Many IRC networks automatically test client systems for
known types of open proxy. Likewise, an email server may be configured to automatically test
email senders for open proxies.
Groups of IRC and electronic mail operators run DNSBLs publishing lists of the IP addresses of
known open proxies, such as AHBL, CBL, NJABL, and SORBS.
The ethics of automatically testing clients for open proxies are controversial. Some experts, such
as Vernon Schryver, consider such testing to be equivalent to an attacker portscanning the client
host. [1] Others consider the client to have solicited the scan by connecting to a server whose
terms of service include testing.
A reverse proxy is a proxy server that is installed in the neighborhood of one or more web
servers. All traffic coming from the Internet and with a destination of one of the web servers goes
through the proxy server. There are several reasons for installing reverse proxy servers:
• Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is
often not done by the web server itself, but by a reverse proxy that is equipped with SSL
acceleration hardware. See Secure Sockets Layer. Furthermore, a host can provide a
single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts; removing
the need for a separate SSL Server Certificate for each host, with the downside that all
hosts behind the SSL proxy have to share a common DNS name or IP address for SSL
connections.
• Load balancing: the reverse proxy can distribute the load to several web servers, each
web server serving its own application area. In such a case, the reverse proxy may need to
rewrite the URLs in each web page (translation from externally known URLs to the
internal locations).
• Serve/cache static content: A reverse proxy can offload the web servers by caching static
content like pictures and other static graphical content.
• Compression: the proxy server can optimize and compress the content to speed up the
load time.
• Spoon feeding: reduces resource usage caused by slow clients on the web servers by
caching the content the web server sent and slowly "spoon feeding" it to the client. This
especially benefits dynamically generated pages.
• Security: the proxy server is an additional layer of defense and can protect against some
OS- and web-server-specific attacks. However, it does not provide any protection from
attacks against the web application or service itself, which is generally considered the
larger threat.
• Extranet Publishing: a reverse proxy server facing the Internet can be used to
communicate to a firewalled server internal to an organization, providing extranet access
to some functions while keeping the servers behind the firewalls. If used in this way,
security measures should be considered to protect the rest of your infrastructure in case
this server is compromised, as its web application is exposed to attack from the Internet.
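The URL rewriting mentioned under load balancing above can be sketched as a simple prefix-to-backend mapping. The host names and paths here are invented for illustration; real reverse proxies (e.g. nginx or HAProxy) express this in configuration rather than code.

```python
# Hypothetical mapping from externally known URL prefixes to internal
# server locations, each backend serving its own application area.
ROUTES = {
    "/app1/": "http://internal-a.local:8080/",
    "/app2/": "http://internal-b.local:8080/",
}

def rewrite(external_path):
    """Translate an externally known URL path to its internal location."""
    for prefix, backend in ROUTES.items():
        if external_path.startswith(prefix):
            return backend + external_path[len(prefix):]
    return None  # no matching application area
```

An external request for `/app1/login` would be forwarded to the internal server for application area 1; paths outside any known prefix are not routed.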
Circumventor
A circumventor is a web-based page that takes a site that is blocked and "circumvents" it through
to an unblocked web site, allowing the user to view blocked pages. A famous example is elgooG,
which allowed users in China to use Google after it had been blocked there. elgooG differs from
most circumventors in that it circumvents only one block.
A September 2007 report from Citizen Lab recommended Web based proxies Proxify[2],
StupidCensorship[3], and CGIProxy.[4] Alternatively, users could partner with individuals outside
the censored network running Psiphon[5] or Peacefire/Circumventor.[6] A more elaborate approach
suggested was to run free tunneling software such as UltraSurf[7], and FreeGate,[8] or pay services
Anonymizer[9] and Ghost Surf.[10] Also listed were free application tunneling software Gpass[11]
and HTTP Tunnel,[12] and pay application software Relakks[13] and Guardster.[3] Lastly,
anonymous communication networks JAP ANON,[14] Tor,[15] and I2P[16] offer a range of
possibilities for secure publication and browsing.[4]
Students are able to access blocked sites (games, chatrooms, messenger, offensive material,
internet pornography, social networking, etc.) through a circumventor. As fast as the filtering
software blocks circumventors, others spring up. However, in some cases the filter may still
intercept traffic to the circumventor, thus the person who manages the filter can still see the sites
that are being visited.
Circumventors are also used by people who have been blocked from a web site.
Another use of a circumventor is to allow access to country-specific services, so that Internet
users from other countries may also make use of them. An example is country-restricted
reproduction of media and webcasting.
The use of circumventors is usually safe with the exception that circumventor sites run by an
untrusted third party can be run with hidden intentions, such as collecting personal information,
and as a result users are typically advised against running personal data such as credit card
numbers or passwords through a circumventor.
In some network configurations, clients attempting to access the proxy server are given different
levels of access privilege based on their computer location or even the MAC address of
the network card. However, if one has access to a system with higher access rights, that system
can be used as a proxy which the other clients then use to reach the original proxy
server, consequently altering their access privileges.
Many work places, schools, and colleges restrict the web sites and online services that are made
available in their buildings. This is done either with a specialized proxy, called a content filter
(both commercial and free products are available), or by using a cache-extension protocol such
as ICAP, that allows plug-in extensions to an open caching architecture.
Requests made to the open internet must first pass through an outbound proxy filter. The web-
filtering company provides a database of URL patterns (regular expressions) with associated
content attributes. This database is updated weekly by site-wide subscription, much like a virus
filter subscription. The administrator instructs the web filter to ban broad classes of content (such
as sports, pornography, online shopping, gambling, or social networking). Requests that match a
banned URL pattern are rejected immediately.
Assuming the requested URL is acceptable, the content is then fetched by the proxy. At this point
a dynamic filter may be applied on the return path. For example, JPEG files could be blocked
based on fleshtone matches, or language filters could dynamically detect unwanted language. If
the content is rejected then an HTTP fetch error is returned and nothing is cached.
Most web filtering companies use an internet-wide crawling robot that assesses the likelihood
that content is of a certain type (e.g., "this page is 70% likely to be pornography, 40% sports,
and 30% news" could be the outcome for one web page). The resulting database is then
corrected by manual labor based on complaints or known flaws in the content-matching
algorithms.
Web filtering proxies are not able to peer inside HTTP transactions secured by SSL/TLS. As a result,
users wanting to bypass web filtering will typically search the internet for an open and
anonymous HTTPS transparent proxy. They will then configure their browser to proxy all
requests through the web filter to this anonymous proxy. Those requests will be encrypted with
HTTPS. The web filter cannot distinguish these transactions from, say, legitimate access to a
financial website. Thus, content filters are only effective against unsophisticated users.
A special case of web proxies is "CGI proxies". These are web sites that allow a user to access a
site through them. They generally use PHP or CGI to implement the proxy functionality. These
types of proxies are frequently used to gain access to web sites blocked by corporate or school
proxies. Since they also hide the user's own IP address from the web sites they access through the
proxy, they are sometimes also used to gain a degree of anonymity, called "Proxy Avoidance".
In using a proxy server (for example, anonymizing HTTP proxy), all data sent to the service
being used (for example, HTTP server in a website) must pass through the proxy server before
being sent to the service, mostly in unencrypted form. It is therefore a feasible risk that a
malicious proxy server may record everything sent: including unencrypted logins and passwords.
By chaining proxies which do not reveal data about the original requester, it is possible to
obfuscate activities from the eyes of the user's destination. However, more traces will be left on
the intermediate hops, which could be used or offered up to trace the user's activities. If the
policies and administrators of these other proxies are unknown, the user may fall victim to a false
sense of security just because those details are out of sight and mind.
The bottom line of this is to be wary when using anonymizing proxy servers, and only use proxy
servers of known integrity (e.g., the owner is known and trusted, has a clear privacy policy, etc.),
and never use proxy servers of unknown integrity. If there is no choice but to use unknown proxy
servers, do not pass any private information (unless it is over an encrypted connection) through
the proxy.
In what is more of an inconvenience than a risk, proxy users may find themselves being blocked
from certain Web sites, as numerous forums and Web sites block IP addresses from proxies
known to have spammed or trolled the site.
References
1. ^ "How-to". Linux.org. http://www.linux.org/docs/ldp/howto/Firewall-HOWTO-11.html#ss11.4.
"The proxy server is, above all, a security device."
2. ^ Thomas, Keir (2006). Beginning Ubuntu Linux: From Novice to Professional. Apress. "A proxy
server helps speed up Internet access by storing frequently accessed pages"
3. ^ Site at www.guardster.com
4. ^ "Everyone's Guide to By-Passing Internet Censorship".
http://www.civisec.org/guides/everyones-guides.
Name resolution
From Wikipedia, the free encyclopedia
In computer science, name resolution (also called name lookup) can have one of several
meanings, discussed below.
Contents
• 1 Name resolution in computer languages
o 1.1 Static versus dynamic
• 2 Name resolution in computer networks
• 3 Name resolution in semantics and text extraction
o 3.1 Name resolution in simple text
o 3.2 Name resolution across documents
• 4 See also
The complexity of these algorithms is influenced by the sophistication of the language. For
example, name resolution in assembly language usually involves only a single simple table
lookup, while name resolution in C++ is extremely complicated because many language features
(such as scopes, namespaces, and overloading) interact with lookup.
Examples of programming languages that use static name resolution include C, C++, Java, and
Pascal. Examples of programming languages that use dynamic name resolution include Lisp,
Perl, Python, Tcl, PHP, and REBOL.
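The static/dynamic distinction can be illustrated directly in Python, one of the dynamically resolved languages listed above. This is a small sketch: a global name referenced in a function is looked up each time the function runs, whereas a statically resolved language such as C would fix the binding at compile or link time.

```python
# Python resolves the global name `greeting` when hello() is called,
# not when hello() is defined: dynamic name resolution.

greeting = "hi"

def hello():
    return greeting  # name looked up at call time

first = hello()
greeting = "hello"   # rebinding the global changes what hello() returns
second = hello()
```

Here `first` is "hi" but `second` is "hello", because the second call re-resolves the name and finds the new binding.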
For example, in the text mining field, software frequently needs to interpret the following text:
John gave Edward the book. He then stood up and called to John to come back into the room.
In these sentences, the software must determine whether the pronoun "he" refers to "John", or
"Edward" from the first sentence. The software must also determine whether the "John" referred
to in the second sentence is the same as the "John" in the first sentence, or a third person whose
name also happens to be "John". Such examples apply to almost all languages, and not just
English.
Frequently, this type of name resolution is also used across documents, for example to determine
whether the "George Bush" referenced in an old newspaper article as President of the United
States (George H. W. Bush) is the same person as the "George Bush" mentioned in a separate
news article years later about a man who is running for President (George W. Bush). Because
many people may have the same name, analysts and software must take into account
substantially more information than just a name in order to determine whether two identical
references ("George Bush") actually refer to the same specific entity or person.
Name/entity resolution in text extraction and semantics is a notoriously difficult problem, in part
because in many cases there is not sufficient information to make an accurate determination.
Numerous partial solutions exist that rely on specific contextual clues found in the data, but there
is no currently known general solution.
For examples of software that might provide name resolution benefits, see also:
• AeroText
• AlchemyAPI
• Attensity
• Autonomy
Socket
From Wikipedia, the free encyclopedia
In mechanics:
• Socket wrench, a type of wrench that uses separate, removable sockets to fit different
sizes of nuts and bolts
• Socket head screw, a screw (or bolt) with a cylindrical head containing a socket into
which the hexagonal end of an Allen wrench will fit
• Socket termination, a termination used at the ends of wire rope
• An opening in any fitting that matches the outside diameter of a pipe or tube, with a
further recessed through opening matching the inside diameter of the same pipe or tube
In biology:
• Eye socket, a region in the skull where the eyes are positioned
• Tooth socket, a cavity containing a tooth, in those bones that bear teeth
• Dry socket, a painful opening as a result of the blood not clotting after a tooth is pulled
• Ball and socket joint
In computing:
• Electrical outlet, an electrical device connected to a power source onto which another
device can be plugged or screwed in
• Antenna socket, a female antenna connector for a television cable
• Jack (connector), one of several types of electronic connectors
• CPU socket, a physical and electrical specification of how to connect a CPU to a
motherboard
• Socket: Time Dominator, a video game created by Vic Tokai on the Sega Genesis
• Socket (film), a gay-themed science-fiction indie film
Retrieved from "http://en.wikipedia.org/wiki/Socket"
IP address
From Wikipedia, the free encyclopedia
An Internet Protocol (IP) address is a numerical identification and logical address that is
assigned to devices participating in a computer network utilizing the Internet Protocol for
communication between its nodes.[1] Although IP addresses are stored as binary numbers, they
are usually displayed in human-readable notations, such as 208.77.188.166 (for IPv4), and
2001:db8:0:1234:0:567:1:1 (for IPv6). The role of the IP address has been characterized as
follows: "A name indicates what we seek. An address indicates where it is. A route indicates how
to get there."[2]
The original designers of TCP/IP defined an IP address as a 32-bit number[1] and this system,
now named Internet Protocol Version 4 (IPv4), is still in use today. However, due to the
enormous growth of the Internet and the resulting depletion of the address space, a new
addressing system (IPv6), using 128 bits for the address, was developed in 1995[3] and last
standardized by RFC 2460 in 1998.[4]
The Internet Protocol also has the task of routing data packets between networks, and IP
addresses specify the locations of the source and destination nodes in the topology of the routing
system. For this purpose, some of the bits in an IP address are used to designate a subnetwork.
The number of these bits is indicated in CIDR notation, appended to the IP address, e.g.,
208.77.188.166/24.
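The network/host split that CIDR notation encodes can be computed with Python's standard-library `ipaddress` module, using the example address from the text:

```python
import ipaddress

# The /24 suffix in 208.77.188.166/24 says the first 24 bits designate the
# subnetwork; the ipaddress module performs the split.
iface = ipaddress.ip_interface("208.77.188.166/24")

network = iface.network                    # the network part: 208.77.188.0/24
mask = iface.netmask                       # equivalent subnet mask
host_bits = 32 - iface.network.prefixlen   # bits left over for host numbering
```

Here the network is 208.77.188.0/24, the equivalent subnet mask is 255.255.255.0, and 8 bits remain for hosts.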
With the development of private networks and the threat of IPv4 address exhaustion, a group of
private address spaces was set aside by RFC 1918. These private addresses may be used by
anyone on private networks. They are often used with network address translators to connect to
the global public Internet.
The Internet Assigned Numbers Authority (IANA) manages the IP address space allocations
globally. IANA works in cooperation with five Regional Internet Registries (RIRs) to allocate IP
address blocks to Local Internet Registries (Internet service providers) and other entities.
Contents
• 1 IP versions
o 1.1 IP version 4 addresses
1.1.1 IPv4 networks
1.1.2 IPv4 private addresses
o 1.2 IPv4 address depletion
o 1.3 IP version 6 addresses
1.3.1 IPv6 private addresses
• 2 IP subnetworks
• 3 Static and dynamic IP addresses
o 3.1 Method of assignment
o 3.2 Uses of dynamic addressing
3.2.1 Sticky dynamic IP address
o 3.3 Address autoconfiguration
o 3.4 Uses of static addressing
• 4 Modifications to IP addressing
o 4.1 IP blocking and firewalls
o 4.2 IP address translation
• 5 See also
• 6 References
• 7 External links
o 7.1 RFCs
IP versions
The Internet Protocol (IP) has two versions currently in use (see IP version history for details).
Each version has its own definition of an IP address. Because of its prevalence, the generic term
IP address typically still refers to the addresses defined by IPv4.
IPv4 uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 (2^32)
possible unique addresses. IPv4 reserves some addresses for special purposes such as private
networks (~18 million addresses) or multicast addresses (~270 million addresses). This reduces
the number of addresses that can be allocated to end users and, as the number of addresses
available is consumed, IPv4 address exhaustion is inevitable. This foreseeable shortage was the
primary motivation for developing IPv6, which is in various deployment stages around the world
and is the only strategy for IPv4 replacement and continued Internet expansion.
IPv4 addresses are usually represented in dot-decimal notation (four numbers, each ranging from
0 to 255, separated by dots, e.g. 208.77.188.166). Each part represents 8 bits of the address, and
is therefore called an octet. In less common cases of technical writing, IPv4 addresses may be
presented in hexadecimal, octal, or binary representations. When converting, each octet is
usually treated as a separate number.
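Converting the example address between representations, octet by octet as described above, can be done with a few lines of Python:

```python
# Each octet of the dot-decimal address is treated as a separate number
# when converting to another base.
addr = "208.77.188.166"
octets = [int(part) for part in addr.split(".")]

as_hex = ".".join(format(o, "02X") for o in octets)  # hexadecimal per octet
as_bin = ".".join(format(o, "08b") for o in octets)  # 8-bit binary per octet
```

This yields D0.4D.BC.A6 in hexadecimal and 11010000.01001101.10111100.10100110 in binary, making the 8-bits-per-octet structure visible.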
In the early stages of development of the Internet protocol,[1] network administrators interpreted
an IP address as a structure of network number and host number. The highest order octet (most
significant eight bits) was designated the network number and the rest of the bits were called the
rest field or host identifier and were used for host numbering within a network. This method soon
proved inadequate as additional networks developed that were independent from the existing
networks already designated by a network number. In 1981, the Internet addressing specification
was revised with the introduction of classful network architecture. [2]
Classful network design allowed for a larger number of individual network assignments. The
first three bits of the most significant octet of an IP address were defined as the class of the
address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on
the class derived, the network identification was based on octet boundary segments of the entire
address. Each class used successively additional octets in the network identifier, thus reducing
the possible number of hosts in the higher order classes (B and C). The following table gives an
overview of this system.
Class | First octet in binary | Range of first octet | Network ID | Host ID | Possible number of networks | Possible number of hosts
A     | 0XXXXXXX              | 0 - 127              | a          | b.c.d   | 2^7 = 128                   | 2^24 - 2 = 16,777,214
B     | 10XXXXXX              | 128 - 191            | a.b        | c.d     | 2^14 = 16,384               | 2^16 - 2 = 65,534
C     | 110XXXXX              | 192 - 223            | a.b.c      | d       | 2^21 = 2,097,152            | 2^8 - 2 = 254
Although classful network design was a successful developmental stage, it proved unscalable in
the rapid expansion of the Internet and was abandoned when Classless Inter-Domain Routing
(CIDR) was created for the allocation of IP address blocks and new rules of routing protocol
packets using IPv4 addresses. CIDR is based on variable-length subnet masking (VLSM) to
allow allocation and routing on arbitrary-length prefixes.
Today, remnants of classful network concepts function only in a limited scope as the default
configuration parameters of some network software and hardware components (e.g. netmask),
and in the technical jargon used in network administrators' discussions.
Early network design, when global end-to-end connectivity was envisioned for communications
with all Internet hosts, intended that IP addresses be uniquely assigned to a particular computer
or device. However, it was found that this was not always necessary as private networks
developed and public address space needed to be conserved (IPv4 address exhaustion).
Computers not connected to the Internet, such as factory machines that communicate only with
each other via TCP/IP, need not have globally-unique IP addresses. Three ranges of IPv4
addresses for private networks, one range for each class (A, B, C), were reserved in RFC 1918.
These addresses are not routed on the Internet and thus their use need not be coordinated with an
IP address registry.
Today, when needed, such private networks typically connect to the Internet through network
address translation (NAT).
Start       | End             | No. of addresses
10.0.0.0    | 10.255.255.255  | 16,777,216
172.16.0.0  | 172.31.255.255  | 1,048,576
192.168.0.0 | 192.168.255.255 | 65,536
Any user may use any of the reserved blocks. Typically, a network administrator will divide a
block into subnets; for example, many home routers automatically use a default address range of
192.168.0.0 - 192.168.0.255 (192.168.0.0/24).
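The default home-router range mentioned above can be inspected with the standard-library `ipaddress` module, confirming the size and bounds of a /24 block:

```python
import ipaddress

# The common home-router default subnet from RFC 1918 private space.
net = ipaddress.ip_network("192.168.0.0/24")

count = net.num_addresses   # a /24 holds 256 addresses
first, last = net[0], net[-1]
is_private = net.is_private # RFC 1918 blocks are flagged as private
```

The block spans 192.168.0.0 through 192.168.0.255, 256 addresses in total, and the module confirms it is private (not routed on the public Internet).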
The IP version 4 address space is rapidly nearing exhaustion of available, officially assignable
address blocks.
The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the
Internet Engineering Task Force (IETF) to explore new technologies to expand the Internet's
addressing capability. The permanent solution was deemed to be a redesign of the Internet
Protocol itself. This next generation of the Internet Protocol, aimed to replace IPv4 on the
Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995.[3][4] The address size
was increased from 32 to 128 bits (16 octets), which, even with a generous assignment of
network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address
space provides the potential for a maximum of 2^128, or about 3.403 × 10^38, unique addresses.
The new design is not based on the goal to provide a sufficient quantity of addresses alone, but
rather to allow efficient aggregation of subnet routing prefixes at routing nodes. As a
result, routing table sizes are smaller, and the smallest possible individual allocation is a subnet
of 2^64 hosts, the square of the size of the entire IPv4 address space. At these levels,
actual address utilization rates will be small on any IPv6 network segment. The new design also
provides the opportunity to separate the addressing infrastructure of a network segment (that is,
the local administration of the segment's available space) from the addressing prefix used to
route external traffic for a network. IPv6 has facilities that automatically change the routing
prefix of entire networks, should the global connectivity or the routing policy change, without
requiring internal redesign or renumbering.
The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and,
where appropriate, to be aggregated for efficient routing. With a large address space, there is not
the need to have complex address conservation methods as used in classless inter-domain routing
(CIDR).
All modern desktop and enterprise server operating systems include native support for the IPv6
protocol, but it is not yet widely deployed in other devices, such as home networking routers,
voice over Internet Protocol (VoIP) and multimedia equipment, and network peripherals.
An example of an IPv6 address written in full: 2001:0db8:85a3:08d3:1319:8a2e:0370:7334
Just as IPv4 reserves addresses for private or internal networks, there are blocks of addresses set
aside in IPv6 for private addresses. In IPv6, these are referred to as unique local addresses
(ULA). RFC 4193 sets aside the routing prefix fc00::/7 for this block, which is divided into two
/8 blocks with different implied policies (cf. IPv6). The addresses include a 40-bit pseudorandom
number that minimizes the risk of address collisions if sites merge or packets are misrouted.
Early designs (RFC 3513) used a different block for this purpose (fec0::), dubbed site-local
addresses. However, the definition of what constituted sites remained unclear and the poorly
defined addressing policy created ambiguities for routing. The address range specification was
abandoned and must no longer be used in new systems.
Addresses starting with fe80:, called link-local addresses, are assigned only within the local link
area. They are usually generated automatically by the operating system's IP layer for
each network interface. This provides instant automatic network connectivity for any IPv6 host
and means that if several hosts connect to a common hub or switch, they have an instant
communication path via their link-local IPv6 address. This feature is used extensively, and
invisibly to most users, in the lower layers of IPv6 network administration (cf. Neighbor
Discovery Protocol).
None of the private address prefixes may be routed in the public Internet.
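Python's standard ipaddress module classifies the IPv6 ranges described above; a quick sketch (the specific addresses are arbitrary examples):

```python
import ipaddress

ula = ipaddress.ip_address("fd00::1")         # unique local address, inside fc00::/7
assert ula.is_private

link_local = ipaddress.ip_address("fe80::1")  # link-local, assigned per interface
assert link_local.is_link_local

site_local = ipaddress.ip_address("fec0::1")  # deprecated site-local range
assert site_local.is_site_local
```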
IP subnetworks
Main article: Subnetwork
The technique of subnetting can operate in both IPv4 and IPv6 networks. The IP address is
divided into two parts: the network address and the host identifier. The subnet mask (in IPv4
only) or the CIDR prefix determines how the IP address is divided into network and host parts.
The term subnet mask is only used within IPv4. Both IP versions, however, use the Classless
Inter-Domain Routing (CIDR) concept and notation. In this, the IP address is followed by a slash
and the number (in decimal) of bits used for the network part, also called the routing prefix. For
example, an IPv4 address and its subnet mask may be 192.0.2.1 and 255.255.255.0, respectively.
The CIDR notation for the same IP address and subnet is 192.0.2.1/24, because the first 24 bits
of the IP address indicate the network and subnet.
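The 192.0.2.1/24 example above can be reproduced with Python's ipaddress module:

```python
import ipaddress

# /24 means the first 24 bits form the network part of the address.
iface = ipaddress.ip_interface("192.0.2.1/24")
assert str(iface.network) == "192.0.2.0/24"   # network and subnet bits
assert str(iface.netmask) == "255.255.255.0"  # equivalent IPv4 subnet mask
assert iface.network.num_addresses == 256     # 2**(32-24) addresses in the subnet
```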
Static and dynamic IP addresses
When a computer is configured to use the same IP address each time it powers up, this is known
as a static IP address. In contrast, when the computer's IP address is assigned automatically, it is
known as a dynamic IP address.
Static IP addresses are manually assigned to a computer by an administrator. The exact procedure
varies according to platform. This contrasts with dynamic IP addresses, which are assigned either
by the computer interface or host software itself, as in Zeroconf, or assigned by a server using
Dynamic Host Configuration Protocol (DHCP). Even though IP addresses assigned using DHCP
may stay the same for long periods of time, they can generally change. In some cases, a network
administrator may implement dynamically assigned static IP addresses. In this case, a DHCP
server is used, but it is specifically configured to always assign the same IP address to a
particular computer. This allows static IP addresses to be configured centrally, without having to
specifically configure each computer on the network in a manual procedure.
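The "dynamically assigned static IP address" arrangement described above is commonly configured as a DHCP reservation. A minimal sketch in ISC dhcpd syntax (the host name, MAC address, and IP address are hypothetical):

```
# Always hand this machine the same address, keyed on its MAC address.
host printer1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.0.2.50;
}
```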
In the absence or failure of static or stateful (DHCP) address configurations, an operating system
may assign an IP address to a network interface using stateless autoconfiguration methods, such
as Zeroconf.
Dynamic IP addresses are most frequently assigned on LANs and broadband networks by
Dynamic Host Configuration Protocol (DHCP) servers. They are used because this avoids the
administrative burden of assigning specific static addresses to each device on a network. It also
allows many devices to share limited address space on a network if only some of them will be
online at a particular time. In most current desktop operating systems, dynamic IP configuration
is enabled by default so that a user does not need to manually enter any settings to connect to a
network with a DHCP server. DHCP is not the only technology used to assign dynamic IP
addresses. Dialup and some broadband networks use the dynamic address features of the Point-to-
Point Protocol.
A sticky dynamic IP address or sticky IP is an informal term used by cable and DSL Internet
access subscribers to describe a dynamically assigned IP address that does not change often. The
addresses are usually assigned with the DHCP protocol. Since the modems are usually powered-
on for extended periods of time, the address leases are usually set to long periods and simply
renewed upon expiration. If a modem is turned off and powered up again before the next
expiration of the address lease, it will most likely receive the same IP address.
Link-local addresses (in IPv4, the block 169.254.0.0/16) are valid only on the link, such as a
local network segment or point-to-point connection, that a host is connected to. These addresses
are not routable and, like private addresses, cannot be the source or destination of packets
traversing the Internet.
When the link-local IPv4 address block was reserved, no standards existed for mechanisms of
address autoconfiguration. Filling the void, Microsoft created an implementation called
Automatic Private IP Addressing (APIPA). Due to Microsoft's market power, APIPA has been
deployed on millions of machines and has, thus, become a de facto standard in the industry.
Many years later, the IETF defined a formal standard for this functionality, RFC 3927, entitled
Dynamic Configuration of IPv4 Link-Local Addresses.
Some infrastructure situations have to use static addressing, such as locating the Domain
Name System (DNS) servers that translate domain names to IP addresses. Static addresses are also
convenient, but not absolutely necessary, for locating servers inside an enterprise. An address
obtained from a DNS server comes with a time to live, or caching time, after which it should be
looked up again to confirm that it has not changed. Even static IP addresses can change as a result
of network administration (RFC 2072).
Firewalls are common on today's Internet. For increased network security, they control access to
private networks based on the public IP of the client. Whether using a blacklist or a whitelist, the
IP address that is blocked is the perceived public IP address of the client, meaning that if the
client is using a proxy server or NAT, blocking one IP address might block many individual
people.
Multiple client devices can appear to share IP addresses: either because they are part of a shared
hosting web server environment or because an IPv4 network address translator (NAT) or proxy
server acts as an intermediary agent on behalf of its customers, in which case the real originating
IP addresses might be hidden from the server receiving a request. A common practice is to have a
NAT hide a large number of IP addresses in a private network. Only the "outside" interface(s) of
the NAT need to have Internet-routable addresses.[5]
Most commonly, the NAT device maps TCP or UDP port numbers on the outside to individual
private addresses on the inside. Just as a telephone number may have site-specific extensions, the
port numbers are site-specific extensions to an IP address.
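The port-mapping idea can be sketched in a few lines of Python (all names and addresses are hypothetical; a real NAT also tracks protocols, timeouts, and connection state):

```python
# Minimal sketch of a NAT port-mapping table.
nat_table = {}      # public port -> (private IP, private port)
next_port = 40000   # next public port to hand out

def outbound(private_ip, private_port):
    """Allocate a public port for a connection from an inside host."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return public_port

def inbound(public_port):
    """Translate a reply arriving at a public port back to the inside host."""
    return nat_table.get(public_port)

p = outbound("192.168.1.10", 51515)
assert inbound(p) == ("192.168.1.10", 51515)
```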
In small home networks, NAT functions usually take place in a residential gateway device,
typically one marketed as a "router". In this scenario, the computers connected to the router
would have 'private' IP addresses and the router would have a 'public' address to communicate
with the Internet. This type of router allows several computers to share one public IP address.
References
• Comer, Douglas (2000). Internetworking with TCP/IP: Principles, Protocols, and
Architectures, 4th ed. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-018380-6.
http://www.cs.purdue.edu/homes/dec/netbooks.html.
1. ^ a b c RFC 760, "DOD Standard Internet Protocol". DARPA Request For
Comments. Internet Engineering Task Force. January 1980.
http://www.ietf.org/rfc/rfc0760.txt. Retrieved on 2008-07-08.
2. ^ a b RFC 791, "Internet Protocol". DARPA Request For Comments. Internet
Engineering Task Force. September 1981. http://www.ietf.org/rfc/rfc791.txt.
Retrieved on 2008-07-08.
3. ^ a b RFC 1883, "Internet Protocol, Version 6 (IPv6) Specification". Request
For Comments. The Internet Society. December 1995.
http://www.ietf.org/rfc/rfc1883.txt. Retrieved on 2008-07-08.
4. ^ a b RFC 2460, "Internet Protocol, Version 6 (IPv6) Specification". S. Deering, R.
Hinden. The Internet Society. December 1998.
5. ^ Comer pg.394
RFCs
• IPv4 addresses: RFC 791, RFC 1519, RFC 1918, RFC 2071, RFC 2072
• IPv6 addresses: RFC 4291, RFC 4192
Protocol
From Wikipedia, the free encyclopedia
Protocol may refer to:
• Communications protocol
• Protocol (computing), a set of instructions for transferring data
o Internet Protocol
• Protocol (object-oriented programming)
• Cryptographic protocol
Other
• Protocol (film)
• Protocol (band), British
This disambiguation page lists articles associated with the same title. If an internal link led you
here, you may wish to change the link to point directly to the intended article.
Retrieved from "http://en.wikipedia.org/wiki/Protocol"
Internet Protocol Suite
From Wikipedia, the free encyclopedia
The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications
protocols used for the Internet and other similar networks. It is named after two of the most
important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP),
which were the first two networking protocols defined in this standard. Today's IP networking
represents a synthesis of several developments that began to evolve in the 1960s and 1970s,
namely the Internet and LANs (Local Area Networks), which emerged in the mid- to late-1980s,
together with the advent of the World Wide Web in the early 1990s.
The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each
layer solves a set of problems involving the transmission of data, and provides a well-defined
service to the upper layer protocols based on using services from some lower layers. Upper
layers are logically closer to the user and deal with more abstract data, relying on lower layer
protocols to translate data into forms that can eventually be physically transmitted.
The TCP/IP model consists of four layers (RFC 1122).[1][2] From lowest to highest, these are the
Link Layer, the Internet Layer, the Transport Layer, and the Application Layer.
Application Layer
Transport Layer
Internet Layer
Link Layer
Contents
• 1 History
• 2 Layers in the Internet Protocol Suite
o 2.1 The concept of layers
o 2.2 Layer names and number of layers in the literature
• 3 Implementations
• 4 See also
• 5 References
• 6 Further reading
• 7 External links
History
The Internet Protocol Suite resulted from work done by Defense Advanced Research Projects
Agency (DARPA) in the early 1970s. After building the pioneering ARPANET in 1969, DARPA
started work on a number of other data transmission technologies. In 1972, Robert E. Kahn was
hired at the DARPA Information Processing Technology Office, where he worked on both
satellite packet networks and ground-based radio packet networks, and recognized the value of
being able to communicate across them. In the spring of 1973, Vinton Cerf, the developer of the
existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-
architecture interconnection models with the goal of designing the next protocol generation for
the ARPANET.
By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the
differences between network protocols were hidden by using a common internetwork protocol,
and, instead of the network being responsible for reliability, as in the ARPANET, the hosts
became responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the
CYCLADES network, with important influences on this design.
With the role of the network reduced to the bare minimum, it became possible to join almost any
networks together, no matter what their characteristics were, thereby solving Kahn's initial
problem. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work,
will run over "two tin cans and a string."
A computer called a router (a name changed from gateway to avoid confusion with other types
of gateways) is provided with an interface to each network, and forwards packets back and forth
between them. Requirements for routers are defined in RFC 1812.[3]
The idea was worked out in more detailed form by Cerf's networking research group at Stanford
in the 1973–74 period, resulting in the first TCP specification (RFC 675).[4]
(The early networking work at Xerox PARC, which produced the PARC Universal Packet
protocol suite, much of which existed around the same period of time, was also a significant
technical influence; people moved between the two.)
DARPA then contracted with BBN Technologies, Stanford University, and University College
London to develop operational versions of the protocol on different hardware platforms.
Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of
1978, and then stability with TCP/IP v4 — the standard protocol still in use on the Internet today.
In 1975, a two-network TCP/IP communications test was performed between Stanford and
University College London (UCL). In November, 1977, a three-network TCP/IP test was
conducted between sites in the US, UK, and Norway. Several other TCP/IP prototypes were
developed at multiple research centres between 1978 and 1983. The migration of the ARPANET
to TCP/IP was officially completed on January 1, 1983 when the new protocols were
permanently activated.[5]
In March 1982, the US Department of Defense declared TCP/IP as the standard for all military
computer networking.[6] In 1985, the Internet Architecture Board held a three-day workshop on
TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the
protocol and leading to its increasing commercial use.
Kahn and Cerf were honored with the Presidential Medal of Freedom on November 9, 2005 for
their contribution to American culture.
Layers in the Internet Protocol Suite
The concept of layers
The TCP/IP suite uses encapsulation to provide abstraction of protocols and services. Such
encapsulation usually is aligned with the division of the protocol suite into layers of general
functionality. In general, an application (the highest level of the model) uses a set of protocols to
send its data down the layers, being further encapsulated at each level.
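The descent through the layers can be sketched as follows; the header strings are simplified placeholders, not real wire formats:

```python
# Illustrative sketch of TCP/IP encapsulation: each layer wraps the
# payload from the layer above with its own header.
def encapsulate(app_data: bytes) -> bytes:
    tcp_segment = b"TCP_HDR|" + app_data     # Transport Layer
    ip_packet = b"IP_HDR|" + tcp_segment     # Internet Layer
    frame = b"ETH_HDR|" + ip_packet          # Link Layer
    return frame

frame = encapsulate(b"GET / HTTP/1.1")
assert frame == b"ETH_HDR|IP_HDR|TCP_HDR|GET / HTTP/1.1"
```

At the receiving host the process runs in reverse: each layer strips its own header and hands the remaining payload upward.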
This may be illustrated by an example network scenario, in which two Internet host computers
communicate across local network boundaries constituted by their internetworking gateways
(routers).
[Figure: TCP/IP stack operating on two hosts connected via two routers, and the corresponding layers used at each hop.]
[Figure: Encapsulation of application data descending through the protocol stack.]
The functional groups of protocols and methods are the Application Layer, the Transport Layer,
the Internet Layer, and the Link Layer (RFC 1122). This model was not intended to be a rigid
reference model into which new protocols must fit in order to be accepted as a standard.
The following table provides some examples of the protocols grouped in their respective layers.
Application Layer: DNS, TFTP, TLS/SSL, FTP, Gopher, HTTP, IMAP, IRC, NNTP, POP3,
SIP, SMTP, SMPP, SNMP, SSH, Telnet, Echo, RTP, PNRP, rlogin, ENRP
Internet Layer: routing protocols such as BGP and RIP, which run over TCP/UDP, may also
be considered part of the Internet Layer. OSPF for IPv4 was initially considered an IP-layer
protocol since it runs per IP subnet, but has been placed in the Link Layer since RFC 2740.
Layer names and number of layers in the literature
The literature proposes models with differing numbers and names of layers (highest to lowest):
• Five layers ("Five-layer Internet model" or "TCP/IP protocol suite"): Application, Transport, Network, Data link, Physical
• Five layers ("TCP/IP 5-layer reference model"): Application, Transport, Network, Data link, Physical
• Four+one layers ("TCP/IP reference model"): Application, Transport, Internet, Network interface, (Hardware)
• Four layers ("TCP/IP model"): Application, Host-to-host or transport, Internet, Network access (Network interface)
• Four layers ("Internet model"): Application, Transport, Internet, Link
• Four layers ("Internet model"): Application, Transport, Internet, Host-to-network
• Three layers ("Arpanet reference model"): Application, Host-to-host, Network interface
These textbooks are secondary sources that may contravene the intent of RFC 1122 and other
IETF primary sources.[14]
Different authors have interpreted the RFCs differently regarding the question whether the Link
Layer (and the TCP/IP model) covers Physical Layer issues, or if a hardware layer is assumed
below the Link Layer. Some authors have tried to use other names for the Link Layer, such as
network interface layer, to avoid confusion with the Data Link Layer of the seven-layer
OSI model. Others have attempted to map the Internet Protocol model onto the OSI Model. The
mapping often results in a model with five layers where the Link Layer is split into a Data Link
Layer on top of a Physical Layer. In literature with a bottom-up approach to Internet
communication[8][9][11], in which hardware issues are emphasized, those are often discussed in
terms of Physical Layer and Data Link Layer.
The Internet Layer is usually directly mapped into the OSI Model's Network Layer, a more
general concept of network functionality. The Transport Layer of the TCP/IP model, sometimes
also described as the host-to-host layer, is mapped to OSI Layer 4 (Transport Layer), sometimes
also including aspects of OSI Layer 5 (Session Layer) functionality. OSI's Application Layer,
Presentation Layer, and the remaining functionality of the Session Layer are collapsed into
TCP/IP's Application Layer. The argument is that these OSI layers do not usually exist as
separate processes and protocols in Internet applications.[citation needed]
However, the Internet protocol stack has never been altered by the Internet Engineering Task
Force from the four layers defined in RFC 1122. The IETF makes no effort to follow the OSI
model although RFCs sometimes refer to it. The IETF has repeatedly stated[citation needed] that
Internet protocol and architecture development is not intended to be OSI-compliant.
RFC 3439, addressing Internet architecture, contains a section entitled: "Layering Considered
Harmful".[14]
Implementations
Most operating systems in use today, including all consumer-targeted systems, include a TCP/IP
implementation.
Unique implementations include Lightweight TCP/IP (lwIP), an open-source stack designed for
embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio
systems and personal computers connected via serial lines.
References
1. ^ RFC 1122, Requirements for Internet Hosts -- Communication Layers, R. Braden (ed.), October
1989
2. ^ RFC 1123, Requirements for Internet Hosts -- Application and Support, R. Braden (ed.),
October 1989
3. ^ F. Baker (June 1995). "Requirements for IP Routers". http://www.isi.edu/in-notes/rfc1812.txt.
4. ^ V.Cerf et al. (December 1974). "Specification of Internet Transmission Control Protocol".
http://www.ietf.org/rfc/rfc0675.txt.
5. ^ Internet History
6. ^ Ronda Hauben. "From the ARPANET to the Internet". TCP Digest (UUCP).
http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt. Retrieved on 2007-07-05.
7. ^ James F. Kurose, Keith W. Ross, Computer Networking: A Top-Down Approach, 2008, ISBN
0321497708
8. ^ a b Behrouz A. Forouzan, Data Communications and Networking
9. ^ a b Douglas E. Comer, Internetworking with TCP/IP: Principles, Protocols and Architecture,
Pearson Prentice Hall 2005, ISBN 0131876716
10.^ Charles M. Kozierok, "The TCP/IP Guide", No Starch Press 2005
11.^ a b William Stallings, Data and Computer Communications, Prentice Hall 2006, ISBN
0132433109
12.^ Andrew S. Tanenbaum, Computer Networks, Prentice Hall 2002, ISBN 0130661023
13.^ Mark Dye, Mark A. Dye, Wendell, Network Fundamentals: CCNA Exploration Companion
Guide, 2007, ISBN 1587132087
14.^ a b R. Bush; D. Meyer (December 2002), Some Internet Architectural Guidelines and
Philosophy, Internet Engineering Task Force, http://www.isi.edu/in-notes/rfc3439.txt, retrieved on
2007-11-20
Web mapping
From Wikipedia, the free encyclopedia
"Web mapping is the process of designing, implementing, generating and delivering maps on
the World Wide Web. While web mapping primarily deals with technological issues, web
cartography additionally studies theoretic aspects: the use of web maps, the evaluation and
optimization of techniques and workflows, the usability of web maps, social aspects, and more.
Web GIS is similar to web mapping but with an emphasis on analysis, processing of project
specific geodata and exploratory aspects. Often the terms web GIS and web mapping are used
synonymously, even if they don't mean exactly the same. In fact, the border between web maps
and web GIS is blurry. Web maps are often a presentation media in web GIS and web maps are
increasingly gaining analytical capabilities. A special case of web maps are mobile maps,
displayed on mobile computing devices, such as mobile phones, smart phones, PDAs, GPS and
other devices. If the maps on these devices are displayed by a mobile web browser or web user
agent, they can be regarded as mobile web maps. If the mobile web maps also display context
and location sensitive information, such as points of interest, the term Location-based services
is frequently used."[1]
"The use of the web as a dissemination medium for maps can be regarded as a major
advancement in cartography and opens many new opportunities, such as realtime maps, cheaper
dissemination, more frequent and cheaper updates of data and software, personalized map
content, distributed data sources and sharing of geographic information. It also implicates many
challenges due to technical restrictions (low display resolution and limited bandwidth, in
particular with mobile computing devices, many of which are physically small, and use slow
wireless Internet connections), copyright[2] and security issues, reliability issues and technical
complexity. While the first web maps were primarily static, due to technical restrictions, today's
web maps can be fully interactive and integrate multiple media. This means that both web
mapping and web cartography also have to deal with interactivity, usability and multimedia
issues."[3]
The following list describes potential types of web maps. While the types are listed roughly in
order of increasing sophistication, the allocation is not strict: many maps fall into more than one
category, and it is not always clear that a personalized web map is more complex or sophisticated
than an interactive web map. Individual web map types are discussed below.
Static web maps
Static web maps are view-only, with no animation or interactivity. They are created only once,
often manually, and are infrequently updated. Typical graphics formats for static web maps are
PNG, JPEG, GIF, or TIFF (e.g., DRG) for raster files, and SVG, PDF, or SWF for vector files.
Often, these maps are scanned paper maps that were not designed as screen maps. Paper maps have a
much higher resolution and information density than typical computer displays of the same
physical size, and might be unreadable when displayed on screens at the wrong resolution.[4]
Dynamically created web maps
These maps are created on demand each time the user reloads the web pages, often from dynamic
data sources, such as databases. The web server generates the map using a web map server or
self-written software.
Distributed web maps
These maps are created from distributed data sources. The WMS protocol offers a standardized
method to access maps on other servers. WMS servers can collect these different sources,
reproject the map layers, if necessary, and send them back as a combined image containing all
requested map layers. One server may offer a topographic base map, while other servers may
offer thematic layers.
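As a sketch of such a request, a WMS GetMap URL might look like the following (the hostname, layer names, and bounding box are hypothetical examples; the parameters follow WMS 1.1.1):

```
http://example.org/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap
  &LAYERS=topography,rivers&STYLES=,
  &SRS=EPSG:4326&BBOX=5.9,45.8,10.5,47.8
  &WIDTH=800&HEIGHT=400&FORMAT=image/png
```

The server reprojects and composites the requested layers and returns a single image in the requested format.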
Animated web maps
Animated maps show changes in the map over time by animating one of the graphical or
temporal variables. Various data and multimedia formats and technologies allow the display of
animated web maps: SVG, Adobe Flash, Java, QuickTime, etc., with varying degrees of
interaction. Examples of animated web maps are weather maps and maps displaying dynamic
natural or other phenomena, such as water currents, wind patterns, traffic flow, trade flow, and
communication patterns.
Realtime web maps
Realtime maps show the situation of a phenomenon in close to realtime (with only a few seconds'
or minutes' delay). Data is collected by sensors and the maps are generated or updated at regular
intervals or immediately on demand. Examples are weather maps, traffic maps or vehicle
monitoring systems.
Personalized web maps
Personalized web maps allow the map user to apply his or her own data filtering, selective
content, and personal styling and map symbolization. The OGC (Open Geospatial Consortium)
provides the Styled Layer Descriptor (SLD) standard, which may be sent to a WMS server to
apply individual styles. This requires that the content and data structure of the remote WMS
server be properly documented.
Web maps in this category are usually more complex web mapping systems that offer APIs for
reuse in other people's web pages and products. Examples of such systems with APIs for reuse
are the OpenLayers framework, Yahoo! Maps, and Google Maps.
Interactive web maps
Interactivity is one of the major advantages of screen-based maps and web maps. It helps to
compensate for the disadvantages of screen and web maps. Interactivity helps to explore maps,
change map parameters, navigate and interact with the map, reveal additional information, link to
other resources, and much more. Technically, it is achieved through the combination of events,
scripting and DOM manipulations. See section on Client Side Technologies.
Analytic web maps
These web maps offer GIS analysis, either with geodata provided, or with geodata uploaded by
the map user. As already mentioned, the borderline between analytic web maps and web GIS is
blurry. Often, parts of the analysis are carried out by a serverside GIS and the client displays the
result of the analysis. As web clients gain more and more capabilities, this task sharing may
gradually shift.
Online atlases
Atlas projects often went through a renaissance when they made the transition to a web-based
project. In the past, atlas projects often suffered from expensive map production, small
circulation, and limited audiences. Updates were expensive to produce and took a long time to
reach the public. Many atlas projects, after moving to the web, can now reach a wider audience,
produce maps more cheaply, provide a larger number of maps and map types, and integrate with
and benefit from other web resources. Some atlases even ceased their printed editions after going
online, sometimes offering printing on demand features from the online edition. Some atlases
(primarily from North America) also offer raw data downloads of the underlying geospatial data
sources.
Collaborative web maps
Collaborative maps are still new, immature, and complex to implement, but show a lot of
potential. The method parallels the Wikipedia project, where various people collaborate to create
and improve maps on the web. Technically, an application allowing simultaneous editing across
the web would have to ensure that geometric features being edited by one person are locked, so
they cannot be edited by other people at the same time. Also, a minimal quality check would have
to be made before data goes public. Some collaborative map projects:
• OpenStreetMap
• WikiMapia
• meta:Maps – survey of Wikimedia map proposals on Wikipedia:Meta
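The per-feature edit locking described above can be sketched in a few lines of Python (in-memory only; a real system would also need persistence, lock timeouts, and the quality checks mentioned):

```python
# Minimal sketch of per-feature edit locking for collaborative mapping.
locks = {}  # feature_id -> user currently holding the edit lock

def acquire(feature_id, user):
    """Return True if `user` now holds the edit lock on the feature."""
    holder = locks.setdefault(feature_id, user)
    return holder == user

def release(feature_id, user):
    """Release the lock, but only if `user` actually holds it."""
    if locks.get(feature_id) == user:
        del locks[feature_id]

assert acquire("way/42", "alice")    # alice locks the feature
assert not acquire("way/42", "bob")  # bob must wait for the lock
release("way/42", "alice")
assert acquire("way/42", "bob")      # now bob can edit
```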
[Figure: A surface weather analysis for the United States on October 21, 2006.]
Advantages of web maps
• Web maps can easily deliver up to date information. If maps are generated automatically
from databases, they can display information in almost realtime. They don't need to be
printed, mastered and distributed. Examples:
o A map displaying election results, as soon as the election results become
available.
o A map displaying the traffic situation in near realtime, using traffic data collected
by sensor networks.
o A map showing the current locations of mass transit vehicles such as buses or
trains, allowing patrons to minimize their waiting time at stops or stations, or be
aware of delays in service.
o Weather maps, such as NEXRAD.
• Software and hardware infrastructure for web maps is cheap. Web server hardware is
cheaply available and many open source tools exist for producing web maps.
• Product updates can easily be distributed. Because web maps distribute both logic and
data with each request or loading, product updates can happen every time the web user
reloads the application. In traditional cartography, when dealing with printed maps or
interactive maps distributed on offline media (CD, DVD, etc.), a map update required
serious effort, triggering a reprint or remastering as well as a redistribution of the media.
With web maps, data and product updates are easier, cheaper, and faster, and can occur
more often.
• They work across browsers and operating systems. If web maps are implemented based
on open standards, the underlying operating system and browser do not matter.
• Web maps can combine distributed data sources. Using open standards and documented
APIs one can integrate (mash up) different data sources, if the projection system, map
scale and data quality match. The use of centralized data sources removes the burden for
individual organizations to maintain copies of the same data sets. The down side is that
one has to rely on and trust the external data sources.
• Web maps allow for personalization. By using user profiles, personal filters and personal
styling and symbolization, users can configure and design their own maps, if the web
mapping system supports personalization. Accessibility issues can be treated in the same
way. If users can store their favourite colors and patterns they can avoid color
combinations they can't easily distinguish (e.g. due to color blindness).
• Web maps enable collaborative mapping. Similar to the Wikipedia project, web mapping
technologies, such as DHTML/Ajax, SVG, Java, Adobe Flash, etc. enable distributed
data acquisition and collaborative efforts. Examples for such projects are the
OpenStreetMap project or the Google Earth community. As with other open projects,
however, quality assurance is very important.
• Web maps support hyperlinking to other information on the web. Just like any other web
page or a wiki, web maps can act like an index to other information on the web. Any
sensitive area in a map, a label text, etc. can provide hyperlinks to additional information.
As an example a map showing public transport options can directly link to the
corresponding section in the online train time table.
• It is easy to integrate multimedia in and with web maps. Current web browsers support
the playback of video, audio and animation (SVG, SWF, Quicktime, and other
multimedia frameworks).
[edit] Disadvantages of web maps and problematic issues
• Reliability issues – the reliability of the internet and web server infrastructure is not yet
good enough. Especially if a web map relies on external, distributed data sources, the
original author often cannot guarantee the availability of the information.
• Geodata is expensive – Unlike in the US, where geodata collected by governmental
institutions is usually available for free or cheap, geodata is usually very expensive in
Europe or other parts of the world.
• Bandwidth issues – Web maps usually need a relatively high bandwidth.
• Limited screen space – As with other screen-based maps, web maps have the problem of
limited screen space. This is a particular problem for mobile web maps and location-based
services, where maps have to be displayed on very small screens with resolutions as
low as 100×100 pixels. Hopefully, technological advances will help to overcome these
limitations.
• Quality and accuracy issues – Many web maps are of poor quality, both in symbolization,
content and data accuracy.
• Complex to develop – Despite the increasing availability of free and commercial tools to
create web mapping and web GIS applications, it is still a complex task to create
interactive web maps. Many technologies, modules, services and data sources have to be
mastered and integrated.
• Immature development tools – Compared to the development of standalone applications
with integrated development tools, the development and debugging environments of a
conglomerate of different web technologies is still awkward and uncomfortable.
• Copyright issues – Many people are still reluctant to publish geodata, especially given that
geodata is expensive in some parts of the world. They fear copyright infringement by
other people using their data without proper requests for permission.
• Privacy issues – With detailed information available and the combination of distributed
data sources, it is possible to find out and combine a lot of private and personal
information of individual persons. Properties and estates of individuals are now
accessible through high resolution aerial and satellite images throughout the world to
anyone.
Event types
• Cartography-related events
• Technical events directly related to web mapping
• General technical events
• 1989–09: Birth of the WWW, WWW invented at CERN for the exchange of research
documents.[6]
• 1990–12: First Web Browser and Web Server, Tim Berners-Lee wrote first web browser[7]
and web server.
• 1991–04: HTTP 0.9[8] protocol, Initial design of the HTTP protocol for communication
between browser and server.
• 1991–06: ViolaWWW 0.8 Browser, The first popular web browser. Written for X11 on
Unix.
• 1991–08: WWW project announced in public newsgroup, This is regarded as the debut
date of the Web. Announced in newsgroup alt.hypertext.
• 1992–06: HTTP 1.0[8] protocol, Version 1.0 of the HTTP protocol. Introduces the POST
method. (Persistent connections arrived later, with HTTP 1.1.)
• 1993–04: CERN announced web as free, CERN announced that access to the web will be
free for all.[9] The web gained critical mass.
• 1993–06: HTML 1.0.[10] The first version of HTML,[11] published by T. Berners-Lee and
Dan Connolly.
• 1993–07: Xerox PARC Map Viewer, The first mapserver based on CGI/Perl, allowed
reprojection styling and definition of map extent.
• 1994–06: The National Atlas of Canada, The first version of the National Atlas of
Canada was released. Can be regarded as the first online atlas.
• 1994–10: Netscape Browser 0.9 (Mosaic), The first version of the highly popular browser
Netscape Navigator.
• 1995–03: Java 1.0, The first public version of Java.
• 1995–11: HTML 2.0,[10] Introduced forms, file upload, internationalization and client-side
image maps.
• 1995–12: Javascript 1.0, Introduced first script based interactivity.
• 1995: MapGuide, First introduced as Argus MapGuide.
• 1996–01: JDK 1.0, First version of the Sun JDK.
• 1996–02: Mapquest, The first popular online Address Matching and Routing Service
with mapping output.
• 1996–06: MultiMap, The UK-based MultiMap website launched offering online
mapping, routing and location based services. Grew into one of the most popular UK web
sites.
• 1996–11: Geomedia WebMap 1.0, First version of Geomedia WebMap, already supports
vector graphics through the use of ActiveCGM.[12]
• 1996–fall: MapGuide, Autodesk acquired Argus Technologies and introduced Autodesk
MapGuide 2.0.
• 1996–12: Macromedia Flash 1.0, First version of the Macromedia Flash plugin.
• 1997–01: HTML 3.2,[10] Introduced tables, applets, script elements, multimedia elements,
flowtext around images, etc.
• 2003–06: NASA World Wind, NASA World Wind Released. An open virtual globe that
loads data from distributed resources across the internet. Terrain and buildings can be
viewed 3 dimensionally. The (XML based) markup language allows users to integrate
their own personal content. This virtual globe needs special software and doesn't run in a
web browser.
• 2003–07: UMN MapServer 4.0, Adds 24bit raster output support and support for PDF
and SWF.
• 2003–09: Flash Player 7, This introduced ActionScript 2.0 (ECMAScript 2.0 compatible
(improved object orientation)). Also initial Video Playback support.
• 2004–07: OpenStreetMap was founded by Steve Coast. OSM is a web-based
collaborative project to create a world map under a free license.
• 2005–01: Nikolas Schiller creates the interactive "Inaugural Map"[14] of downtown
Washington, DC
• 2005–02: Google Maps, The first version of Google Maps. Based on raster tiles
organized in a quad tree scheme, data loading done with XMLHttpRequests. This
mapping application became highly popular on the web, partly because it allowed other
people to integrate Google Maps services into their own websites.
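The quad-tree tile scheme mentioned above can be sketched in Python: at zoom level z the world is divided into 2^z × 2^z tiles, and a latitude/longitude pair maps to tile indices via the spherical Mercator projection. This is the standard "slippy map" tile calculation, not Google's actual code:

```python
import math

def deg2tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    """Convert WGS84 coordinates to (x, y) tile indices at a zoom level."""
    n = 2 ** zoom                      # tiles per axis at this zoom
    lat_rad = math.radians(lat_deg)
    x = int((lon_deg + 180.0) / 360.0 * n)
    # Spherical Mercator: y grows from the north pole southwards.
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# At zoom 0 the whole world is a single tile.
print(deg2tile(0.0, 0.0, 0))   # (0, 0)
```

Each tile is then fetched as a small raster image, which is what makes incremental loading with XMLHttpRequests practical.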
• 2005–04: UMN MapServer 4.6, Adds support for SVG.
• 2005–06: Google Earth, The first version of Google Earth was released building on the
virtual globe metaphor. Terrain and buildings can be viewed 3 dimensionally. The KML
(XML based) markup language allows users to integrate their own personal content. This
virtual globe needs special software and doesn't run in a web browser.
• 2005–11: Firefox 1.5, First Firefox release with native SVG support. Supports Scripting
but no animation.
• 2006–05: WikiMapia launched.
• 2006–06: Opera 9, Opera releases version 9 with extensive SVG support (including
scripting and animation).
• 2006–08: SVG 1.2[13] Mobile Candidate Recommendation, This SVG Mobile Profile
introduces improved multimedia support and many features required to build online Rich
Internet Applications.
• Web server – The web server is responsible for handling HTTP requests from web browsers
and other user agents. In the simplest case it serves static files, such as HTML pages or
static image files. Web servers also handle authentication, content negotiation, server-side
includes and URL rewriting, and they forward requests to dynamic resources, such as CGI
applications or server-side scripting languages. The functionality of a web server can
usually be enhanced using modules or extensions. The most popular web server is
Apache, followed by Microsoft Internet Information Services (IIS) and others.
o CGI (common gateway interface) applications are executables running on the
webserver under the environment and user permissions of the webserver user.
They may be written in any programming language (compiled) or scripting
language (e.g. Perl). A CGI application implements the common gateway
interface protocol, processes the information sent by the client, does whatever the
application should do and sends the result back in a web-readable form to the
client. As an example a web browser may send a request to a CGI application for
getting a web map with a certain map extent, styling and map layer combination.
The result is an image in a format such as JPEG, PNG or SVG. For performance
enhancement one can also use FastCGI, which loads the application after the web
server is started and keeps it in memory, eliminating the need to spawn a separate
process each time a request is made.
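The CGI flow described above can be sketched as a minimal Python script. The parameter names (bbox, layers) are illustrative, not any real map server's interface, and a real implementation would render actual image bytes instead of echoing the parameters:

```python
import os
from urllib.parse import parse_qs

def handle_request(query_string: str) -> str:
    """Parse the CGI query string and build a (toy) response body."""
    params = parse_qs(query_string)
    bbox = params.get("bbox", ["-180,-90,180,90"])[0]
    layers = params.get("layers", [""])[0]
    # A real map CGI would render and return image bytes here.
    return f"Would render layers [{layers}] for extent [{bbox}]"

if __name__ == "__main__":
    # The web server passes request parameters via environment variables.
    body = handle_request(os.environ.get("QUERY_STRING", ""))
    print("Content-Type: text/plain")   # CGI response header
    print()                             # blank line separates headers from body
    print(body)
```

The script reads the request from environment variables set by the web server and writes headers plus body to standard output, which is the essence of the common gateway interface.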
o Alternatively, one can use scripting languages built into the webserver as a
module, such as PHP, Perl, Python, ASP, Ruby, etc. If built into the web server as
a module, the scripting engine is already loaded and doesn't have to be loaded
each time a request is being made.
• Web application servers are middleware which connects various software components
with the web server and a programming language. As an example, a web application
server can enable the communication between the API of a GIS and the webserver, a
spatial database or other proprietary applications. Typical web application servers are
written in Java, C, C++, C# or scripting languages. Web application servers are also
useful when developing complex realtime web mapping applications or Web GIS.
• Spatial databases are usually object relational databases enhanced with geographic data
types, methods and properties. They are necessary whenever a web mapping application
has to deal with dynamic data (that changes frequently) or with huge amounts of
geographic data. Spatial databases allow spatial queries, sub selects, reprojections,
geometry manipulations and offer various import and export formats. A popular example
for an open source spatial database is PostGIS. MySQL also implements some spatial
features, although not as mature as PostGIS. Commercial alternatives are Oracle Spatial
or spatial extensions of Microsoft SQL Server and IBM DB2. The OGC Simple Features
for SQL Specification is a standard geometry data model and operator set for spatial
databases. Most spatial databases implement this OGC standard.
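A spatial query of the kind described above might look as follows. The sketch only builds the SQL text for PostGIS; ST_DWithin and ST_MakePoint are real PostGIS functions, but the table name and its geography column are made up for illustration:

```python
def dwithin_query(table: str, lon: float, lat: float, meters: float) -> str:
    """Build a PostGIS query selecting rows within a distance of a point.

    ST_DWithin and ST_MakePoint are standard PostGIS functions; the table
    and its 'geom' geography column are hypothetical.
    """
    return (
        f"SELECT * FROM {table} "
        f"WHERE ST_DWithin(geom, "
        f"ST_MakePoint({lon}, {lat})::geography, {meters});"
    )

# All points of interest within 500 m of a location in Zurich.
print(dwithin_query("poi", 8.54, 47.37, 500.0))
```

In production the coordinates would be passed as bound query parameters rather than interpolated into the string, to avoid SQL injection.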
• WMS servers are specialized web mapping servers implemented as CGI applications,
Java servlets or other web applications. They either work as standalone web servers
or in collaboration with existing web servers or web application servers (the general
case). WMS servers generate maps on request, using parameters such as map layer
order, styling/symbolization, map extent, data format and projection. The Open
Geospatial Consortium (OGC) defined the WMS standard to specify the map requests
and the return data formats. Typical image formats for the map result are PNG, JPEG,
GIF or SVG. There are open-source WMS servers such as UMN MapServer and Mapnik;
commercial alternatives exist from most GIS vendors, such as ESRI ArcIMS, Intergraph
Geomedia WebMap and others.
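A WMS GetMap request is just an HTTP request with well-defined parameters. As a sketch, the parameter names come from the OGC WMS 1.1.1 specification, while the server URL and layer name here are hypothetical:

```python
from urllib.parse import urlencode

def getmap_url(base: str, layers: str, bbox: str, width: int, height: int,
               fmt: str = "image/png", srs: str = "EPSG:4326") -> str:
    """Assemble a WMS 1.1.1 GetMap URL from its standard parameters."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layers,        # comma-separated layer names
        "STYLES": "",            # empty string requests default styling
        "SRS": srs,              # spatial reference system
        "BBOX": bbox,            # minx,miny,maxx,maxy in SRS units
        "WIDTH": width,          # output image size in pixels
        "HEIGHT": height,
        "FORMAT": fmt,           # requested image format
    }
    return base + "?" + urlencode(params)

print(getmap_url("http://example.com/wms", "roads", "5,45,11,48", 800, 600))
```

Because the request is a plain URL, any HTTP client, browser, or other WMS client can fetch the resulting map image.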
Web server
From Wikipedia, the free encyclopedia
The term web server or webserver can mean one of two things:
1. A computer program that is responsible for accepting HTTP requests from clients (user
agents such as web browsers), and serving them HTTP responses along with optional
data contents, which usually are web pages such as HTML documents and linked objects
(images, etc.).
2. A computer that runs a computer program as described above.
Contents
• 1 Common features
• 2 Origin of returned content
• 3 Path translation
• 4 Load limits
o 4.1 Overload causes
o 4.2 Overload symptoms
o 4.3 Anti-overload techniques
• 5 Historical notes
• 6 Market structure
• 7 See also
• 8 External links
Although web server programs differ in detail, they all share some basic common features.
1. HTTP: every web server program operates by accepting HTTP requests from the client,
and providing an HTTP response to the client. The HTTP response usually consists of an
HTML or XHTML document, but can also be a raw file, an image, or some other type of
document (defined by MIME types). If an error is found in the client request, or while
trying to serve it, the web server has to send an error response, which may include some
custom HTML or text messages to better explain the problem to end users.
2. Logging: web servers usually also have the capability of logging detailed information
about client requests and server responses to log files; this allows the webmaster to
collect statistics by running log analyzers on these files.
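The log analysis mentioned in point 2 can be sketched as follows, assuming the widespread Common Log Format; the sample log lines are invented:

```python
import re
from collections import Counter

# Common Log Format: host ident user [timestamp] "request" status bytes
CLF = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)')

def status_counts(lines):
    """Tally HTTP response status codes from access-log lines."""
    counts = Counter()
    for line in lines:
        m = CLF.match(line)
        if m:
            counts[int(m.group(4))] += 1
    return counts

log = [
    '10.0.0.1 - - [21/Oct/2006:13:55:36 +0200] "GET /path/file.html HTTP/1.1" 200 2326',
    '10.0.0.2 - - [21/Oct/2006:13:55:40 +0200] "GET /missing HTTP/1.1" 404 512',
]
print(status_counts(log))   # Counter({200: 1, 404: 1})
```

Real log analyzers extend this idea to per-URL hit counts, referrer statistics, traffic volumes and so on.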
Serving static content is usually much faster (from 2 to 100 times) than serving dynamic
content, especially if the latter involves data pulled from a database.
For a static request the URL path specified by the client is relative to the Web server's root
directory.
http://www.example.com/path/file.html
The client's web browser will translate it into a connection to www.example.com with the
following HTTP 1.1 request:
GET /path/file.html HTTP/1.1
Host: www.example.com
The web server on www.example.com will append the given path to the path of its root directory.
On Unix machines, this is commonly /var/www. The result is the local file system resource:
/var/www/path/file.html
The web server will then read the file, if it exists, and send a response to the client's web
browser. The response will describe the content of the file and contain the file itself.
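The path translation step can be sketched in Python. The root directory is the conventional /var/www from the example above; the sketch also normalizes "." and ".." segments, since a real server must prevent requests from escaping the root directory:

```python
import posixpath

def translate_path(root: str, url_path: str):
    """Map a URL path onto the server's root directory, refusing escapes."""
    # Normalize away '.' and '..' segments, then anchor under the root.
    clean = posixpath.normpath(posixpath.join("/", url_path))
    full = root.rstrip("/") + clean
    # Defense in depth: the result must still live under the root.
    return full if full.startswith(root.rstrip("/") + "/") else None

print(translate_path("/var/www", "/path/file.html"))   # /var/www/path/file.html
print(translate_path("/var/www", "/../etc/passwd"))    # /var/www/etc/passwd (normalized)
```

Because the path is anchored at "/" before normalization, leading ".." segments are collapsed and the result always stays inside the document root.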
When a web server is near to or over its load limits, it becomes overloaded and thus
unresponsive. Causes of overload include:
• Too much legitimate web traffic (i.e. thousands or even millions of clients hitting the
web site in a short interval of time. e.g. Slashdot effect);
• DDoS (Distributed Denial of Service) attacks;
• Computer worms that sometimes cause abnormal traffic because of millions of infected
computers (not coordinated among them);
• XSS viruses can cause high traffic because of millions of infected browsers and/or web
servers;
• Internet web robots traffic not filtered/limited on large web sites with very few
resources (bandwidth, etc.);
• Internet (network) slowdowns, so that client requests are served more slowly and the
number of connections increases so much that server limits are reached;
• Partial unavailability of web servers (computers); this can happen because of required or
urgent maintenance or upgrades, hardware or software failures, back-end (e.g. database) failures, etc.; in
these cases the remaining web servers get too much traffic and become overloaded.
Symptoms of an overloaded web server include:
• requests are served with (possibly long) delays (from 1 second to a few hundred
seconds);
• 500, 502, 503, 504 HTTP errors are returned to clients (sometimes also unrelated 404
error or even 408 error may be returned);
• TCP connections are refused or reset (interrupted) before any content is sent to clients;
• in very rare cases, only partial contents are sent (but this behavior may well be considered
a bug, even if it usually depends on unavailable system resources).
To partially overcome the above load limits and to prevent overload, most popular web sites
use a number of common anti-overload techniques.
Between 1991 and 1994 the simplicity and effectiveness of early technologies used to surf and
exchange data through the World Wide Web helped to port them to many different operating
systems and spread their use among many different social groups, first in scientific
organizations, then in universities and finally in industry.
In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium to regulate the
further development of the many technologies involved (HTTP, HTML, etc.) through a
standardization process.
The following years are recent history which has seen an exponential growth of the number of
web sites and servers.
See Category:Web server software for a longer list of HTTP server programs.
Internet resource management has been the domain of Internet technicians in managing the
addressing structure of the Internet to enable the explosive growth of Internet use, and to have
enough addressing space for that growth. There is, however, a different version of this concept
used by Sequel Technology upon its release of "Internet Resource Manager". This concept has to
do with managing all network resources available to an enterprise, obtaining a view of exactly
what resources are being used, and a tool to manage those resources in tandem with Acceptable
Use Policies to reduce the total cost of Internet ownership.
This Sequel concept of "Internet Resource Management" has since moved further from the highly
technical term towards a management process taken up at the enterprise level by
visionGateway. In this concept, "Internet Resources" are all those network and Internet
templates, tools, software systems, gateways and websites that an enterprise owns or manages,
or those systems and sites that the enterprise may have access to through a network.
Every day, the Internet resources of a business are used by employees, administrators, third
parties, customers and the general public. Managers want to know that employees are able to
reach any server across the Internet that has the information or services they need to get their
jobs done. More and more services, such as accounting packages, Customer Relationship
Management tools, spreadsheets and document editors, are obtained from servers outside the
local area network of the employee.
In addition, however, employees with Internet access now also have available the biggest
entertainment, video playing, music acquisition, shopping and banking facilities ever, and more.
A study of 10,000 Internet users reported that 44.7% of workers use the Internet for an average
of 2.9 hours a day on personal web/Internet activity. Managing these resources at the enterprise
level involves considering:
• the role a connected user from the business needs to play when online,
• the range of services that this particular role may require such as whether connection to
the Internet for this role requires to see only the software packages to which the company
subscribes,
• how the employee needs to connect to the Internet for example desktop computer, laptop,
PDA, mobile phone etc
• connectivity for the employee say whether we need to give this employee special online
support for handling large scale documents, maybe some special packet switching, or
additional security levels
• the range of online assets to be made available to this employee, usually managed by
category, such as Banking assets where all Banks online would be seen by an employee,
or a catalogue of small to medium businesses.
While the software packages named above could provide a management team with the tools
needed to manage Internet connectivity within an organization, significant difficulties would
arise in the cross-over of one tool with another. While there are cross-over issues from one
package to another, there are also significant gaps where there is no product offering. Current
offerings take one of two broad approaches:
• filter out (censor) all the sites that would not be useful to an employee while doing
business things on a workplace computer (Secure Computing's Gateway Security,
SurfControl Enterprise Protection Suite, and Websense Enterprise)
• a self-management approach where employees are given an account, and in that account
it tallies total time on the Internet, activity by protocol, and policies required of the
company (visionGateway's INTERScepter)
Spying and censoring have huge drawbacks. Spying creates a rift between employers and
employees and does nothing for morale in an organization. After the first two or three examples
of employees being "caught out", the deterrent effect of spying diminishes, while employee
ingenuity at "getting around" being spied upon grows.
Although censoring tools enjoy majority market share, censoring has major drawbacks: the main
filtering database can be a bottleneck while employees are working, and a database of filtered
sites is nearly impossible to keep up to date, because thousands of new sites spring up every day
and it is impractical for staff to review them all for the blacklist. Machine listing of sites on
filters is not a perfect science, and many good sites are wrongly filtered out.
A self-management approach has fewer drawbacks; however, management would need to play an
active part, reviewing with employees their goals for the month and dutifully revisiting with each
staff member how they are progressing in meeting company goals for Internet use.
Important:
Be sure that your system meets all the requirements listed and is running only supported versions of all
software. For best results, start with a system with only the Windows® operating system, Internet
Information Server (IIS), and the latest service pack and security patches installed. Otherwise, Tivoli
Compliance Insight Manager might not work correctly.
General
• Minimum screen resolution for working with the Tivoli Compliance Insight Manager and
its components is 1024x768.
• The Server, Management Console, and Actuator components of Tivoli Compliance Insight
Manager use a number of network ports for communication. The default base port
number is 5992; this port is used for Point-of-Presence <-> Server communication. This
communication requires that a two-way connection can be made on the base port
between the Tivoli Compliance Insight Manager Server and the Point-of-Presence. The
base port + 1 (5993 by default) is used by the Management Console to connect to the
server (but always locally on a server or a Point-of-Presence). Other local ports are
assigned dynamically in the range 49152-65535 for the Tivoli Compliance Insight
Manager server's internal communication. Ports that are already in use in this range are
detected and skipped. If the default base port (5992) and the next sequential port (5993)
are not available in your environment, or on a particular system, use the Add Machine
wizard to specify a different base port number.
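Whether the base port and base port + 1 are actually free on a host can be checked with a short script. This is a generic availability check sketched in Python, not a Tivoli tool:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if a TCP socket can be bound to the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def check_base_port(base: int = 5992) -> dict:
    """Check the default base port and base port + 1 used by the server."""
    return {port: port_is_free(port) for port in (base, base + 1)}

print(check_base_port())   # e.g. {5992: True, 5993: True}
```

Note that a local bind test only proves the port is unused on this host; two-way reachability through intervening routers and firewalls must still be verified separately.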
Note:
Ensure that the base port and the base port + 1 are available locally on the Tivoli
Compliance Insight Manager Server and Point-of-Presence. Verify that any network
devices, such as routers and firewalls, that are located between the Tivoli Compliance
Insight Manager Server and the Point-of-Presence permit two-way network traffic over
the base port.
Monitored Windows event sources require TCP 139 with a one-way connection.
Monitored UNIX SSH event sources require TCP 22 (by default) with a one-way
connection.
DB2 is used between the Enterprise Server and Standard Server. Ports 50000 and
50001 are used by default, but you can specify different ports during installation.
• In all cases, connect the systems hosting the Tivoli Compliance Insight Manager
components to the Server through a TCP/IP network.
• The Tivoli Compliance Insight Manager Setup Program installs Java™ 1.4.2 in the
C:\Program Files directory if the program does not find a valid version installed. If a
custom JRE installation path is desired, install Java 1.4.2 manually. For best results,
use the version provided with Tivoli Compliance Insight Manager 8.5. The Java
installation package can be found in the NT\Support\Java folder on the CD labeled IBM
Tivoli Compliance Insight Manager for Windows 2003 CD 3 of 3. Select the custom install
option and select the Support for Additional Languages option.
Important:
Other versions of Java are not supported. If you use a version of Java other than the one
provided, unpredictable results might occur.
For detailed information about determining the required memory and disk space, see
Determining disk space and memory requirements.
Software requirements
The amount of disk space required for the log files on the audited systems depends on the
amount of activity, the log settings, and the IBM Tivoli Compliance Insight Manager
collection schedule. For guidelines for calculating disk space, see Determining disk space
and memory requirements.
Software prerequisites
• Install the Standard Server and the Management Console before installing the Actuator.
For details, see Installing a Standard Server.
• The Actuator must have access to the Tivoli Compliance Insight Manager servers through
a TCP/IP network.
• The Server, Management Console, and Actuator components use the network ports
described under General above: the base port (5992 by default) for Point-of-Presence <->
Server communication, base port + 1 (5993 by default) for the Management Console, and
dynamically assigned local ports in the range 49152-65535. If the default ports are not
available, use the Add Machine wizard to specify a different base port number, and verify
that any routers and firewalls between the Tivoli Compliance Insight Manager Server and
the Point-of-Presence permit two-way network traffic over the base port.
• Connect the Actuator to the Tivoli Compliance Insight Manager servers through a TCP/IP
network.
• To work with the Tivoli Compliance Insight Manager and its components, use a minimum
screen resolution of 1024x768.