
UNIT -III

Computer Network Overview

Computer networking, or data communication, is one of the most important parts of
information technology. Today every business in the world needs a computer
network for smooth operations, flexibility, instant communication and data access.
Just imagine how difficult it would be to communicate in university campuses,
hospitals, multinational organizations and educational institutes if there were no
network communication. In this article you will learn the basic overview of a
computer network. The targeted audience of this article is people who want to know
about the network communication system, network standards and types.

A computer network is made up of connectivity devices and components. Sharing
data and resources between two or more computers is known as networking. There
are different types of computer networks, such as LAN, MAN, WAN and wireless
networks. The key devices that make up the infrastructure of a computer network
are the hub, switch, router, modem, access point, LAN card and network cables.

LAN stands for local area network; a network in a room, in a building or over a
small distance is known as a LAN. MAN stands for metropolitan area network; it
covers networking between two offices within a city. WAN stands for wide area
network; it covers networking between two or more computers in different cities,
countries or continents.

There are different topologies of a computer network. A topology defines the
physical layout or design of a network. These topologies include star topology, bus
topology, mesh topology, star-bus topology, etc. In a star topology each computer in
the network is directly connected to a centralized device known as a hub or switch.
If any computer in a star topology develops a problem, it does not affect the other
computers in the network.

There are different standards and devices in a computer network. The most
commonly used standard for a local area network is Ethernet. Key devices in a
computer network are the hub, switch, router, modem and access point. A router is
used to connect two logically and physically different networks; all communication
on the Internet is based on routers. A hub or switch is used to connect the
computers in a local area network.

Hopefully, in this article you have learnt what a computer network is, how important
it is in our lives, what the different network devices, standards and topologies are,
and what communication types exist.

Types of Computer Network

One way to categorize the different types of computer network designs is by their
scope or scale. For historical reasons, the networking industry refers to nearly every
type of design as some kind of area network. Common examples of area network
types are:

•LAN - Local Area Network


•WLAN - Wireless Local Area Network
•WAN - Wide Area Network
•MAN - Metropolitan Area Network
•SAN - Storage Area Network, System Area Network, Server Area Network, or
sometimes Small Area Network
•CAN - Campus Area Network, Controller Area Network, or sometimes Cluster Area
Network
•PAN - Personal Area Network
•DAN - Desk Area Network
LAN and WAN were the original categories of area networks, while the others have
gradually emerged over many years of technology evolution.
Note that these network types are a separate concept from network topologies such
as bus, ring and star.

LAN - Local Area Network


A LAN connects network devices over a relatively short distance. A networked office
building, school, or home usually contains a single LAN, though sometimes one
building will contain a few small LANs (perhaps one per room), and occasionally a
LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but
not always implemented as a single IP subnet.
In addition to operating in a limited space, LANs are also typically owned,
controlled, and managed by a single person or organization. They also tend to use
certain connectivity technologies, primarily Ethernet and Token Ring.

WAN - Wide Area Network


As the term implies, a WAN spans a large physical distance. The Internet is the
largest WAN, spanning the Earth.
A WAN is a geographically-dispersed collection of LANs. A network device called a
router connects LANs to a WAN. In IP networking, the router maintains both a LAN
address and a WAN address.

A WAN differs from a LAN in several important ways. Most WANs (like the Internet)
are not owned by any one organization but rather exist under collective or
distributed ownership and management. WANs tend to use technology like ATM,
Frame Relay and X.25 for connectivity over the longer distances.

LAN, WAN and Home Networking


Residences typically employ one LAN and connect to the Internet WAN via an
Internet Service Provider (ISP) using a broadband modem. The ISP provides a WAN
IP address to the modem, and all of the computers on the home network use LAN
(so-called private) IP addresses. All computers on the home LAN can communicate
directly with each other but must go through a central gateway, typically a
broadband router, to reach the ISP.
Other Types of Area Networks
While LAN and WAN are by far the most popular network types mentioned, you may
also commonly see references to these others:
•Wireless Local Area Network - a LAN based on WiFi wireless network technology
•Metropolitan Area Network - a network spanning a physical area larger than a LAN
but smaller than a WAN, such as a city. A MAN is typically owned and operated by a
single entity such as a government body or large corporation.
•Campus Area Network - a network spanning multiple LANs but smaller than a MAN,
such as on a university or local business campus.
•Storage Area Network - connects servers to data storage devices through a
technology like Fibre Channel.
•System Area Network - links high-performance computers with high-speed
connections in a cluster configuration. Also known as Cluster Area Network.

Computer network

A computer network, often simply referred to as a network, is a collection of


computers and devices interconnected by communications channels that facilitate
communications among users and allow users to share resources. Networks may
be classified according to a wide variety of characteristics. A computer network
allows sharing of resources and information among interconnected devices.

History

Early networks of communicating computers included the military radar system


Semi-Automatic Ground Environment (SAGE) and its relative the commercial airline
reservation system Semi-Automatic Business Research Environment (SABRE),
starting in the late 1950s.[1][2]

In the 1960s, the Advanced Research Projects Agency (ARPA) started funding the
design of the Advanced Research Projects Agency Network (ARPANET) for the
United States Department of Defense. Development of the network began in 1969,
based on designs developed during the 1960s.[3] The ARPANET evolved into the
modern Internet.

Purpose

Computer networks can be used for a variety of purposes:

• Facilitating communications. Using a network, people can communicate


efficiently and easily via email, instant messaging, chat rooms, telephone,
video telephone calls, and video conferencing.
• Sharing hardware. In a networked environment, each computer on a network
may access and use hardware resources on the network, such as printing a
document on a shared network printer.
• Sharing files, data, and information. In a network environment, authorized
users may access data and information stored on other computers on the
network. The capability of providing access to data and information on shared
storage devices is an important feature of many networks.
• Sharing software. Users connected to a network may run application
programs on remote computers.
• Information preservation.
• Security.
• Speed up.

Network classification

The following list presents categories used for classifying networks.

Connection method

Computer networks can be classified according to the hardware and software


technology that is used to interconnect the individual devices in the network, such
as optical fiber, Ethernet, wireless LAN, HomePNA, power line communication or
G.hn.

Ethernet as it is defined by IEEE 802 utilizes various standards and mediums that
enable communication between devices. Frequently deployed devices include hubs,
switches, bridges, or routers. Wireless LAN technology is designed to connect
devices without wiring. These devices use radio waves or infrared signals as a
transmission medium. ITU-T G.hn technology uses existing home wiring (coaxial
cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local
area network.

Wired technologies

• Twisted pair wire is the most widely used medium for telecommunication.
Twisted-pair cabling consists of copper wires that are twisted into pairs.
Ordinary telephone wires consist of two insulated copper wires twisted into
pairs. Computer networking cabling consists of 4 pairs of copper cabling that
can be utilized for both voice and data transmission. The use of two wires
twisted together helps to reduce crosstalk and electromagnetic induction.
The transmission speed ranges from 2 million bits per second to 100 million
bits per second. Twisted-pair cabling comes in two forms, Unshielded Twisted
Pair (UTP) and Shielded Twisted Pair (STP), which are rated in categories
manufactured in different increments for various scenarios.

• Coaxial cable is widely used for cable television systems, office buildings, and
other worksites for local area networks. The cables consist of copper or
aluminum wire wrapped with an insulating layer, typically of a flexible material
with a high dielectric constant, all of which is surrounded by a conductive
layer. The layers of insulation help minimize interference and distortion.
Transmission speeds range from 200 million to more than 500 million bits per
second.

• Optical fiber cable consists of one or more filaments of glass fiber wrapped in
protective layers. It transmits light which can travel over extended distances.
Fiber-optic cables are not affected by electromagnetic radiation.
Transmission speed may reach trillions of bits per second. The transmission
speed of fiber optics is hundreds of times faster than for coaxial cables and
thousands of times faster than a twisted-pair wire.[citation needed]

Wireless technologies

• Terrestrial microwave – Terrestrial microwave uses Earth-based transmitters
and receivers. The equipment looks similar to satellite dishes. Terrestrial
microwave uses the low-gigahertz range, which limits all communications to
line-of-sight. Relay stations are spaced approximately 30 miles apart.
Microwave antennas are usually placed on top of buildings, towers, hills, and
mountain peaks.

• Communications satellites – The satellites use microwave radio as their
telecommunications medium, which is not deflected by the Earth's
atmosphere. The satellites are stationed in space, typically 22,000 miles
above the equator (for geosynchronous satellites). These Earth-orbiting
systems are capable of receiving and relaying voice, data, and TV signals.

• Cellular and PCS systems – These use several radio communications
technologies. The systems are divided into different geographic areas. Each
area has a low-power transmitter or radio relay antenna device to relay calls
from one area to the next.

• Wireless LANs – Wireless local area networks use a high-frequency radio
technology similar to digital cellular and a low-frequency radio technology.
Wireless LANs use spread spectrum technology to enable communication
between multiple devices in a limited area. An example of open-standards
wireless radio-wave technology is IEEE 802.11.

• Infrared communication – can transmit signals between devices over short
distances, typically no more than 10 meters, in a peer-to-peer (line-of-sight)
arrangement with nothing obstructing the path of transmission.

Scale

Networks are often classified as local area network (LAN), wide area network (WAN),
metropolitan area network (MAN), personal area network (PAN), virtual private
network (VPN), campus area network (CAN), storage area network (SAN), and
others, depending on their scale, scope and purpose; for example, controller area
network (CAN) usage, trust level, and access rights often differ between these types
of networks. LANs tend to be designed for internal use by an organization's internal
systems and employees in individual physical locations, such as a building, while
WANs may connect physically separate parts of an organization and may include
connections to third parties.

Functional relationship (network architecture)

Computer networks may be classified according to the functional relationships


which exist among the elements of the network, e.g., active networking, client–
server and peer-to-peer (workgroup) architecture.

Network topology

Main article: Network topology

Computer networks may be classified according to the network topology upon
which the network is based, such as bus network, star network, ring network or
mesh network. Network topology is the way in which the devices in the network are
arranged in their logical relations to one another, independent of physical
arrangement. Even if networked computers are physically placed in a linear
arrangement, if they are connected to a hub the network has a star topology rather
than a bus topology. In this regard the visual and operational characteristics of a
network are distinct. Networks may also be classified based on the method used to
convey the data; these include digital and analog networks.

Types of networks based on physical scope

Common types of computer networks may be identified by their scale.

Local area network

A local area network (LAN) is a network that connects computers and devices in a
limited geographical area such as home, school, computer laboratory, office
building, or closely positioned group of buildings. Each computer or device on the
network is a node. Current wired LANs are most likely to be based on Ethernet
technology, although new standards like ITU-T G.hn also provide a way to create a
wired LAN using existing home wires (coaxial cables, phone lines and power lines).[4]
A typical library network is laid out in a branching tree topology with controlled
access to resources. All interconnected devices must understand the network layer
(layer 3), because they handle multiple subnets. The switches inside the library,
which have only 10/100 Mbit/s Ethernet connections to the user devices and a
Gigabit Ethernet connection to the central router, could be called "layer 3 switches"
because they only have Ethernet interfaces and must understand IP. It would be
more correct to call them access routers, where the router at the top is a
distribution router that connects to the Internet and to the academic networks'
customer access routers.

The defining characteristics of LANs, in contrast to WANs (Wide Area Networks),
include their higher data transfer rates, smaller geographic range, and lack of a need
for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN
technologies operate at speeds up to 10 Gbit/s. IEEE has projects investigating the
standardization of 40 and 100 Gbit/s.[5]

Personal area network

A personal area network (PAN) is a computer network used for communication
among computers and different information technological devices close to one
person. Some examples of devices that are used in a PAN are personal computers,
printers, fax machines, telephones, PDAs, scanners, and even video game consoles.
A PAN may include wired and wireless devices. The reach of a PAN typically extends
to 10 meters.[6] A wired PAN is usually constructed with USB and FireWire
connections, while technologies such as Bluetooth and infrared communication
typically form a wireless PAN.

Home area network

A home area network (HAN) is a residential LAN which is used for communication
between digital devices typically deployed in the home, usually a small number of
personal computers and accessories, such as printers and mobile computing
devices. An important function is the sharing of Internet access, often a broadband
service through a CATV or Digital Subscriber Line (DSL) provider. It can also be
referred to as an office area network (OAN).

Wide area network

A wide area network (WAN) is a computer network that covers a large geographic
area such as a city, country, or spans even intercontinental distances, using a
communications channel that combines many types of media such as telephone
lines, cables, and air waves. A WAN often uses transmission facilities provided by
common carriers, such as telephone companies. WAN technologies generally
function at the lower three layers of the OSI reference model: the physical layer, the
data link layer, and the network layer.

Campus network
A campus network is a computer network made up of an interconnection of local
area networks (LAN's) within a limited geographical area. The networking
equipments (switches, routers) and transmission media (optical fiber, copper plant,
Cat5 cabling etc.) are almost entirely owned (by the campus tenant / owner: an
enterprise, university, government etc.).

In the case of a university campus-based campus network, the network is likely to


link a variety of campus buildings including; academic departments, the university
library and student residence halls.

Metropolitan area network

A Metropolitan area network is a large computer network that usually spans a city
or a large campus.


Enterprise private network


An enterprise private network is a network built by an enterprise to interconnect
various company sites (e.g., production sites, head offices, remote offices, shops) in
order to share computer resources.

Virtual private network

A virtual private network (VPN) is a computer network in which some of the links
between nodes are carried by open connections or virtual circuits in some larger
network (e.g., the Internet) instead of by physical wires. The data link layer
protocols of the virtual network are said to be tunneled through the larger network
when this is the case. One common application is secure communications through
the public Internet, but a VPN need not have explicit security features, such as
authentication or content encryption. VPNs, for example, can be used to separate
the traffic of different user communities over an underlying network with strong
security features.

A VPN may have best-effort performance, or may have a defined service level
agreement (SLA) between the VPN customer and the VPN service provider.
Generally, a VPN has a topology more complex than point-to-point.

Internetwork

An internetwork is the connection of two or more private computer networks via a


common routing technology (OSI Layer 3) using routers. The Internet is an
aggregation of many internetworks, hence its name was shortened to Internet.

Backbone network

A backbone network (BBN), or network backbone, is part of a computer network
infrastructure that interconnects various pieces of a network, providing a path for
the exchange of information between different LANs or subnetworks.[1][2] A
backbone can tie together diverse networks in the same building, in different
buildings in a campus environment, or over wide areas. Normally, the backbone's
capacity is greater than that of the networks connected to it.

A large corporation that has many locations may have a backbone network that ties
all of the locations together, for example if a server cluster needs to be accessed by
different departments of the company located at different geographical sites. The
pieces of the network connections (for example: Ethernet, wireless) that bring these
departments together are often referred to as the network backbone. Network
congestion is often taken into consideration while designing backbones.

Backbone networks should not be confused with the Internet backbone.

Global area network

A global area network (GAN) is a network used for supporting mobile


communications across an arbitrary number of wireless LANs, satellite coverage
areas, etc. The key challenge in mobile communications is handing off the user
communications from one local coverage area to the next. In IEEE Project 802, this
involves a succession of terrestrial wireless LANs.[7]

Internet

The Internet is a global system of interconnected governmental, academic,


corporate, public, and private computer networks. It is based on the networking
technologies of the Internet Protocol Suite. It is the successor of the Advanced
Research Projects Agency Network (ARPANET) developed by DARPA of the United
States Department of Defense. The Internet is also the communications backbone
underlying the World Wide Web (WWW).

Participants in the Internet use a diverse array of methods of several hundred


documented, and often standardized, protocols compatible with the Internet
Protocol Suite and an addressing system (IP addresses) administered by the
Internet Assigned Numbers Authority and address registries. Service providers and
large enterprises exchange information about the reachability of their address
spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide
mesh of transmission paths.

Intranets and extranets

Intranets and extranets are parts or extensions of a computer network, usually a


local area network.

An intranet is a set of networks, using the Internet Protocol and IP-based tools such
as web browsers and file transfer applications, that is under the control of a single
administrative entity. That administrative entity closes the intranet to all but
specific, authorized users. Most commonly, an intranet is the internal network of an
organization. A large intranet will typically have at least one web server to provide
users with organizational information.

An extranet is a network that is limited in scope to a single organization or entity
and also has limited connections to the networks of one or more other, usually but
not necessarily trusted, organizations or entities. For example, a company's
customers may be given access to some part of its intranet, while at the same time
the customers may not be considered trusted from a security standpoint.
Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other
type of network, although an extranet cannot consist of a single LAN; it must have
at least one connection with an external network.

Overlay network

An overlay network is a virtual computer network that is built on top of another


network. Nodes in the overlay are connected by virtual or logical links, each of
which corresponds to a path, perhaps through many physical links, in the underlying
network.

For example, many peer-to-peer networks are overlay networks because they are
organized as nodes of a virtual system of links that runs on top of the Internet. The
Internet was initially built as an overlay on the telephone network.[8]

Overlay networks have been around since the invention of networking, when
computer systems were connected over telephone lines using modems, before any
data network existed.

Nowadays the Internet is the basis for many overlaid networks that can be
constructed to permit routing of messages to destinations specified by an IP
address. For example, distributed hash tables can be used to route messages to a
node having a specific logical address, whose IP address is known in advance.

Overlay networks have also been proposed as a way to improve Internet routing,
such as through quality of service guarantees to achieve higher-quality streaming
media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen
wide acceptance largely because they require modification of all routers in the
network.[citation needed] On the other hand, an overlay network can be incrementally
deployed on end-hosts running the overlay protocol software, without cooperation
from Internet service providers. The overlay has no control over how packets are
routed in the underlying network between two overlay nodes, but it can control, for
example, the sequence of overlay nodes a message traverses before reaching its
destination.

For example, Akamai Technologies manages an overlay network that provides


reliable, efficient content delivery (a kind of multicast). Academic research includes
End System Multicast and Overcast for multicast; RON (Resilient Overlay Network)
for resilient routing; and OverQoS for quality of service guarantees, among others.

Basic hardware components

All networks are made up of basic hardware building blocks to interconnect network
nodes, such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and
Routers. In addition, some method of connecting these building blocks is required,
usually in the form of galvanic cable (most commonly Category 5 cable). Less
common are microwave links (as in IEEE 802.11) or optical cable ("optical fiber").

Network interface cards

A network card, network adapter, or NIC (network interface card) is a piece of


computer hardware designed to allow computers to communicate over a computer
network. It provides physical access to a networking medium and often provides a
low-level addressing system through the use of MAC addresses.

Each network interface card has a unique identifier (its MAC address), which is
written on a chip mounted on the card.

Repeaters

A repeater is an electronic device that receives a signal, cleans it of unnecessary


noise, regenerates it, and retransmits it at a higher power level, or to the other side
of an obstruction, so that the signal can cover longer distances without degradation.
In most twisted pair Ethernet configurations, repeaters are required for cable that
runs longer than 100 meters. Repeaters work on the Physical Layer of the OSI
model.

Hubs

A network hub contains multiple ports. When a packet arrives at one port, it is
copied unmodified to all ports of the hub for transmission. The destination address
in the frame is not changed to a broadcast address.[9] It works on the Physical Layer
of the OSI model.

Bridges

A network bridge connects multiple network segments at the data link layer (layer
2) of the OSI model. Bridges broadcast to all ports except the port on which the
broadcast was received. However, bridges do not promiscuously copy traffic to all
ports, as hubs do, but learn which MAC addresses are reachable through specific
ports. Once the bridge associates a port and an address, it will send traffic for that
address to that port only.

Bridges learn the association of ports and addresses by examining the source
addresses of the frames they see on various ports. When a frame arrives through a
port, its source address is stored and the bridge assumes that that MAC address is
associated with that port. The first time a previously unknown destination address is
seen, the bridge forwards the frame to all ports other than the one on which the
frame arrived.
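
The learning-and-forwarding behaviour described above can be illustrated with a
short sketch in Python (the port numbers and MAC addresses are purely
hypothetical, and frames are reduced to source/destination pairs; real bridges do
this in hardware):

# Minimal sketch of transparent-bridge MAC learning, as described above.
class LearningBridge:
    def __init__(self, ports):
        self.ports = ports        # e.g. [1, 2, 3]
        self.mac_table = {}       # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: associate the frame's source address with the arrival port.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out one port; an unknown one
        # is flooded to every port except the one the frame arrived on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

# Example: the first frame to an unknown destination is flooded; once the
# reply has been seen, traffic for that address uses a single learned port.
bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, "aa:aa", "bb:bb"))   # -> [2, 3] (flooded)
print(bridge.receive(2, "bb:bb", "aa:aa"))   # -> [1] (learned)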

Bridges come in three basic types:

• Local bridges: Directly connect local area networks (LANs)


• Remote bridges: Can be used to create a wide area network (WAN) link
between LANs. Remote bridges, where the connecting link is slower than the
end networks, largely have been replaced with routers.
• Wireless bridges: Can be used to join LANs or connect remote stations to
LANs.

Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (chunks
of data communication) between ports (connected cables) based on the MAC
addresses in the packets.[10] A switch is distinct from a hub in that it only forwards
the frames to the ports involved in the communication rather than all ports
connected. A switch breaks the collision domain but represents itself as a broadcast
domain. Switches make forwarding decisions of frames on the basis of MAC
addresses. A switch normally has numerous ports, facilitating a star topology for
devices, and cascading additional switches.[11] Some switches are capable of routing
based on Layer 3 addressing or additional logical levels; these are called multi-layer
switches. The term switch is used loosely in marketing to encompass devices
including routers and bridges, as well as devices that may distribute traffic on load
or by application content (e.g., a Web URL identifier).

Routers

A router is an internetworking device that forwards packets between networks by


processing information found in the datagram or packet (Internet protocol
information from Layer 3 of the OSI Model). In many situations, this information is
processed in conjunction with the routing table (also known as forwarding table).
Routers use routing tables to determine to which interface to forward packets (this
can include the "null", also known as the "black hole", interface, because data can
go into it but no further processing is done for that data).
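
As a rough sketch of the routing-table lookup described above, the following Python
fragment performs a longest-prefix match over a small, made-up table; the interface
names and the use of "null0" for the black-hole interface are illustrative assumptions
only:

import ipaddress

# Hypothetical routing table: network prefix -> outgoing interface.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):   "eth1",
    ipaddress.ip_network("10.1.0.0/16"):  "eth2",
    ipaddress.ip_network("192.0.2.0/24"): "null0",   # black-hole route
    ipaddress.ip_network("0.0.0.0/0"):    "eth0",    # default route
}

def lookup(destination):
    """Return the interface of the longest (most specific) matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("10.1.2.3"))   # -> eth2 (more specific than 10.0.0.0/8)
print(lookup("8.8.8.8"))    # -> eth0 (falls through to the default route)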

Firewalls

Firewalls are the most important aspect of a network with respect to security. A
firewalled system does not need every interaction or data transfer monitored by a
human, as automated processes can be set up to assist in rejecting access requests
from unsafe sources, and allowing actions from recognized ones. The vital role
firewalls play in network security grows in parallel with the constant increase in
'cyber' attacks for the purpose of stealing/corrupting data, planting viruses, etc.

Internet: "The Big Picture"


Welcome to "The Big Picture" of connecting through the Internet to reach online
resources. The purpose of this page is to answer the question: "What are the major
pieces of the Internet, and who are the major players in each segment?" If some of
these links don't make sense, it's because you are not an "alumni" of my internet
courses ;-)

This page displays the main pieces of the Internet from a User's PC... extending all
the way through to the online content. Each section mentions the most significant
parts of the Internet's architecture. I also provide links to the top "couple of
vendors" in each category, and then an external link to a more extensive lists of
vendors.

In creating this one web page to describe the "entire Internet", I split the diagram
based on the function being performed. I recognize that a company may perform
several of these functions. I've included several "leading edge" components as
well, such as LMDS for the local loop (This page is intended to be forward-looking). I
also recognize that there are many additional details that could be added to this
page, but I am trying to adhere to a 90/10 rule. If this page identifies 90% of the
mainstream pieces and players, that should be sufficient to convey "the big
picture". (The remaining 10% details would probably triple the size & complexity of
this one meta-diagram.) I welcome any comments you have to improve this page -
especially if I've omitted anything significant. Russ Haynal.

These are the Main Sections:

• User PC - Multi-Media PCs equipped to send and receive all variety of audio
and video
• User Communication Equipment - Connects the Users' PC(s) to the "Local
Loop"
• Local Loop Carrier - Connects the User location to the ISP's Point of Presence
• ISP's POP - Connections from the user are accepted and authenticated here.
• User Services - Used by the User for access (DNS, EMAIL, etc).
• ISP Backbone - Interconnects the ISP's POPs, AND interconnects the ISP to
Other ISP's and online content.
• Online Content - These are the host sites that the user interacts with.
• Origins Of Online Content - This is the original "real-world" sources for the
online information.

User PC - A Multi-Media PC equipped to send and receive all variety of audio and
video.

1. Sound Board/Microphone/Speakers for telephony, MIDI - Creative
Labs/SoundBlaster, Yahoo's List for Sound Cards.
2. Video/Graphics for 3D graphics, video playback - Matrox, Diamond Multimedia,
Yahoo's List for Video Cards.
3. Video camera - Connectix, Yahoo's List for Video Conferencing, Yahoo's List for
Manufacturers.
4. Voice recognition - Yahoo's List for Voice Recognition.
User's Communication Equipment - This is the communication equipment located at
the User's location(s) to connect the Users' PC(s) to the "Local Loop" (aka Customer
Premise Equipment - CPE).

1. Phone line - Analog Modem (v.90 = 56K) - US Robotics, Rockwell, Yahoo's List for
Modems.
2. Phone line - ISDN (128K) - Yahoo's list for ISDN.
3. Phone line - DSL (6 MB) - Yahoo's list for DSL, ADSL Forum.
4. Cable Modem (27 MB) - Cable Modem University (and their neat table of Modem
Vendors).
5. Electric Line (1 MB) - Digital PowerLine by Nortel.
6. Satellite (400 Kb) - DirecPC.
7. LAN - 3Com, Yahoo's list of Network Hardware.
8. Routers - Cisco, Ascend, Bay Networks, Yahoo's list.
9. Firewalls - TBD Vendors, Yahoo's list for firewalls.

User services - Many corporations also provide "User services" to their employees,
such as DNS, Email, Usenet, etc. Links for these services are described further down
this diagram in the User Services section.

Local Loop Carrier - Connects the User location to the ISP's Point of Presence.

1. Communication Lines - RBOCs (Ameritech, Bell Atlantic, Bell South, Cincinnati
Bell, NYNEX, Pacific Bell, Southwestern Bell, US West), GTE, LECs, MFS, TCG, Brooks.
2. Cable - List of Cable ISPs.
3. Satellite - DirecPC.
4. Power line - Digital PowerLine by Nortel.
5. Wireless - Wireless Week, Wireless Access Tech Magazine, Yahoo's List for
Wireless Networking.

Equipment Manufacturers: Nortel, Lucent, Newbridge, Siemens.
ISP POP - This is the edge of the ISP's network. Connections from the user are
accepted and authenticated here.

1. Remote ports - Ascend (Max Product), US Robotics (3Com), Livingston
(Portmaster), Cisco, Yahoo's List for Routing Technology.

User Services - These are the services that most users would use along with
Internet Access. (These may be hosted within a large corporate LAN. Web hosting is
discussed under the Online Content section.)

1. Domain Name Server - BIND, DNS Resources Directory.
2. Email Host - Sendmail, Microsoft Exchange.
3. Usenet Newsgroups (NNTP) - INN.
4. Special services such as Quake, telnet, FTP.
5. User Web Hosting - See the Online Content section for details.
6. These servers require fast interfaces and large/fast storage.
ISP Backbone - The ISP backbone interconnects the ISP's POPs, AND interconnects
the ISP to other ISPs and online content.

1. Backbone Providers - Russ Haynal's ISP Page.
2. Large Circuits - fiber circuit carriers: AT&T, SPRINT, MCI, Worldcom (MFS, Brooks),
RBOCs, C&W, Qwest.
3. Routers - Cisco, Ascend, Bay Networks, Yahoo's list.
4. ATM Switches - Fore, Newbridge, Lucent, Ascend, Yahoo's List of ATM
Manufacturers.
5. Sonet/SDH Switches - Nortel, Fujitsu, Alcatel, Tellabs, Lucent and Positron Fiber
Systems.
6. Gigaswitch - Gigaswitch from DEC, Yahoo's List.
7. Network Access Points - Russ Haynal's ISP Page.

The Broadband Guide (links to 4,000 vendors).


Online Content - These are the host sites that the user interacts with.

1. Web Server platforms - Netsite, Apache, Microsoft, Yahoo's List of web servers.
2. Hosting Farms - Many online resources are hosted at well-connected facilities.
3. These servers require fast interfaces and large/fast storage.

Origins of online content - These are the original "real-world" sources for the online
information.

1. Existing electronic information is being connected from legacy systems.
2. Traditional print resources are being scanned and converted into electronic
format.
3. Many types of video and audio programming are being broadcast via the
Internet. For example, look at Radio_locator.
4. Internet telephony is growing on the Internet. Start with VON and then explore
this list from Yahoo.
5. Look at this list of interesting devices connected to the Internet.

Other Resources:

Internet Architecture
Fortunately, nobody owns the Internet, there is no centralized control, and nobody
can turn it off. Its evolution depends on rough consensus about technical proposals,
and on running code. Engineering feed-back from real implementations is more
important than any architectural principles.

RFC 1958; B. Carpenter; Architectural Principles of the Internet; June, 1996.

What is the Internet architecture? It is by definition a meta-network, a constantly


changing collection of thousands of individual networks intercommunicating with a
common protocol.

The Internet's architecture is described in its name, a short form of the compound
word "inter-networking". This architecture is based on the very specification of the
standard TCP/IP protocol, designed to connect any two networks which may be very
different in internal hardware, software, and technical design. Once two networks
are interconnected, communication with TCP/IP is enabled end-to-end, so that any
node on the Internet has the near magical ability to communicate with any other no
matter where they are. This openness of design has enabled the Internet
architecture to grow to a global scale.
In practice, the Internet technical architecture looks a bit like a multi-dimensional
river system, with small tributaries feeding medium-sized streams feeding large
rivers. For example, an individual's access to the Internet is often from home over a
modem to a local Internet service provider who connects to a regional network
connected to a national network. At the office, a desktop computer might be
connected to a local area network with a company connection to a corporate
Intranet connected to several national Internet service providers. In general, small
local Internet service providers connect to medium-sized regional networks which
connect to large national networks, which then connect to very large bandwidth
networks on the Internet backbone. Most Internet service providers have several
redundant network cross-connections to other providers in order to ensure
continuous availability.

The companies running the Internet backbone operate very high bandwidth
networks relied on by governments, corporations, large organizations, and other
Internet service providers. Their technical infrastructure often includes global
connections through underwater cables and satellite links to enable communication
between countries and continents. As always, a larger scale introduces new
phenomena: the number of packets flowing through the switches on the backbone
is so large that it exhibits the kind of complex non-linear patterns usually found in
natural, analog systems like the flow of water or development of the rings of Saturn
(RFC 3439, S2.2).

Each communication packet goes up the hierarchy of Internet networks as far as


necessary to get to its destination network where local routing takes over to deliver
it to the addressee. In the same way, each level in the hierarchy pays the next level
for the bandwidth they use, and then the large backbone companies settle up with
each other. Bandwidth is priced by large Internet service providers by several
methods, such as at a fixed rate for constant availability of a certain number of
megabits per second, or by a variety of use methods that amount to a cost per
gigabyte. Due to economies of scale and efficiencies in management, bandwidth
cost drops dramatically at the higher levels of the architecture.

Resources: The network topology page provides information and resources on the
real-time construction of the Internet network, including graphs and statistics.

IP address
An identifier for a computer or device on a TCP/IP network. Networks using the
TCP/IP protocol route messages based on the IP address of the destination. The
format of an IP address is a 32-bit numeric address written as four numbers
separated by periods. Each number can be zero to 255. For example, 1.160.10.240
could be an IP address.

Within an isolated network, you can assign IP addresses at random as long as each
one is unique. However, connecting a private network to the Internet requires using
registered IP addresses (called Internet addresses) to avoid duplicates.
The four numbers in an IP address are used in different ways to identify a particular
network and a host on that network. Four regional Internet registries -- ARIN, RIPE
NCC, LACNIC and APNIC -- assign Internet addresses from the following three
classes.

•Class A - supports 16 million hosts on each of 126 networks


•Class B - supports 65,000 hosts on each of 16,000 networks
•Class C - supports 254 hosts on each of 2 million networks
The number of unassigned Internet addresses is running out, so a new classless
scheme called CIDR is gradually replacing the system based on classes A, B, and C
and is tied to adoption of IPv6.
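
As a small illustration of the dotted-decimal notation and the classful scheme just
described, the following Python sketch converts an address to its underlying 32-bit
value and reports its historical class (the example address is the one used above;
this is a simplified teaching aid, not a production validator):

def ip_to_int(address):
    """Convert a dotted-decimal IPv4 address to its 32-bit integer value."""
    parts = [int(p) for p in address.split(".")]
    if len(parts) != 4 or any(p < 0 or p > 255 for p in parts):
        raise ValueError("not a valid IPv4 address: " + address)
    return (parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]

def classful_class(address):
    """Return the historical address class based on the first octet."""
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    return "other (class D/E or reserved)"

print(ip_to_int("1.160.10.240"))       # -> 27265776
print(classful_class("1.160.10.240"))  # -> A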

ISP
Short for Internet Service Provider, it refers to a company that provides Internet
services, including personal and business access to the Internet. For a monthly fee,
the service provider usually provides a software package, username, password and
access phone number. Equipped with a modem, you can then log on to the Internet
and browse the World Wide Web and USENET, and send and receive e-mail. For
broadband access you typically receive the broadband modem hardware or pay a
monthly fee for this equipment that is added to your ISP account billing.

In addition to serving individuals, ISPs also serve large companies, providing a


direct connection from the company's networks to the Internet. ISPs themselves are
connected to one another through Network Access Points (NAPs). ISPs may also be
called IAPs (Internet Access Providers).

URL
Abbreviation of Uniform Resource Locator, the global address of documents and
other resources on the World Wide Web.

The first part of the address is called a protocol identifier and it indicates what
protocol to use, and the second part is called a resource name and it specifies the IP
address or the domain name where the resource is located. The protocol identifier
and the resource name are separated by a colon and two forward slashes.

For example, the two URLs below point to two different files at the domain
pcwebopedia.com. The first specifies an executable file that should be fetched using
the FTP protocol; the second specifies a Web page that should be fetched using the
HTTP protocol:

•ftp://www.pcwebopedia.com/stuff.exe
•http://www.pcwebopedia.com/index.html
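
To make the two parts concrete, here is a short sketch using Python's standard
urllib.parse module on the two example URLs above (the pcwebopedia.com
addresses are simply the examples from the text):

from urllib.parse import urlparse

# Split each URL into its protocol identifier and its resource name.
for url in ("ftp://www.pcwebopedia.com/stuff.exe",
            "http://www.pcwebopedia.com/index.html"):
    parsed = urlparse(url)
    print(parsed.scheme)                 # protocol identifier: "ftp" / "http"
    print(parsed.netloc + parsed.path)   # resource name: host plus path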

Domain Name
Domain names are used to identify one or more IP addresses. For example, the
domain name microsoft.com represents about a dozen IP addresses. Domain names
are used in URLs to identify particular Web pages. For example, in the URL
http://www.pcwebopedia.com/index.html, the domain name is pcwebopedia.com.
Every domain name has a suffix that indicates which top level domain (TLD) it
belongs to. There are only a limited number of such domains. For example:

•gov - Government agencies


•edu - Educational institutions
•org - Organizations (nonprofit)
•mil - Military
•com - commercial business
•net - Network organizations
•ca - Canada
•th - Thailand
Because the Internet is based on IP addresses, not domain names, every Web
server requires a Domain Name System (DNS) server to translate domain names
into IP addresses.
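
That translation step can be sketched with Python's standard socket module; the
domain below is just the example from the text, and the address returned will
depend on the resolver you use:

import socket

# Ask the system's DNS resolver for an IP address of the domain,
# as a browser would before connecting to the web server.
domain = "pcwebopedia.com"   # example domain from the text
try:
    print(domain, "->", socket.gethostbyname(domain))
except socket.gaierror as error:
    print("could not resolve", domain, ":", error)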

Browser
short for Web browser, a software application used to locate and display Web
pages. The two most popular browsers are Microsoft Internet Explorer and Firefox.
Both of these are graphical browsers, which means that they can display graphics
as well as text. In addition, most modern browsers can present multimedia
information, including sound and video, though they require plug-ins for some
formats.

Protocol
An agreed-upon format for transmitting data between two devices. The protocol
determines the following:

•the type of error checking to be used
•the data compression method, if any
•how the sending device will indicate that it has finished sending a message
•how the receiving device will indicate that it has received a message
There are a variety of standard protocols from which programmers can choose.
Each has particular advantages and disadvantages; for example, some are simpler
than others, some are more reliable, and some are faster.

From a user's point of view, the only interesting aspect about protocols is that your
computer or device must support the right ones if you want to communicate with
other computers. The protocol can be implemented either in hardware or in
software.
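
As a hedged illustration of what such an agreed-upon format can look like, here is a
deliberately tiny, made-up protocol (not any real standard): each message is a line
of UTF-8 text terminated by a newline, so both devices agree on where one message
ends and the next begins:

# Sender side: frame a message so the receiver knows where it ends.
def encode_message(text):
    if "\n" in text:
        raise ValueError("message may not contain a newline")
    return (text + "\n").encode("utf-8")

# Receiver side: split a byte buffer into complete messages; anything
# after the last newline is an incomplete message still arriving.
def decode_messages(buffer):
    messages, _, rest = buffer.rpartition(b"\n")
    complete = [m.decode("utf-8") for m in messages.split(b"\n")] if messages else []
    return complete, rest

# Example exchange between two devices that both follow this format.
wire = encode_message("HELLO") + encode_message("BYE")
print(decode_messages(wire))   # -> (['HELLO', 'BYE'], b'')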

Search engine
A program that searches documents for specified keywords and returns a list of the
documents where the keywords were found. Although search engine is really a
general class of programs, the term is often used to specifically describe systems
like Google, Alta Vista and Excite that enable users to search for documents on the
World Wide Web and USENET newsgroups.

Typically, a search engine works by sending out a spider to fetch as many


documents as possible. Another program, called an indexer, then reads these
documents and creates an index based on the words contained in each document.
Each search engine uses a proprietary algorithm to create its indices such that,
ideally, only meaningful results are returned for each query.
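
The indexer step described above can be sketched very roughly as follows; the
"documents" are made-up strings standing in for pages a spider has fetched, and a
real search engine's index and ranking are far more sophisticated:

# Build an inverted index: each word maps to the documents containing it.
documents = {
    "page1": "computer networks connect computers and devices",
    "page2": "a search engine indexes documents on the web",
}

index = {}
for doc_id, text in documents.items():
    for word in set(text.lower().split()):
        index.setdefault(word, set()).add(doc_id)

def search(keyword):
    """Return the documents in which the keyword was found."""
    return sorted(index.get(keyword.lower(), set()))

print(search("computer"))   # -> ['page1']
print(search("web"))        # -> ['page2']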

e-mail
Short for electronic mail, the transmission of messages over communications
networks. The messages can be notes entered from the keyboard or electronic files
stored on disk. Most mainframes, minicomputers, and computer networks have an
e-mail system. Some electronic-mail systems are confined to a single computer
system or network, but others have gateways to other computer systems, enabling
users to send electronic mail anywhere in the world. Companies that are fully
computerized make extensive use of e-mail because it is fast, flexible, and reliable.

Most e-mail systems include a rudimentary text editor for composing messages, but
many allow you to edit your messages using any editor you want. You then send the
message to the recipient by specifying the recipient's address. You can also send
the same message to several users at once. This is called broadcasting.
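
As a sketch of what specifying the recipient's address looks like in code, here is a
minimal example using Python's standard smtplib and email modules; the mail
server name and the addresses are placeholders, and a real server would normally
also require authentication:

import smtplib
from email.message import EmailMessage

# Compose a message; the addresses and server below are hypothetical.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com, carol@example.com"   # several recipients at once
msg["Subject"] = "Meeting notes"
msg.set_content("Here are the notes from today's meeting.")

# Hand the message to a mail server (SMTP gateway) for delivery.
with smtplib.SMTP("mail.example.com") as server:
    server.send_message(msg)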

Sent messages are stored in electronic mailboxes until the recipient fetches them.
To see if you have any mail, you may have to check your electronic mailbox
periodically, although many systems alert you when mail is received. After reading
your mail, you can store it in a text file, forward it to other users, or delete it. Copies
of memos can be printed out on a printer if you want a paper copy.

All online services and Internet Service Providers (ISPs) offer e-mail, and most also
support gateways so that you can exchange mail with users of other systems.
Usually, it takes only a few seconds or minutes for mail to arrive at its destination.
This is a particularly effective way to communicate with a group because you can
broadcast a message or document to everyone in the group at once.

Although different e-mail systems use different formats, there are some emerging
standards that are making it possible for users on all systems to exchange
messages. In the PC world, an important e-mail standard is MAPI. The CCITT
standards organization has developed the X.400 standard, which attempts to
provide a universal way of addressing messages. To date, though, the de facto
addressing standard is the one used by the Internet system because almost all e-
mail systems have an Internet gateway.

Another common spelling for e-mail is email.

FTP
Short for File Transfer Protocol, the protocol for exchanging files over the Internet.
FTP works in the same way as HTTP for transferring Web pages from a server to a
user's browser and SMTP for transferring electronic mail across the Internet in that,
like these technologies, FTP uses the Internet's TCP/IP protocols to enable data
transfer.

FTP is most commonly used to download a file from a server using the Internet or to
upload a file to a server (e.g., uploading a Web page file to a server).
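
A minimal sketch of the common download case, using Python's standard ftplib; the
host, login and file name are placeholders for whatever server and file you actually
need:

from ftplib import FTP

# Connect to an FTP server and download one file in binary mode.
with FTP("ftp.example.com") as ftp:
    ftp.login("anonymous", "guest@example.com")
    with open("stuff.exe", "wb") as local_file:
        ftp.retrbinary("RETR stuff.exe", local_file.write)
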
Telnet
(tel´net) (n.) A terminal emulation program for TCP/IP networks such as the Internet.
The Telnet program runs on your computer and connects your PC to a server on the
network. You can then enter commands through the Telnet program and they will
be executed as if you were entering them directly on the server console. This
enables you to control the server and communicate with other servers on the
network. To start a Telnet session, you must log in to a server by entering a valid
username and password. Telnet is a common way to remotely control Web servers.

gopher
A system that pre-dates the World Wide Web for organizing and displaying files on
Internet servers. A Gopher server presents its contents as a hierarchically
structured list of files. With the ascendance of the Web, many gopher databases
were converted to Web sites which can be more easily accessed via Web search
engines.

Gopher was developed at the University of Minnesota and named after the school's
mascot. Two systems, Veronica and Jughead, let you search global indices of
resources stored in Gopher systems.

WWW
The World Wide Web, abbreviated as WWW and commonly known as the Web, is a
system of interlinked hypertext documents accessed via the Internet. With a web
browser, one can view web pages that may contain text, images, videos, and other
multimedia and navigate between them via hyperlinks. Using concepts from earlier
hypertext systems, English engineer and computer scientist Sir Tim Berners-Lee,
now the Director of the World Wide Web Consortium, wrote a proposal in March
1989 for what would eventually become the World Wide Web.[1] At CERN in
Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau
proposed in 1990 to use "HyperText ... to link and access information of various
kinds as a web of nodes in which the user can browse at will",[2] and publicly
introduced the project in December.[3]

"The World-Wide Web (W3) was developed to be a pool of human knowledge, and
human culture, which would allow collaborators in remote sites to share their ideas
and all aspects of a common project."[4]


History
Main article: History of the World Wide Web


In the May 1970 issue of Popular Science magazine Arthur C. Clarke was reported to
have predicted that satellites would one day "bring the accumulated knowledge of
the world to your fingertips" using a console that would combine the functionality of
the Xerox, telephone, television and a small computer, allowing data transfer and
video conferencing around the globe.[5]

In March 1989, Tim Berners-Lee wrote a proposal that referenced ENQUIRE, a


database and software project he had built in 1980, and described a more elaborate
information management system.[6]

With help from Robert Cailliau, he published a more formal proposal (on November
12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also
"W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a
client–server architecture.[2] This proposal estimated that a read-only web would be
developed within three months and that it would take six months to achieve "the
creation of new links and new material by readers, [so that] authorship becomes
universal" as well as "the automatic notification of a reader when new material of
interest to him/her has become available." See Web 2.0 and RSS/Atom, which have
taken a little longer to mature.

The proposal was modeled after the Dynatext SGML reader by Electronic Book
Technology, a spin-off from the Institute for Research in Information and Scholarship
at Brown University. The Dynatext system, licensed by CERN, was technically
advanced and was a key player in the extension of SGML ISO 8879:1986 to
Hypermedia within HyTime, but it was considered too expensive and had an
inappropriate licensing policy for use in the general high energy physics community,
namely a fee for each document and each document alteration.

A NeXT Computer was used by Berners-Lee as the world's first web server and also
to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990,
Berners-Lee had built all the tools necessary for a working Web:[7] the first web
browser (which was a web editor as well); the first web server; and the first web
pages,[8] which described the project itself. On August 6, 1991, he posted a short
summary of the World Wide Web project on the alt.hypertext newsgroup.[9] This
date also marked the debut of the Web as a publicly available service on the
Internet. The first photo on the web was uploaded by Berners-Lee in 1992, an image
of the CERN house band Les Horribles Cernettes.
Web as a "Side Effect" of the 40 years of Particle Physics Experiments. It happened
many times during history of science that the most impressive results of large scale
scientific efforts appeared far away from the main directions of those efforts... After
the World War 2 the nuclear centers of almost all developed countries became the
places with the highest concentration of talented scientists. For about four decades
many of them were invited to the international CERN's Laboratories. So specific kind
of the CERN's intellectual "entire culture" (as you called it) was constantly growing
from one generation of the scientists and engineers to another. When the
concentration of the human talents per square foot of the CERN's Labs reached the
critical mass, it caused an intellectual explosion The Web -- crucial point of human's
history -- was born... Nothing could be compared to it... We cant imagine yet the
real scale of the recent shake, because there has not been so fast growing multi-
dimension social-economic processes in human history... [10]

The first server outside Europe was set up at SLAC to host the SPIRES-HEP
database. Accounts differ substantially as to the date of this event. The World Wide
Web Consortium says December 1992,[11] whereas SLAC itself claims 1991.[12]
[13] This is supported by a W3C document entitled A Little History of the World
Wide Web.[14]

The crucial underlying concept of hypertext originated with older projects from the
1960s, such as the Hypertext Editing System (HES) at Brown University, Ted
Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both
Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based
"memex", which was described in the 1945 essay "As We May Think".[citation
needed]

Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book


Weaving The Web, he explains that he had repeatedly suggested that a marriage
between the two technologies was possible to members of both technical
communities, but when no one took up his invitation, he finally tackled the project
himself. In the process, he developed a system of globally unique identifiers for
resources on the Web and elsewhere: the Universal Document Identifier (UDI), later
known as Uniform Resource Locator (URL) and Uniform Resource Identifier (URI);
the publishing language HyperText Markup Language (HTML); and the Hypertext
Transfer Protocol (HTTP).[15]

The World Wide Web had a number of differences from other hypertext systems
that were then available. The Web required only unidirectional links rather than
bidirectional ones. This made it possible for someone to link to another resource
without action by the owner of that resource. It also significantly reduced the
difficulty of implementing web servers and browsers (in comparison to earlier
systems), but in turn presented the chronic problem of link rot. Unlike predecessors
such as HyperCard, the World Wide Web was non-proprietary, making it possible to
develop servers and clients independently and to add extensions without licensing
restrictions. On April 30, 1993, CERN announced[16] that the World Wide Web
would be free to anyone, with no fees due. Coming two months after the
announcement that the Gopher protocol was no longer free to use, this produced a
rapid shift away from Gopher and towards the Web. An early popular web browser
was ViolaWWW, which was based upon HyperCard.
Scholars generally agree that a turning point for the World Wide Web began with
the introduction[17] of the Mosaic web browser[18] in 1993, a graphical browser
developed by a team at the National Center for Supercomputing Applications at the
University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen.
Funding for Mosaic came from the U.S. High-Performance Computing and
Communications Initiative, a funding program initiated by the High Performance
Computing and Communication Act of 1991, one of several computing
developments initiated by U.S. Senator Al Gore.[19] Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web was less popular than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface
allowed the Web to become, by far, the most popular Internet protocol.

The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he
left the European Organization for Nuclear Research (CERN) in October 1994. It was
founded at the Massachusetts Institute of Technology Laboratory for Computer
Science (MIT/LCS) with support from the Defense Advanced Research Projects
Agency (DARPA), which had pioneered the Internet; a year later, a second site was
founded at INRIA (a French national computer research lab) with support from the
European Commission DG InfSo; and in 1996, a third continental site was created in
Japan at Keio University. By the end of 1994, while the total number of websites was
still minute compared to present standards, quite a number of notable websites
were already active, many of which are the precursors or inspiration for today's
most popular services.

Connected by the existing Internet, other websites were created around the world,
adding international standards for domain names and HTML. Since then, Berners-
Lee has played an active role in guiding the development of web standards (such as
the markup languages in which web pages are composed), and in recent years has
advocated his vision of a Semantic Web. The World Wide Web enabled the spread of
information over the Internet through an easy-to-use and flexible format. It thus
played an important role in popularizing use of the Internet.[20] Although the two
terms are sometimes conflated in popular use, World Wide Web is not synonymous
with Internet.[21] The Web is an application built on top of the Internet.

Function
The terms Internet and World Wide Web are often used in everyday
speech without much distinction. However, the Internet and the World Wide Web
are not one and the same. The Internet is a global system of interconnected
computer networks. In contrast, the Web is one of the services that runs on the
Internet. It is a collection of interconnected documents and other resources, linked
by hyperlinks and URLs. In short, the Web is an application running on the Internet.
[22] Viewing a web page on the World Wide Web normally begins either by typing
the URL of the page into a web browser, or by following a hyperlink to that page or
resource. The web browser then initiates a series of communication messages,
behind the scenes, in order to fetch and display it.

First, the server-name portion of the URL is resolved into an IP address using the
global, distributed Internet database known as the Domain Name System (DNS).
This IP address is necessary to contact the Web server. The browser then requests
the resource by sending an HTTP request to the Web server at that particular
address. In the case of a typical web page, the HTML text of the page is requested
first and parsed immediately by the web browser, which then makes additional
requests for images and any other files that complete the page image. Statistics
measuring a website's popularity are usually based either on the number of page
views or associated server 'hits' (file requests) that take place.
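To make the sequence above concrete, the following minimal sketch (using only Python's standard library) resolves a host name through DNS and then issues an HTTP GET for a page. The host and URL are placeholders; a real browser would go on to request the images, stylesheets and scripts referenced by the returned HTML.

# Minimal sketch of the fetch sequence described above, using only the
# Python standard library. The host name and URL are placeholders.
import socket
import urllib.request

url = "http://www.example.com/"
host = "www.example.com"

# Step 1: resolve the server name to an IP address via DNS.
ip_address = socket.gethostbyname(host)
print("DNS resolved", host, "to", ip_address)

# Step 2: send an HTTP GET request for the resource and read the HTML.
with urllib.request.urlopen(url, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")
    print("Status:", response.status)
    print("First 200 characters of the page:", html[:200])

# In a real browser, the HTML would now be parsed and further requests
# issued for the images, stylesheets and scripts referenced by the page.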

While receiving these files from the web server, browsers may progressively render
the page onto the screen as specified by its HTML, Cascading Style Sheets (CSS), or
other page composition languages. Any images and other resources are
incorporated to produce the on-screen web page that the user sees. Most web
pages contain hyperlinks to other related pages and perhaps to downloadable files,
source documents, definitions and other web resources. Such a collection of useful,
related resources, interconnected via hypertext links is dubbed a web of
information. Publication on the Internet created what Tim Berners-Lee first called
the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in
November 1990.[2]

Linking
[Figure: graphic representation of a minute fraction of the WWW, demonstrating hyperlinks]
Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a
phenomenon referred to in some circles as link rot and the hyperlinks affected by it
are often called dead links. The ephemeral nature of the Web has prompted many
efforts to archive web sites. The Internet Archive, active since 1996, is one of the
best-known efforts.

Dynamic updates of web pages
Main article: Ajax (programming)


JavaScript is a scripting language that was initially developed in 1995 by Brendan
Eich, then of Netscape, for use within web pages.[23] The standardized version is
ECMAScript.[23] To overcome some of the limitations of the page-by-page model
described above, some web applications also use Ajax (asynchronous JavaScript and XML). JavaScript delivered with the page can make additional HTTP requests to the server, either in response to user actions such as mouse clicks or after a lapse of time. The server's responses are used to modify the current page rather than creating a new page for each response. Thus the server only needs to
provide limited, incremental information. Since multiple Ajax requests can be
handled at the same time, users can interact with a page even while data is being
retrieved. Some web applications regularly poll the server to ask if new information
is available.[24]
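In a browser this incremental updating is done by JavaScript running inside the page; the sketch below expresses the same polling idea in Python so that it stays self-contained. The update endpoint, its "since" parameter and the JSON shape of its replies are hypothetical.

# The in-page code would normally be JavaScript; this Python sketch shows the
# same idea: poll the server periodically and fetch only incremental updates
# instead of reloading the whole page. The endpoint and its API are hypothetical.
import json
import time
import urllib.request

ENDPOINT = "http://www.example.com/api/updates"   # hypothetical update feed
last_seen = 0

def poll_once(since: int):
    """Ask the server only for items newer than `since`."""
    with urllib.request.urlopen(f"{ENDPOINT}?since={since}", timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

while True:
    updates = poll_once(last_seen)          # small, incremental response
    for item in updates.get("items", []):
        print("new item:", item)            # a browser would patch the page here
        last_seen = max(last_seen, item["id"])
    time.sleep(2)                           # lapse of time before the next request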

WWW prefix
Many domain names used for the World Wide Web begin with www
because of the long-standing practice of naming Internet hosts (servers) according
to the services they provide. The hostname for a web server is often www, in the
same way that it may be ftp for an FTP server, and news or nntp for a USENET news
server. These host names appear as Domain Name System (DNS) subdomain
names, as in www.example.com. The use of 'www' as a subdomain name is not
required by any technical or policy standard; indeed, the first ever web server was
called nxoc01.cern.ch,[25] and many web sites exist without it. Many established
websites still use 'www', or they invent other subdomain names such as 'www2',
'secure', etc. Many such web servers are set up such that both the domain root
(e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the
same site; others require one form or the other, or they may map to different web
sites.

The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, under current DNS rules, only a subdomain (and not the bare domain root) can be aliased with a CNAME record, the same result cannot be achieved by using the bare domain root.
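A small sketch of how this can be observed from the client side: the standard-library call below resolves both the bare domain and its www subdomain and prints the canonical name and address list returned by DNS. The host names are placeholders, and the output depends entirely on how a given site has set up its records.

# Compare how the bare domain and its www subdomain resolve. Host names are
# placeholders; real results depend on the site's DNS records.
import socket

for name in ("example.com", "www.example.com"):
    canonical, aliases, addresses = socket.gethostbyname_ex(name)
    # If www.example.com is a CNAME for a server cluster, `canonical` will be
    # the cluster's name and `addresses` may list several IPs to spread load.
    print(f"{name!r:24} -> canonical={canonical!r} aliases={aliases} ips={addresses}")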

When a user submits an incomplete website address to a web browser in its address
bar input field, some web browsers automatically try adding the prefix "www" to the
beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what
might be missing. For example, entering 'microsoft' may be transformed to
http://www.microsoft.com/ and 'openoffice' to http://www.openoffice.org. This
feature started appearing in early versions of Mozilla Firefox, when it still had the
working title 'Firebird' in early 2003.[26] It is reported that Microsoft was granted a
US patent for the same idea in 2008, but only for mobile devices.[27]

The scheme specifier (http:// or https://) in URIs refers to the Hypertext Transfer
Protocol and to HTTP Secure respectively and so defines the communication
protocol to be used for the request and response. The HTTP protocol is fundamental
to the operation of the World Wide Web, and the encryption involved in HTTPS adds an essential layer of protection when confidential information such as passwords or banking details is to be exchanged over the public Internet. Web browsers usually prepend the scheme to URLs if it has been omitted.
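The following sketch imitates, in simplified form, the completion behaviour described above: prepend a scheme when none is given and guess a www/.com host from a bare keyword. It is an illustration of the idea, not the actual algorithm used by any particular browser.

# A simplified imitation of the browser heuristics described above: prepend a
# scheme when it is missing and guess a host from a bare keyword. This is an
# illustration, not the algorithm of any particular browser.
from urllib.parse import urlparse

def normalize(typed: str) -> str:
    candidate = typed.strip()
    if "://" not in candidate:
        candidate = "http://" + candidate        # browsers prepend the scheme
    parsed = urlparse(candidate)
    host = parsed.netloc
    if "." not in host:                          # bare keyword such as 'microsoft'
        host = f"www.{host}.com"                 # try the www prefix and .com suffix
    return f"{parsed.scheme}://{host}{parsed.path or '/'}"

print(normalize("microsoft"))           # -> http://www.microsoft.com/
print(normalize("www.example.com/a"))   # -> http://www.example.com/a
print(normalize("https://example.org")) # -> https://example.org/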

In English, www is pronounced by individually pronouncing the name of each character (double-u double-u double-u). Although some technical users pronounce it dub-dub-dub, this is not widespread. The English writer Douglas Adams once quipped in The
Independent on Sunday (1999): "The World Wide Web is the only thing I know of
whose shortened form takes three times longer to say than what it's short for," with
Stephen Fry later pronouncing it in his "Podgrammes" series of podcasts as "wuh
wuh wuh." In Mandarin Chinese, World Wide Web is commonly translated via a
phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and
literally means "myriad dimensional net",[28] a translation that very appropriately
reflects the design concept and proliferation of the World Wide Web. Tim Berners-
Lee's web-space states that World Wide Web is officially spelled as three separate
words, each capitalized, with no intervening hyphens.[29]

Privacy
Computer users, who save time and money, and who gain conveniences
and entertainment, may or may not have surrendered the right to privacy in
exchange for using a number of technologies including the Web.[30] Worldwide,
more than a half billion people have used a social network service,[31] and of
Americans who grew up with the Web, half created an online profile[32] and are
part of a generational shift that could be changing norms.[33][34] Facebook
progressed from U.S. college students to a 70% non-U.S. audience, and in 2009
estimated that only 20% of its members use privacy settings.[35] In 2010 (six years
after co-founding the company), Mark Zuckerberg wrote, "we will add privacy
controls that are much simpler to use".[36]
Privacy representatives from 60 countries have resolved to ask for laws to
complement industry self-regulation, for education for children and other minors
who use the Web, and for default protections for users of social networks.[37] They
also believe data protection for personally identifiable information benefits business
more than the sale of that information.[37] Users can opt-in to features in browsers
to clear their personal histories locally and block some cookies and advertising
networks[38] but they are still tracked in websites' server logs, and particularly web
beacons.[39] Berners-Lee and colleagues see hope in accountability and
appropriate use achieved by extending the Web's architecture to policy awareness,
perhaps with audit logging, reasoners and appliances.[40]

In exchange for providing free content, vendors hire advertisers who spy on Web
users and base their business model on tracking them.[41] Since 2009, they buy
and sell consumer data on exchanges (lacking a few details that could make it
possible to de-anonymize, or identify an individual).[42][41] Hundreds of millions of
times per day, Lotame Solutions captures what users are typing in real time, and
sends that text to OpenAmplify who then tries to determine, to quote a writer at The
Wall Street Journal, "what topics are being discussed, how the author feels about
those topics, and what the person is going to do about them".[43][44]

Microsoft backed away in 2008 from its plans for strong privacy features in Internet
Explorer,[45] leaving its users (50% of the world's Web users) open to advertisers
who may make assumptions about them based on only one click when they visit a
website.[46] Among services paid for by advertising, Yahoo! could collect the most
data about users of commercial websites, about 2,500 bits of information per month
about each typical user of its site and its affiliated advertising network sites. Yahoo!
was followed by MySpace with about half that potential and then by AOL–
TimeWarner, Google, Facebook, Microsoft, and eBay.[47]

Security
The Web has become criminals' preferred pathway for spreading malware.
Cybercrime carried out on the Web can include identity theft, fraud, espionage and
intelligence gathering.[48] Web-based vulnerabilities now outnumber traditional
computer security concerns,[49][50] and as measured by Google, about one in ten
web pages may contain malicious code.[51] Most Web-based attacks take place on
legitimate websites, and most, as measured by Sophos, are hosted in the United
States, China and Russia.[52] The most common of all malware threats are SQL injection attacks against websites.[53] Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript[54] and were exacerbated to some degree by Web 2.0 and Ajax web design that favors the use of scripts.[55] Today, by one estimate, 70% of all websites are open to XSS attacks on their users.[56]
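The two standard defences against the attacks just mentioned are parameterized queries (against SQL injection) and escaping of user-supplied text before it is embedded in HTML (against XSS). The sketch below shows both using only Python's standard library; the table, column names and sample data are invented for the example.

# A sketch of the two standard defenses against the attacks mentioned above:
# parameterized queries (SQL injection) and output escaping (XSS). It uses
# only the Python standard library; the table and data are invented.
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)",
             ("alice", "<script>alert('xss')</script>"))

# Untrusted input, e.g. taken from a URL query string or form field.
requested_name = "alice' OR '1'='1"

# SQL injection defense: pass user input as a bound parameter, never by
# string concatenation, so quote characters cannot alter the query.
row = conn.execute("SELECT name, bio FROM users WHERE name = ?",
                   (requested_name,)).fetchone()
print("lookup result:", row)   # None: the malicious string matches nothing

# XSS defense: escape user-supplied text before embedding it in HTML so any
# <script> tags are rendered as literal text instead of being executed.
name, bio = conn.execute("SELECT name, bio FROM users WHERE name = ?",
                         ("alice",)).fetchone()
print(f"<p>{html.escape(name)}: {html.escape(bio)}</p>")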

Proposed solutions vary to extremes. Large security vendors like McAfee already
design governance and compliance suites to meet post-9/11 regulations,[57] and
some, like Finjan, have recommended active real-time inspection of code and all
content regardless of its source.[48] Some have argued that for enterprise to see
security as a business opportunity rather than a cost center,[58] "ubiquitous,
always-on digital rights management" enforced in the infrastructure by a handful of
organizations must replace the hundreds of companies that today secure data and
networks.[59] Jonathan Zittrain has said users sharing responsibility for computing
safety is far preferable to locking down the Internet.[60]

Standards
Main article: Web standards


Many formal standards and other technical specifications and software define the
operation of different aspects of the World Wide Web, the Internet, and computer
information exchange. Many of the documents are the work of the World Wide Web
Consortium (W3C), headed by Berners-Lee, but some are produced by the Internet
Engineering Task Force (IETF) and other organizations.

Usually, when web standards are discussed, the following publications are seen as
foundational:

Recommendations for markup languages, especially HTML and XHTML, from the
W3C. These define the structure and interpretation of hypertext documents.
Recommendations for stylesheets, especially CSS, from the W3C.
Standards for ECMAScript (usually in the form of JavaScript), from Ecma
International.
Recommendations for the Document Object Model, from W3C.
Additional publications provide definitions of other essential technologies for the
World Wide Web, including, but not limited to, the following:

Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Internet, such as hypertext documents and images. URIs, often
called URLs, are defined by the IETF's RFC 3986 / STD 66: Uniform Resource
Identifier (URI): Generic Syntax, as well as its predecessors and numerous URI
scheme-defining RFCs;
HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1
and RFC 2617: HTTP Authentication, which specify how the browser and server
authenticate each other.
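As a rough illustration of the Basic scheme from RFC 2617, the sketch below builds the Authorization header by hand (base64 of "user:password") and sends it with a request; the URL and credentials are placeholders.

# A minimal sketch of RFC 2617-style Basic authentication: the client sends
# an Authorization header containing base64("user:password"). The URL and
# credentials below are placeholders.
import base64
import urllib.error
import urllib.request

url = "http://www.example.com/protected/"
user, password = "alice", "secret"

token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

try:
    with urllib.request.urlopen(request, timeout=10) as response:
        print("Status:", response.status)
except urllib.error.HTTPError as err:
    # 401 Unauthorized means the server rejected (or re-challenged) the client.
    print("Server replied:", err.code, err.reason)
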
Accessibility
Main article: Web accessibility
Access to the Web should be possible for everyone regardless of disability, whether visual, auditory, physical, speech-related, cognitive, or neurological. Accessibility features also help people with temporary disabilities, such as a broken arm, and the aging population as their abilities change.[61] The Web is used for receiving information as well as providing
information and interacting with society, making it essential that the Web be
accessible in order to provide equal access and equal opportunity to people with
disabilities.[62] Tim Berners-Lee once noted, "The power of the Web is in its
universality. Access by everyone regardless of disability is an essential aspect."[61]
Many countries regulate web accessibility as a requirement for websites.[63]
International cooperation in the W3C Web Accessibility Initiative led to simple
guidelines that web content authors as well as software developers can use to make
the Web accessible to persons who may or may not be using assistive technology.
[61][64]

Internationalization
The W3C Internationalization Activity works to ensure that web technology will work in all languages, scripts, and cultures.[65] Beginning in 2004 or
2005, Unicode gained ground and eventually in December 2007 surpassed both
ASCII and Western European as the Web's most frequently used character encoding.
[66] Originally RFC 3986 allowed resources to be identified by URI in a subset of US-
ASCII. RFC 3987 allows more characters—any character in the Universal Character
Set—and now a resource can be identified by IRI in any language.[67]

Statistics
According to a 2001 study, there were more than 550 billion documents on the Web, mostly in the invisible Web, or deep Web.[68] A 2002
survey of 2,024 million Web pages[69] determined that by far the most Web
content was in English: 56.4%; next were pages in German (7.7%), French (5.6%),
and Japanese (4.9%). A more recent study, which used Web searches in 75 different
languages to sample the Web, determined that there were over 11.5 billion Web
pages in the publicly indexable Web as of the end of January 2005.[70] As of March 2009, the indexable web contains at least 25.21 billion pages.[71] On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered one trillion unique URLs.[72] As of May 2009, over 109.5 million websites operated.[73] Of these 74% were
commercial or other sites operating in the .com generic top-level domain.[73]

Speed issues
Frustration over congestion issues in the Internet infrastructure and the high latency that results in slow browsing has led to a pejorative name for the
World Wide Web: the World Wide Wait.[74] Speeding up the Internet is an ongoing
discussion over the use of peering and QoS technologies. Other solutions to reduce
the congestion can be found at W3C.[75] Standard guidelines for ideal Web
response times are:[76]

0.1 second (one tenth of a second). Ideal response time. The user doesn't sense any
interruption.
1 second. Highest acceptable response time. Download times above 1 second
interrupt the user experience.
10 seconds. Unacceptable response time. The user experience is interrupted and
the user is likely to leave the site or system.
Caching
If a user revisits a Web page after only a short interval, the page data may
not need to be re-obtained from the source Web server. Almost all web browsers
cache recently obtained data, usually on the local hard drive. HTTP requests sent by a browser will usually only ask for data that has changed since the last download. If the locally cached data is still current, it will be reused. Caching helps reduce the
amount of Web traffic on the Internet. The decision about expiration is made
independently for each downloaded file, whether image, stylesheet, JavaScript,
HTML, or whatever other content the site may provide. Thus even on sites with
highly dynamic content, many of the basic resources only need to be refreshed
occasionally. Web site designers find it worthwhile to collate resources such as CSS
data and JavaScript into a few site-wide files so that they can be cached efficiently.
This helps reduce page download times and lowers demands on the Web server.

There are other components of the Internet that can cache Web content. Corporate
and academic firewalls often cache Web resources requested by one user for the
benefit of all. (See also Caching proxy server.) Some search engines also store
cached content from websites. Apart from the facilities built into Web servers that
can determine when files have been updated and so need to be re-sent, designers
of dynamically generated Web pages can control the HTTP headers sent back to
requesting users, so that transient or sensitive pages are not cached. Internet
banking and news sites frequently use this facility. Data requested with an HTTP
'GET' is likely to be cached if other conditions are met; data obtained in response to
a 'POST' is assumed to depend on the data that was POSTed and so is not cached.
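A minimal sketch of the revalidation described above: the client sends a conditional GET with an If-Modified-Since header, and either receives a fresh copy (200) or a 304 Not Modified telling it the cached copy can be reused. The URL and date are placeholders, and real browsers also honour Cache-Control and ETag headers.

# A sketch of the revalidation described above: ask the server for the resource
# only if it has changed since the cached copy was obtained. The URL and date
# are placeholders.
import urllib.error
import urllib.request

url = "http://www.example.com/style.css"
cached_copy = None
cached_date = "Mon, 01 Jan 2024 00:00:00 GMT"   # when our copy was obtained

request = urllib.request.Request(url, headers={"If-Modified-Since": cached_date})
try:
    with urllib.request.urlopen(request, timeout=10) as response:
        cached_copy = response.read()            # 200 OK: a newer version arrived
        print("Downloaded fresh copy,", len(cached_copy), "bytes")
except urllib.error.HTTPError as err:
    if err.code == 304:                          # 304 Not Modified: reuse the cache
        print("Cached copy is still current; no body was transferred")
    else:
        raise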
--------
Intranet
An intranet is a private network that is contained within an enterprise. It may
consist of many interlinked local area networks and also use leased lines in the wide
area network. Typically, an intranet includes connections through one or more
gateway computers to the outside Internet. The main purpose of an intranet is to
share company information and computing resources among employees. An
intranet can also be used to facilitate working in groups and for teleconferences.
An intranet uses TCP/IP, HTTP, and other Internet protocols and in general looks like
a private version of the Internet. With tunneling, companies can send private
messages through the public network, using the public network with special
encryption/decryption and other security safeguards to connect one part of their
intranet to another.

Typically, larger enterprises allow users within their intranet to access the public
Internet through firewall servers that have the ability to screen messages in both
directions so that company security is maintained. When part of an intranet is made
accessible to customers, partners, suppliers, or others outside the company, that
part becomes part of an extranet.

Extranet
An extranet is a private network that uses Internet technology and the public
telecommunication system to securely share part of a business's information or
operations with suppliers, vendors, partners, customers, or other businesses. An
extranet can be viewed as part of a company's intranet that is extended to users
outside the company. It has also been described as a "state of mind" in which the
Internet is perceived as a way to do business with other companies as well as to sell
products to customers.
An extranet requires security and privacy. These can include firewall server
management, the issuance and use of digital certificates or similar means of user
authentication, encryption of messages, and the use of virtual private networks
(VPNs) that tunnel through the public network.

Companies can use an extranet to:

•Exchange large volumes of data using Electronic Data Interchange (EDI)
•Share product catalogs exclusively with wholesalers or those "in the trade"
•Collaborate with other companies on joint development efforts
•Jointly develop and use training programs with other companies
•Provide or access services provided by one company to a group of other
companies, such as an online banking application managed by one company on
behalf of affiliated banks
•Share news of common interest exclusively with partner companies
------------------
VoIP
Voice over Internet Protocol (VoIP) is the family of technologies that allow IP
networks to be used for voice applications, such as telephony, voice instant
messaging, and teleconferencing. VoIP entails solutions at almost every layer of an
IP network--from specialized voice applications (like Skype) all the way down to low-
level quality measures that keep those applications running smoothly.
In this Article
The VoIP Technology
Why VoIP Now?
VoIP in Action
How IP Telephony Fits In
VoIP-Based Services
What's Next for VoIP?
Unless you've been sleeping under a very big rock for the last year, you've certainly
heard the phrase "Voice over IP" uttered. Perhaps you've seen those hilarious
Vonage commercials that feature painful and embarrassing accidents caught on
tape, promising to let you dump your local phone company in order to save big on
your phone bill. You may also have seen the Cisco telephones that are curiously
inserted in prime-time shows like 24.

What is all the hubbub about, anyway? Why, VoIP, of course! VoIP, the fabulous
secret ingredient in Vonage, Skype, Cisco CallManager, and a host of other
revolutionary technology products you may have already encountered on TV, in the
news, or in person. But what makes these products so revolutionary? What is it
about VoIP that is such a big deal?

The VoIP Technology


Voice over Internet Protocol is a family of technologies that enable voice
communications using IP networks like the internet. Inventive developers and
entrepreneurs have created an industry around VoIP technology in its many forms:
desktop applications, telephone services, and corporate phone systems. VoIP is a
core technology that drives everything from voice-chat software loaded on a desktop PC or Mac to full-blown IP-based telecommunications networks in large corporations. To the Wall Street speculator, VoIP is a single technology investment
with many revenue streams. To the enterprise network engineer, it's a way to
simplify the corporate network and improve the telephony experience for users of
the network. To the home user, it's a really cool way to save money on the old
phone bill.

But how? What makes VoIP do all this awesome stuff? Read on.

Why VoIP Now?


The concept isn't actually that new: VoIP has been touted as a long-distance killer
since the late 1990s, when goofy PC products like Internet Phone were starting to
show up. But the promise of Voice over IP was lost in the shuffle of buggy
applications and the slow-to-start broadband revolution. Without broadband
connections, VoIP really isn't worthwhile. So early adopters of personal VoIP
software like CUSeeMe and NetMeeting were sometimes frustrated by bad sound
quality, and the first generation of VoIP products ultimately failed in the
marketplace.
Fast forward to Fall 2005. Suddenly, everybody is talking about VoIP again. Why?
There may be no greater reason than the sudden success of a freeware VoIP chat
program called Skype.

VoIP in Action
Skype is an instant messaging program that happens to have a peer-to-peer
(modeled after Kazaa) global voice network at its disposal, so you can use it to call
people on your buddy list using your PC or Mac. All you need is broadband, a
microphone, and a pair of speakers or headphones. Voice calling alone doesn't set
Skype apart from other IM applications like AIM or Windows Messenger--they also
support voice. But Skype supports voice calling in a way that those applications can
only dream of: Skype works in almost any broadband-connected network
environment, even networks with firewalls that often break other voice-chatting
apps. Plus, Skype's variable-bitrate sound codec makes it less prone to sound
quality issues than its predecessors. In a nutshell, Skype just works. Perhaps that's
why Skype's official slogan is "Internet Telephony that Just Works."

The world has noticed. 150 million downloads later, Skype now offers the ability for
its users to call regular phone numbers from their PCs, a feature known as
SkypeOut. Skype also offers a voicemail service and can route incoming calls to a
certain phone number right to a user's desktop PC. There's even a Skype API that
allows Windows and Mac programmers to integrate the Skype client with other
applications. Videoconferencing add-ons, Outlook integration, and personal
answering machines are just some of the cool software folks have developed using
the Skype API.

How IP Telephony Fits In


But Skype can't take all of the credit for the recent growth of Voice over IP. A
number of enterprise telephone system vendors have heavily promoted what they
call "IP telephony"--the art of building corporate phone systems using Ethernet
devices and host-based servers instead of old-fashioned PBX chassis and legacy
technology. Cisco Systems and Avaya were two of the earliest players in the VoIP-
phone-system arena, and their stubborn support of IP-based voice technology is
beginning to pay off. More and more corporate customers are integrating IP phones
and servers, and upgrading their IP networks to support voice applications,
interested primarily in the productivity boost and long-term cost savings of running
a single converged network instead of maintaining legacy voice equipment. This
transition is a lot like the move from mainframes and minicomputers to personal
computers a generation ago.

On two fronts--the corporate phone system and that of the home user--VoIP is
transforming the global communications matrix. Instead of two separate notions of
a global network (one for voice calling and one for Internet Protocol), a single
converged network is arising, carrying both voice and data with the same
networking protocol, IP. Steadily, corporations and domestic phone subscribers are
migrating their voice services from the old voice plane to the new one, and next-
generation, IP-based phone companies have rushed in to help them make the
move.
VoIP-Based Services
By now you've probably seen ads for companies like Vonage and Packet8. These
services promise ultra-cheap voice calling service via your broadband internet
connection. Some offer calling packages as low as $9.95 per month. Their secret
weapon is VoIP. Voice over IP service providers use the internet to carry voice
signals from their networks to your home phone. Because VoIP telecommunication
isn't regulated the way traditional phone line telecommunication is, VoIP providers
like Vonage can offer drastically lower calling rates.

The catch? You've got to put up with the occasional hiccup in your voice service,
caused by the one thing legacy telephone technology has built-in that VoIP doesn't:
guaranteed quality. Because VoIP uses packets to transmit data like other services
on the internet, it cannot provide the quality guarantees of old-fashioned, non-
packet-based telephone lines. But this is changing, too. Efforts are underway on all
fronts (service providers, Internet providers, and VoIP solution makers) to adapt
quality-of-service techniques to VoIP services, so that one day, your VoIP calls may
sound as good as (or better than) your regular land-line calls.

Today, if you want to build a fully quality-enabled private VoIP network, you can.
Cisco, Foundry Networks, Nortel, and other network equipment makers all support
common quality-of-service standards, meaning corporate networks are only an
upgrade away from effective convergence of voice and data.

But it will be quite some time before the internet itself is quality-enabled. Indeed,
the internet may never be fully quality-enabled. This hasn't stopped enterprising
network gearheads like me from trying to connect calls over the internet, of course.
Hey, if Skype works so well, why can't corporate phone calls? Enterprise phone
administrators have found that it is actually very easy to equip mobile users with
VoIP phones to place calls on the company phone system by connecting to it over
the internet--from hotel rooms or home offices--but the quality of these calls is sort
of hit or miss, like a cell phone when you drive through a "dead zone" in the cell
network.

What's Next for VoIP?


A host of brand new, VoIP-enabled cell phones will soon be ready for action. Imagine
driving to work, receiving a call on your cell phone from a client, and then
continuing that call on the corporate Wi-Fi network as you walk into the front office, all without any interruption to your call in progress. The cell network will just "hand
off" the call to your Wi-Fi network. This sort of technology exists today, and will be a
commonplace feature of corporate phone systems in years to come.

Cost savings, uber-slick telephony features, network convergence--VoIP is the technology at the root of all these trends, and you should expect to see a lot more
news about VoIP in the coming months and years. If you haven't used Voice over IP
products yet, try out a broadband phone service like Broadvox Direct or Vonage,
and download a copy of Skype or the Gizmo Project, two excellent VoIP PC calling
applications.
To learn more, you can visit VoIPFan.com or browse O'Reilly's growing selection of
books about IP telephony, including the book that's been dubbed the "Voice over IP
of reason:" Switching to VoIP.
Wireless Network
Wireless network refers to any type of computer network that is wireless, and is
commonly associated with a telecommunications network whose interconnections
between nodes are implemented without the use of wires.[1] Wireless
telecommunications networks are generally implemented with some type of remote
information transmission system that uses electromagnetic waves, such as radio
waves, for the carrier and this implementation usually takes place at the physical
level or "layer" of the network.[2]


Types of wireless connections
Wireless PAN
Wireless Personal Area Networks (WPANs) interconnect devices within a relatively small area, generally within reach of a person. For example, Bluetooth and infrared links provide a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications.[3]
Wi-Fi PANs are also getting popular as vendors have started integrating Wi-Fi in
variety of consumer electronic devices. Intel My WiFi and Windows 7 virtual Wi-Fi
capabilities have made Wi-Fi PANs simpler and easier to set up and configure.[4]

Wireless LAN
A wireless local area network (WLAN) links two or more devices using a wireless
distribution method (typically spread-spectrum or OFDM radio), and usually
providing a connection through an access point to the wider internet. This gives
users the mobility to move around within a local coverage area and still be
connected to the network.

Wi-Fi: Wi-Fi is increasingly used as a synonym for 802.11 WLANs, although it is technically a certification of interoperability between 802.11 devices.
Fixed Wireless Data: This implements point to point links between computers or
networks at two locations, often using dedicated microwave or laser beams over
line of sight paths. It is often used in cities to connect networks in two or more
buildings without physically wiring the buildings together.
Wireless MAN
Wireless Metropolitan Area Networks are a type of wireless network
that connects several Wireless LANs.

WiMAX is a type of Wireless MAN and is described by the IEEE 802.16 standard.[5]
Wireless WAN
Wireless wide area networks are wireless networks that typically
cover large outdoor areas. These networks can be used to connect branch offices of
business or as a public internet access system. They are usually deployed on the
2.4 GHz band. A typical system contains base station gateways, access points and
wireless bridging relays. Other configurations are mesh systems where each access
point acts as a relay also. When combined with renewable energy systems such as
photo-voltaic solar panels or wind systems they can be stand alone systems.

Mobile devices networks
Further information: mobile telecommunications
With the development of smart phones, cellular telephone networks routinely carry
data in addition to telephone conversations:

Global System for Mobile Communications (GSM): The GSM network is divided
into three major systems: the switching system, the base station system, and the
operation and support system. The cell phone connects to the base station system,
which then connects to the operation and support station; it then connects to the
switching station where the call is transferred to where it needs to go. GSM is the
most common standard and is used for a majority of cell phones.[6]
Personal Communications Service (PCS): PCS is a radio band that can be used by
mobile phones in North America and South Asia. Sprint happened to be the first
service to set up a PCS.
D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is
being phased out due to advancement in technology. The newer GSM networks are
replacing the older system.
Uses

[Image: an embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 mini PCI Wi-Fi card, widely used by wireless Internet service providers (WISPs) in the Czech Republic.]
Wireless networks have continued to develop and their uses have grown significantly. Cellular phones are part of huge wireless network systems. People use
these phones daily to communicate with one another. Sending information overseas
is possible through wireless network systems using satellites and other signals to
communicate across the world. Emergency services such as the police department
utilize wireless networks to communicate important information quickly. People and
businesses use wireless networks to send and share data quickly whether it be in a
small office building or across the world.

Another important use for wireless networks is as an inexpensive and rapid way to
be connected to the Internet in countries and regions where the telecom
infrastructure is poor or there is a lack of resources, as in most developing
countries.
Compatibility issues also arise when dealing with wireless networks. Different
components not made by the same company may not work together, or might
require extra work to fix these issues. Wireless networks are typically slower than
those that are directly connected through an Ethernet cable.

A wireless network is more vulnerable, because anyone can try to break into a
network broadcasting a signal. Many networks offer WEP - Wired
Equivalent Privacy - security systems which have been found to be vulnerable to
intrusion. Though WEP does block some intruders, the security problems have
caused some businesses to stick with wired networks until security can be
improved. Another type of security for wireless networks is WPA - Wi-Fi Protected
Access. WPA provides more security to wireless networks than a WEP security set
up. The use of firewalls also helps to address security breaches in wireless networks that are more vulnerable.

Environmental concerns and health hazard
Starting around 2009, there have been
increased concerns about the safety of wireless communications, despite little
evidence of health risks so far.[7] The president of Lakehead University refused to
agree to installation of a wireless network citing a California Public Utilities
Commission study which said that the possible risk of tumors and other diseases
due to exposure to electromagnetic fields (EMFs) needs to be further investigated.
[8]
-----------------
Last Mile

The "last mile" or "last kilometer" is the final leg of delivering connectivity from a
communications provider to a customer. The phrase is therefore often used by the
telecommunications and cable television industries. The actual distance of this leg
may be considerably more than a mile, especially in rural areas. It is typically seen
as an expensive challenge because "fanning out" wires and cables is a considerable
physical undertaking. Because the last mile of a network to the user is also the first
mile from the user to the world, the term "first mile" is sometimes used.

To solve the problem of providing enhanced services over the last mile, some firms
have been mixing networks for decades. One example is Fixed Wireless Access,
where a wireless network is used instead of wires to connect a stationary terminal
to the wireline network.

Various solutions are being developed which are seen as an alternative to the "last
mile" of standard incumbent local exchange carriers: these include WiMAX and BPL
(Broadband over Power Line) applications.


Business "last mile"Connectivity from the local telephone exchanges to the


customer premises is also called the "last mile". In many countries this is often an
ISDN30 connection, delivered through either a copper or fibre cable. This ISDN30
can carry 30 simultaneous telephone calls and many direct dial telephone numbers,
(DDI's).

When leaving the telephone exchange, the ISDN30 cable can be buried in the
ground, usually in ducting, at very little depth. This makes any business telephone
lines vulnerable to being dug up during streetworks, liable to flooding during heavy
storms and general wear and tear due to natural elements. Loss of the "last mile" therefore means that no calls can be delivered to the affected business.
Business continuity planning often provides for this type of technical failure.

Any business with ISDN30 type of connectivity should provide for this failure within
its business continuity planning. There are many ways to achieve this, as
documented by the CPNI.

1. Dual Parenting.
This is where the telephone carrier provides the same numbers from two different
telephone exchanges. If the cable is damaged from one telephone exchange to the
customer premises most of the calls can be delivered from the surviving route to
the customer.

2. Diverse Routing.
This is where the carrier can provide more than one route to bring the ISDN 30’s
from the exchange, or exchanges, (as in dual parenting), but they may share
underground ducting and cabinets.

3. Separacy.
This is where the carrier can provide more than one route to bring the ISDN 30’s
from the exchange, or exchanges, (as in dual parenting), but they may not share
underground ducting and cabinets, and therefore should be absolutely separate
from the telephone exchange to the customer premises.

4. Exchange based solutions.


This is where a specialist company working in association with the carriers offers an
enhancement to the ability to divert ISDN30s upon failure to any other number or group of numbers. Carrier diversions are usually limited to all of the ISDN30 DDI numbers being delivered to a single number. In the UK, GemaTech offers this
service in association with all of the carriers other than Verizon. By being in the
exchanges, the GemaTech version offers a part diversion service if required and
voice recording of calls if required.

5. Non-exchange based diversion services.


This is where a specialist company working in association with BT offers an
enhancement to the ability to divert ISDN30s upon failure to any other number or group of numbers. Carrier diversions are usually limited to all of the ISDN30 DDI numbers being delivered to a single number. In the UK, Teamphone offers this service in association with BT. By not being in the exchanges, the Teamphone version offers an all-or-nothing diversion service if required, but does not offer voice recording of calls.

6. Ported number services.


This is where customers' numbers can be ported to a specialist company, where the
numbers are pointed to the ISDN30 DDI numbers during business as usual and
delivered to alternative numbers during a business continuity need. These are
generally carrier independent and there are a number of companies offering such
solutions in the UK.

7. Hosted numbers.
This is where the carriers or specialist companies can host the customers' numbers within their own or the carriers' networks and deliver calls over an IP network to the customers' sites. When a diversion service is required, the calls can be pointed to
alternative numbers.

8. Inbound numbers, (08 type services).


This is where the carriers or specialist companies can offer 08/05/03 prefixed
numbers to deliver to the ISDN30 DDI numbers and can point them to alternative
numbers in the event of a diversion requirement. Both carriers and specialist
companies offer this type of service in the UK.

Existing delivery system problems
The increasing worldwide demand for rapid, low-latency and high-volume communication of information to homes and businesses
has made economical information distribution and delivery increasingly important.
As demand has escalated, particularly fueled by the widespread adoption of the
Internet, the need for economical high-speed access by end-users located at
millions of locations has ballooned as well. As requirements have changed, existing
systems and networks which were initially pressed into service for this purpose
have proven to be inadequate. To date, although a number of approaches have
been tried and used, no single clear solution to this problem has emerged. This
problem has been termed "The Last Mile Problem".

As expressed by Shannon's equation for channel information capacity, the omnipresence of noise in information systems sets a minimum signal-to-noise ratio requirement in a channel, even when adequate spectral bandwidth is available.
Since the integral of the rate of information transfer with respect to time is
information quantity, this requirement leads to a corresponding minimum energy
per bit. The problem of sending any given amount of information across a channel
can therefore be viewed in terms of sending sufficient Information-Carrying Energy
(ICE). For this reason the concept of an ICE "pipe" or "conduit" is relevant and useful
for examining existing systems.
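As a worked illustration of these relationships, the short calculation below evaluates the Shannon-Hartley capacity C = B * log2(1 + S/N) for an illustrative bandwidth and signal-to-noise ratio, and prints the theoretical lower bound on energy per bit relative to noise density (Eb/N0 >= ln 2, about -1.59 dB). The figures are chosen only for illustration.

# A worked example of the relationships referred to above. The Shannon-Hartley
# capacity is C = B * log2(1 + S/N); in the presence of noise, delivering a
# given quantity of information always costs at least a minimum signal energy.
import math

B = 1.0e6            # channel bandwidth in hertz (1 MHz), illustrative
snr_db = 30.0        # signal-to-noise ratio in decibels, illustrative
snr = 10 ** (snr_db / 10)

capacity = B * math.log2(1 + snr)            # maximum error-free bit rate
print(f"Capacity: {capacity / 1e6:.2f} Mbit/s")

# At the Shannon limit, the required energy per bit over noise spectral density
# (Eb/N0) cannot fall below ln(2), roughly -1.59 dB.
eb_n0_min = math.log(2)
print(f"Minimum Eb/N0: {eb_n0_min:.3f} ({10 * math.log10(eb_n0_min):.2f} dB)")
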
The distribution of information to a great number of widely separated end-users can
be compared to the distribution of many other resources. Some familiar analogies
are:

blood distribution to a large number of cells over a system of veins, arteries and
capillaries
water distribution by a drip irrigation system to individual plants, including rivers,
aqueducts, water mains etc.
Nourishment of a plant's leaves through its roots, trunk and branches
All of these have in common conduits which carry a relatively small amount of a
resource a short distance to a very large number of physically separated endpoints.
Also common are conduits supporting more voluminous flow which combine and
carry the many individual portions over much greater distances. The shorter, lower-
volume conduits which individually serve only one or a small fraction of the
endpoints, may have far greater combined length than the larger capacity ones.

The high-capacity conduits in these systems tend to also have in common the
ability to efficiently transfer the resource over a long distance. Only a small fraction
of the resource being transferred is either wasted, lost, or misdirected. The same
cannot necessarily be said of the lower-capacity conduits. One reason for this has to
do with the efficiency of scale. These conduits which are located closer to the
endpoint, or end-user, do not individually have as many users supporting them.
Even though they are smaller, each has the overhead of an "installation": obtaining
and maintaining a suitable path over which the resource can flow. The funding and
resources supporting these smaller conduits tend to come from the immediate
locale. This can have the advantage of a "small-government model." That is, the
management and resources for these conduits is provided by local entities and
therefore can be optimized to achieve the best solutions in the immediate
environment and also to make best use of local resources. However, the lower
operating efficiencies and relatively greater installation expenses, compared with
the transfer capacities, can cause these smaller conduits, as a whole, to be the
most expensive and difficult part of the complete distribution system.

These characteristics have been displayed in the birth, growth, and funding of the
Internet. The earliest inter-computer communication tended to be accomplished
with direct wireline connections between individual computers. These grew into
clusters of small Local Area Networks (LANs). The TCP/IP suite of protocols was born
out of the need to connect several of these LANs together, particularly as related to
common projects among the defense department, industry and some academic
institutions. ARPANET came into being to further these interests. In addition to
providing a way for multiple computers and users to share a common inter-LAN
connection, the TCP/IP protocols provided a standardized way for dissimilar
computers and operating systems to exchange information over this inter-network.
The funding and support for the connections among LANs could be spread over one
or even several LANs. As each new LAN, or subnet, was added, the new subnet's
constituents enjoyed access to the greater network. At the same time the new
subnet made a contribution of access to any network or networks with which it was
already networked. Thus the growth became a mutually inclusive or "win-win"
event.
In general, economy of scale makes an increase in capacity of a conduit less
expensive as the capacity is increased. There is an overhead associated with the
creation of any conduit. This overhead is not repeated as capacity is increased
within the potential of the technology being utilized. As the Internet has grown in
size, by some estimates doubling in number of users every eighteen months,
economy of scale has resulted in increasingly large information conduits providing
the longest distance and highest capacity backbone connections. In recent years,
the capacity of fiber-optic communication, aided by a supporting industry, has
resulted in an expansion of raw capacity, so much so that in the United States a
large amount of installed fiber infrastructure is not being used because it is currently excess capacity, so-called "dark fiber".

This excess backbone capacity exists in spite of the trend of increasing per-user
data rates and overall quantity of data. Initially, only the inter-LAN connections were
high speed. End-users used existing telephone lines and modems which were
capable of data rates of only a few hundred bit/s. Now almost all end users enjoy
access at 100 or more times those early rates. Notwithstanding this great increase
in user traffic, the high-capacity backbones have kept pace, and information
capacity and rate limitations almost always occur near the user. The economy of
scale along with the fundamental capability of fiber technology have kept the high-capacity conduits adequate but have not satisfied the appetite of home users. The
last mile problem is one of economically serving an increasing mass of end-users
with a solution to their information needs.

Economical information transfer
Before considering the characteristics of existing last-mile information delivery mechanisms, it is important to further examine what
makes information conduits effective. As the Shannon-Hartley theorem shows, it is
a combination of bandwidth and signal-to-noise ratio which determines the
maximum information rate of a channel. The product of the average information
rate and time yields total information transfer. In the presence of noise, this
corresponds to some amount of transferred information-carrying energy. Therefore
the economics of information transfer may be viewed in terms of the economics of
the transfer of ICE.

Effective last-mile conduits must:

1. Deliver adequate signal power, S.
2. Have low loss (little conversion of the signal into unusable energy forms).
3. Support wide transmission bandwidth.
4. Deliver a high signal-to-noise ratio (SNR) — low unwanted-signal (noise) power, N.
5. Provide nomadic connectivity.
In addition to these factors, a good solution to the last-mile problem must provide
each user:

1. High availability and reliability.
2. Low latency: latency must be small compared with required interaction times.
3. High per-user capacity: a conduit which is shared among multiple end-users must provide a correspondingly higher capacity in order to properly support each individual user, and this must be true for information transfer in each direction.
4. Affordability: suitable capacity must be financially viable.
Existing last mile delivery systems
Wired systems (including optical fiber)
Wired systems provide guided conduits for Information-Carrying Energy (ICE). They all
have some degree of shielding which limits the susceptibility to external noise
sources. These transmission lines have losses which are proportional to length.
Without the addition of periodic amplification, there is some maximum length
beyond which all of these systems fail to deliver adequate S/N to support
information flow. Dielectric optical fiber systems support heavier flow, at higher
cost.

Local area networks (LAN)


Traditional wired local area networking systems require copper coaxial cable or
twisted pair to be run between or among two or more of the nodes in the network.
Common systems operate at 100 Mbit/s and newer ones also support 1000 Mbit/s or
more. While length may be limited by collision detection and avoidance
requirements, signal loss and reflections over these lines also set a maximum
distance. The decrease in information capacity made available to an individual user
is roughly proportional to the number of users sharing a LAN.

Telephone
In the late 20th century, improvements in the use of existing copper telephone lines
increased their capabilities if maximum line length is controlled. With support for
higher transmission bandwidth and improved modulation, these digital subscriber
line schemes have increased capability 20-50 times as compared to the previous
voiceband systems. These methods are not based on altering the fundamental
physical properties and limitations of the medium which, apart from the introduction
of twisted pairs, are no different today than when the first telephone exchange was
opened in 1877 by the Bell Telephone Company. The history and long life of copper-
based communications infrastructure is both a testament to our ability to derive
new value from simple concepts through technological innovation – and a warning
that copper communications infrastructure is beginning to offer diminishing returns
on continued investment.[1]

CATV
Community Access Cable Television Systems, also known simply as "cable", have
been expanded to provide bidirectional communication over existing physical
cables. However, they are by nature shared systems and the spectrum available for
reverse information flow and achievable S/N are limited. As was done for the initial
unidirectional (TV) communication, cable loss is mitigated through the use of
periodic amplifiers within the system. These factors set an upper limit on per-user
information capacity, particularly when many users share a common section of
cable or access network.

Optical fiber
Fiber offers high information capacity and after the turn of the 21st century became
the deployed medium of choice given its scalability in the face of increasing
bandwidth requirements of modern applications.
In 2004, according to Richard Lynch, EVP and CTO of telecom giant Verizon, they
saw the world moving toward vastly higher bandwidth applications as consumers
loved everything broadband had to offer, and eagerly devoured as much as they
could get, including two-way, user-generated content. Copper and coaxial networks
wouldn’t – in fact, couldn’t – satisfy these demands, which precipitated Verizon's
aggressive move into Fiber-to-the-home via FiOS.[2]

Fiber is a future-proof technology that meets the needs of today's users, but unlike
other copper-based and wireless last-mile mediums, also has the capacity for years
to come, by upgrading the end-point optics and electronics, without changing the
fiber infrastructure. The fiber itself is installed on existing pole or conduit
infrastructure and most of the cost is in labor, providing good regional economic
stimulus in the deployment phase and providing a critical foundation for future
regional commerce.

Wireless delivery systems
Mobile CDN coined the term the 'mobile mile' to categorize the last mile connection when a wireless system is used to reach the customer. In contrast to wired delivery systems, wireless systems use unguided
waves to transmit ICE. They all tend to be unshielded and have a greater degree of
susceptibility to unwanted signal and noise sources. Because these waves are not
guided but diverge, in free space these systems have attenuation following an
inverse-square law, inversely proportional to distance squared. Losses thus increase
more slowly with increasing length than for wired systems whose loss increases
exponentially. In a free space environment, beyond some length, the losses in a
wireless system are less than those in a wired system. In practice, the presence of
atmosphere, and especially obstructions caused by terrain, buildings and foliage
can greatly increase the loss above the free space value. Reflection, refraction and
diffraction of these waves can also alter their transmission characteristics and
require specialized systems to accommodate the accompanying distortions.
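
The contrast between the two loss laws can be made concrete with a small sketch.
Free-space spreading loss grows as 20·log10 of the distance (in dB), while a wired
line loses a fixed number of dB per unit length; the 10 dB/km cable figure below is an
arbitrary assumption chosen only to show the crossover, and antenna gains are ignored:

    import math

    def free_space_loss_db(d_m: float, ref_m: float = 1.0) -> float:
        # Inverse-square spreading relative to a 1 m reference: 20*log10(d/ref) dB.
        return 20 * math.log10(d_m / ref_m)

    def cable_loss_db(d_m: float, db_per_km: float = 10.0) -> float:
        # Wired loss grows linearly in dB with length, i.e. exponentially in power.
        return db_per_km * d_m / 1000.0

    for d in (100, 1_000, 10_000):
        print(f"{d:6d} m   free space {free_space_loss_db(d):6.1f} dB"
              f"   cable {cable_loss_db(d):6.1f} dB")

At 100 m the cable is far ahead, but by 10 km the free-space loss (80 dB) is already
lower than the assumed cable loss (100 dB), which is the crossover the paragraph
describes.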

Wireless systems have an advantage over wired systems in last-mile applications in
that no lines need to be installed. However, they have the disadvantage that their
unguided nature makes them more susceptible to unwanted noise and interfering
signals, so spectral reuse can be limited.

Lightwaves and free-space optics


Visible and infrared light waves are much shorter than radio-frequency waves. Their
use to transmit data is referred to as free-space optical communication. Because the
waves are short, they can be focused or collimated with a small lens or antenna to a
much higher degree than radio waves, so a greater portion of the transmitted signal
can be recovered by the receiving device. Also, because of the high carrier frequency,
a high data transfer rate may be available. However, in practical last-mile
environments, obstruction and de-steering of these beams, and absorption by elements
of the atmosphere, particularly fog and rain over longer paths, can greatly restrict
their use for last-mile wireless communications. Longer (redder) waves suffer less
obstruction but may carry lower data rates. See RONJA.
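
The advantage in focusing comes from diffraction: the divergence of a beam launched
from an aperture of diameter D is roughly λ/D radians, so a shorter wavelength can be
collimated far more tightly from the same-size optic. The 5 cm aperture below is an
assumed example value:

    # Diffraction-limited divergence, roughly wavelength / aperture (radians).
    def divergence_mrad(wavelength_m: float, aperture_m: float) -> float:
        return 1e3 * wavelength_m / aperture_m

    aperture = 0.05  # assumed 5 cm lens or dish
    print(f"infrared, 1550 nm: {divergence_mrad(1550e-9, aperture):.3f} mrad")  # ~0.03 mrad
    print(f"microwave, 3 cm  : {divergence_mrad(0.03, aperture):.0f} mrad")     # ~600 mrad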

Radio waves
Radio frequencies (RF), from low frequencies through the microwave region, have
wavelengths much longer than visible light. Although this means the beams cannot be
focused nearly as tightly as for light, it also means that the aperture or "capture
area" of even the simplest, omni-directional antenna is much larger than that of a
lens in any feasible optical system. Because the effective aperture of a given type of
antenna shrinks as the wavelength shrinks, systems that are not highly directional
show greatly increased attenuation, or "path loss", as the frequency rises. In
actuality, the term path loss is something of a misnomer, because no energy is
actually lost on a free-space path; it is merely not captured by the receiving
antenna. The apparent reduction in transmission as frequency is increased is thus an
artifact of the change in the aperture of a given type of antenna.

Relative to the last-mile problem, these longer wavelengths have an advantage over
light waves when omni-directional or sectored transmissions are considered. The larger
aperture of radio antennas results in much greater signal levels for a given path
length and therefore higher information capacity. On the other hand, the lower carrier
frequencies cannot support the high information bandwidths that, per Shannon's
equation, are required once the practical limits of S/N have been reached.
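
Shannon's equation referred to here is the channel capacity formula
C = B·log2(1 + S/N): once S/N is pinned at a practical ceiling, capacity can grow only
with bandwidth B, and wide contiguous bandwidth is scarce at low carrier frequencies.
The bandwidths and the 30 dB ceiling below are illustrative assumptions:

    import math

    def shannon_capacity_mbps(bandwidth_hz: float, snr_linear: float) -> float:
        # C = B * log2(1 + S/N), returned in Mbit/s.
        return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

    snr = 10 ** (30 / 10)                     # assume a practical ceiling of 30 dB
    for bw in (200e3, 20e6, 500e6):           # narrowband channel, Wi-Fi-like, microwave-like
        print(f"B = {bw / 1e6:7.1f} MHz  ->  C ~ {shannon_capacity_mbps(bw, snr):8.1f} Mbit/s")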

For the above reasons, wireless radio systems are optimal for lower-information-
capacity broadcast communications delivered over longer paths, while wireless
light-wave systems are most useful for high-capacity, highly directive, point-to-point
links over short ranges.

One-way (broadcast) radio and television communications


Historically, most high-information-capacity broadcast has used lower frequencies,
generally no higher than the UHF television region, with television itself being a
prime example. Terrestrial television has generally been limited to the region above
50 MHz where sufficient information bandwidth is available, and below 1000 MHz,
due to problems associated with increased path loss as mentioned above.

Two-way wireless communications


Two-way communication systems have primarily been limited to lower-information-
capacity applications such as audio, facsimile, or radio teletype. For the most part,
higher-capacity systems, such as two-way video or terrestrial microwave telephone and
data trunks, have been confined to UHF or microwave frequencies and to point-to-point
paths. Higher-capacity systems such as third-generation (3G) cellular telephone
networks require a large infrastructure of closely spaced cell sites in order to
maintain communications in typical environments, where path losses are much greater
than in free space and where users require omni-directional access.

Satellite communications
For information delivery to end-users, satellite systems, by nature, have relatively
long path lengths, even for low earth-orbiting satellites. They are also very
expensive to deploy and therefore each satellite must serve many users.
Additionally, the very long paths of geostationary satellites cause information
latency that makes many real-time applications unusable. As a solution to the last-
mile problem, satellite systems have application and sharing limitations. The ICE
which they transmit must be spread over a relatively large geographical area. This
causes the received signal to be relatively small, unless very large or directional
terrestrial antennas are used. A parallel problem exists when the satellite is
receiving: the satellite system must have a very great information capacity to
accommodate a multitude of sharing users, and each user must have a large antenna,
with the attendant directivity and pointing requirements, to obtain even a modest
information rate. These requirements render high-capacity, bi-directional satellite
systems uneconomical, and are one reason the Iridium satellite system was not more
successful.
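
The latency penalty of a geostationary path follows directly from its length: the
orbit sits roughly 35,786 km above the equator, so even at the speed of light a single
up-and-down hop takes about a quarter of a second before any processing or queuing is
added:

    # Propagation delay to and from a geostationary satellite.
    C = 299_792_458           # speed of light, m/s
    GEO_ALT = 35_786_000      # approximate altitude above the equator, m

    one_hop = 2 * GEO_ALT / C                          # ground -> satellite -> ground
    print(f"up and down:      {one_hop * 1000:.0f} ms")      # ~239 ms
    print(f"request + reply:  {2 * one_hop * 1000:.0f} ms")  # ~477 ms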

Broadcast versus point-to-point


For both terrestrial and satellite systems, economical, high-capacity, last-mile
communication requires point-to-point transmission. Except over extremely small
geographic areas, broadcast systems can deliver a high S/N only at low frequencies,
where there is not enough spectrum to support the information capacity needed by a
large number of users. Although complete "flooding" of a region can be accomplished,
such systems have the fundamental drawback that most of the radiated ICE never reaches
a user and is wasted. As information requirements increase, broadcast "wireless mesh"
systems (sometimes referred to as microcells or nano-cells), which are small enough to
serve a relatively small number of local users, require a prohibitively large number
of broadcast locations or "points of presence", along with a large amount of excess
capacity to make up for the wasted energy.

Intermediate system

Recently a new type of information transport, midway between wired and wireless
systems, has been discovered. Called E-Line, it uses a single central conductor but no
outer conductor or shield. The energy is transported as a plane wave which, unlike
radio, does not diverge, yet, like radio, has no outer guiding structure. This system
exhibits a combination of the attributes of wired and wireless systems and can support
high information capacity utilizing existing power lines over a broad range of
frequencies, from RF through microwave. See BPL (Broadband over Power Line).

Courier

Wizzy Digital Courier is a project to distribute useful data to places with no
Internet connection. Primarily aimed at e-mail, it also carries web content (stored
locally in a web cache).

As an implementation of a sneakernet, its delivery mechanism is a USB flash drive.

The USB stick uses the UUCP protocol, carrying information to and from a
better-connected location, perhaps a school or local business, which acts as the
drop-off point for e-mail and fetches web content by proxy. The e-mail and web content
are re-packaged as UUCP transactions and ferried back on the USB flash drive.
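
Mechanically this is ordinary store-and-forward: outbound work is queued as files, the
queue is copied onto the removable medium, and it is replayed at the better-connected
end. The sketch below is a generic spool-and-ferry illustration only, not Wizzy's
actual UUCP implementation; the directory paths and file layout are invented for the
example:

    # Generic store-and-forward spool illustrating the sneakernet idea (hypothetical paths).
    import json, pathlib, shutil, time

    SPOOL = pathlib.Path("/var/spool/courier")        # local outbound queue (assumed)
    USB = pathlib.Path("/media/usbstick/courier")     # flash-drive mount point (assumed)

    def enqueue(recipient: str, body: str) -> None:
        # Queue an outbound message as a file while offline.
        SPOOL.mkdir(parents=True, exist_ok=True)
        msg = {"to": recipient, "body": body, "queued_at": time.time()}
        (SPOOL / f"{int(time.time() * 1000)}.json").write_text(json.dumps(msg))

    def ferry_to_usb() -> int:
        # Move the queued files onto the USB stick for physical transport.
        USB.mkdir(parents=True, exist_ok=True)
        moved = 0
        for item in sorted(SPOOL.glob("*.json")):
            shutil.move(str(item), USB / item.name)
            moved += 1
        return moved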

Line aggregation ("bonding")Aggregation is a method of "bonding" multiple lines to


achieve a faster, more reliable connection. Some companies[3] believe that ADSL
aggregation (or "bonding") is the solution to the UK's last mile problem[4].
----------------------

An example of "rack mounted" servers.A web hosting service is a type of Internet


hosting service that allows individuals and organizations to make their own website
accessible via the World Wide Web. Web hosts are companies that provide space on
a server they own or lease for use by their clients as well as providing Internet
connectivity, typically in a data center. Web hosts can also provide data center
space and connectivity to the Internet for servers they do not own to be located in
their data center, called colocation or Housing as it is commonly called in Latin
America or France.

The scope of hosting services varies widely. The most basic is web page and small-
scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a
Web interface. The files are usually delivered to the Web "as is" or with little
processing. Many Internet service providers (ISPs) offer this service free to their
subscribers. People can also obtain Web page hosting from other, alternative
service providers. Personal web site hosting is typically free, advertisement-
sponsored, or inexpensive. Business web site hosting often has a higher expense.

Single page hosting is generally sufficient only for personal web pages. A complex
site calls for a more comprehensive package that provides database support and
application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, and
ASP.NET). These facilities allow the customers to write or install scripts for
applications like forums and content management. For e-commerce, SSL is also
highly recommended.

The host may also provide an interface or control panel for managing the Web
server and installing scripts as well as other services like e-mail. Some hosts
specialize in certain software or services (e.g. e-commerce). They are commonly
used by larger companies to outsource network infrastructure to a hosting
company.


Hosting reliability and uptime

Multiple racks of servers.

Hosting uptime refers to the percentage of time the host is accessible via the
Internet. Many providers state that they aim for at least 99.9% uptime (roughly 43
minutes of downtime in a 30-day month, or less), but there may be server restarts and
planned (or unplanned) maintenance in any hosting environment, which may or may not be
counted against the official uptime promise.
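
How much downtime a given promise allows is simple arithmetic over the billing period;
the figures below use a 30-day month:

    # Downtime allowed per 30-day month for a few common uptime promises.
    MONTH_MINUTES = 30 * 24 * 60
    for uptime_pct in (99.0, 99.9, 99.99):
        allowed = (1 - uptime_pct / 100) * MONTH_MINUTES
        print(f"{uptime_pct:5.2f}% uptime -> up to {allowed:6.1f} minutes of downtime per month")
    # 99.9% works out to about 43 minutes in a 30-day month.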

Many providers tie uptime and accessibility into their own service level agreement
(SLA). SLAs sometimes include refunds or reduced costs if performance goals are
not met.

Types of hosting

A typical server "rack," commonly seen in colocation centres.

Internet hosting services can run Web servers; see Internet hosting services.

Many large companies that are not Internet service providers also need a computer
permanently connected to the web so they can send e-mail, files, etc. to other sites.
They may also use the computer as a website host to provide details of their goods and
services to anyone interested, who may then also place orders online.

Free web hosting service: offered by various companies with limited services,
sometimes supported by advertisements, and often restricted when compared to paid
hosting.
Shared web hosting service: one's website is placed on the same server as many
other sites, ranging from a few to hundreds or thousands. Typically, all domains
may share a common pool of server resources, such as RAM and the CPU. The
features available with this type of service can be quite extensive. A shared website
may be hosted with a reseller.
Reseller web hosting: allows clients to become web hosts themselves. Resellers
could function, for individual domains, under any combination of these listed types
of hosting, depending on who they are affiliated with as a reseller. Resellers'
accounts may vary tremendously in size: they may have anything from their own virtual
dedicated server to a colocated server. Many resellers provide a nearly identical
service to their provider's shared hosting plan and provide the technical support
themselves.

Virtual Dedicated Server: also known as a Virtual Private Server (VPS), divides
server resources into virtual servers, where resources can be allocated in a way that
does not directly reflect the underlying hardware. VPS will often be allocated
resources based on a one server to many VPSs relationship, however virtualisation
may be done for a number of reasons, including the ability to move a VPS container
between servers. The users may have root access to their own virtual space.
Customers are sometimes responsible for patching and maintaining the server.
Dedicated hosting service: the user gets his or her own Web server and gains full
control over it (root access for Linux/administrator access for Windows); however,
the user typically does not own the server. Another type of Dedicated hosting is
Self-Managed or Unmanaged. This is usually the least expensive for Dedicated
plans. The user has full administrative access to the box, which means the client is
responsible for the security and maintenance of his own dedicated box.
Managed hosting service: the user gets his or her own Web server but is not allowed
full control over it (root access for Linux/administrator access for Windows);
however, they are allowed to manage their data via FTP or other remote
management tools. The user is disallowed full control so that the provider can
guarantee quality of service by not allowing the user to modify the server or
potentially create configuration problems. The user typically does not own the
server. The server is leased to the client.
Colocation web hosting service: similar to the dedicated web hosting service,
but the user owns the colo server; the hosting company provides physical space
that the server takes up and takes care of the server. This is the most powerful and
expensive type of web hosting service. In most cases, the colocation provider may
provide little to no support directly for their client's machine, providing only the
electrical, Internet access, and storage facilities for the server. In most cases for
colo, the client would have his own administrator visit the data center on site to do
any hardware upgrades or changes.
Cloud hosting: a newer type of hosting platform that offers customers powerful,
scalable and reliable hosting based on clustered, load-balanced servers and utility
billing. It removes single points of failure and allows customers to pay only for
what they use rather than what they could use.
Clustered hosting: having multiple servers hosting the same content for better
resource utilization. Clustered Servers are a perfect solution for high-availability
dedicated hosting, or creating a scalable web hosting solution. A cluster may
separate web serving from database hosting capability.
Grid hosting: this form of distributed hosting is when a server cluster acts like a grid
and is composed of multiple nodes.
Home server: usually a single machine placed in a private residence, used to host one
or more web sites over a (usually consumer-grade) broadband connection. These can be
purpose-built machines or, more commonly, old PCs. Some ISPs actively attempt to block
home servers by disallowing incoming requests to TCP port 80 of the user's connection
and by refusing to provide static IP addresses. A common way to attain a reliable DNS
hostname is to create an account with a dynamic DNS service, which automatically
updates the IP address that a hostname points to whenever the address changes (a
minimal sketch of such an update client follows this list).
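
A dynamic DNS client simply watches the connection's public address and notifies the
DNS provider when it changes. The sketch below assumes a provider that accepts a plain
authenticated HTTP update request; the URLs, parameter names and token here are
placeholders, not any specific provider's API:

    # Minimal dynamic DNS update loop; endpoints and parameters are placeholders.
    import time, urllib.request

    CHECK_IP_URL = "https://example.com/my-ip"           # hypothetical "what is my IP" service
    UPDATE_URL = "https://dyndns.example.com/update"     # hypothetical provider update endpoint

    def current_public_ip() -> str:
        with urllib.request.urlopen(CHECK_IP_URL) as resp:
            return resp.read().decode().strip()

    def notify_provider(hostname: str, ip: str, token: str) -> None:
        url = f"{UPDATE_URL}?hostname={hostname}&ip={ip}&token={token}"
        urllib.request.urlopen(url).read()

    last_ip = None
    while True:
        ip = current_public_ip()
        if ip != last_ip:                    # only update when the address actually changes
            notify_provider("myhome.example.org", ip, token="SECRET")
            last_ip = ip
        time.sleep(300)                      # poll every five minutes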
Some specific types of hosting provided by web host service providers:

File hosting service: hosts files, not web pages
Image hosting service
Video hosting service
Blog hosting service
One-click hosting
Pastebin: hosts text snippets
Shopping cart software
E-mail hosting service
Obtaining hosting

Web hosting is often provided as part of a general Internet access plan; there are
many free and paid providers offering these services.
A customer needs to evaluate the requirements of the application to choose what kind
of hosting to use. Such considerations include database server software, scripting
software, and operating system. Most hosting providers offer Linux-based web hosting,
which supports a wide range of software. A typical configuration for a Linux server is
the LAMP platform: Linux, Apache, MySQL, and PHP/Perl/Python. The web hosting client
may want other services as well, such as e-mail for their business domain, databases,
or multimedia services for streaming media. A customer may also choose Windows as the
hosting platform, in which case the customer can still use PHP, Perl, and Python but
may also use ASP.NET or Classic ASP. Web hosting packages often include a Web content
management system, so the end user does not have to worry about the more technical
aspects.
