LAN stands for local area network: a network within a room, a building, or over a
similarly small distance. MAN stands for metropolitan area network; it covers
networking between offices within a city. WAN stands for wide area network; it
covers networking between computers in different cities, countries, or continents.
There are different standards and devices in computer networking. The most
commonly used standard for a local area network is Ethernet. Key devices in a
computer network include the hub, switch, router, modem, and access point. A router
is used to connect two logically and physically different networks; all
communication on the Internet relies on routers. A hub or switch is used to
connect the computers in a local area network.
Hopefully, this article has shown what a computer network is, how important
networks are in our lives, and what the different network devices, standards,
topologies, and communication types are.
Types of Computer Network
One way to categorize the different types of computer network designs is by their
scope or scale. For historical reasons, the networking industry refers to nearly every
type of design as some kind of area network. Common examples of area network
types are:
A WAN differs from a LAN in several important ways. Most WANs (like the Internet)
are not owned by any one organization but rather exist under collective or
distributed ownership and management. WANs tend to use technologies such as ATM,
Frame Relay and X.25 for connectivity over longer distances.
Computer network
History
In the 1960s, the Advanced Research Projects Agency (ARPA) started funding the
design of the Advanced Research Projects Agency Network (ARPANET) for the
United States Department of Defense. Development of the network began in 1969,
based on designs developed during the 1960s.[3] The ARPANET evolved into the
modern Internet.
Purpose
Network classification
Connection method
Ethernet, as defined by IEEE 802.3, utilizes various standards and media that
enable communication between devices. Frequently deployed devices include hubs,
switches, bridges, and routers. Wireless LAN technology is designed to connect
devices without wiring, using radio waves or infrared signals as a transmission
medium. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone
lines and power lines) to create a high-speed (up to 1 gigabit/s) local area
network.
Wired technologies
• Twisted-pair wire is the most widely used medium for telecommunication.
Twisted-pair cabling consists of copper wires twisted into pairs. Ordinary
telephone wires consist of two insulated copper wires twisted into a pair, while
computer networking cable consists of four pairs of copper wiring that can be
used for both voice and data transmission. Twisting two wires together helps
reduce crosstalk and electromagnetic induction. Transmission speeds range from
2 million to 100 million bits per second. Twisted-pair cabling comes in two
forms, unshielded twisted pair (UTP) and shielded twisted pair (STP), each rated
in categories manufactured in different increments for various scenarios.
• Coaxial cable is widely used for cable television systems, office buildings, and
other worksites for local area networks. The cables consist of copper or
aluminum wire wrapped in an insulating layer, typically of a flexible material
with a high dielectric constant, all of which is surrounded by a conductive
layer. The layers of insulation help minimize interference and distortion.
Transmission speeds range from 200 million to more than 500 million bits per
second.
• Optical fiber cable consists of one or more filaments of glass fiber wrapped in
protective layers. It transmits light which can travel over extended distances.
Fiber-optic cables are not affected by electromagnetic radiation.
Transmission speed may reach trillions of bits per second. The transmission
speed of fiber optics is hundreds of times faster than for coaxial cables and
thousands of times faster than a twisted-pair wire.[citation needed]
Wireless technologies
Scale
Networks are often classified as local area network (LAN), wide area network (WAN),
metropolitan area network (MAN), personal area network (PAN), virtual private
network (VPN), campus area network (CAN), storage area network (SAN),
controller area network (CAN), and others, depending on their scale, scope, and
purpose. Usage, trust level, and access rights often differ between these types
of networks. LANs tend to be designed for internal use by an organization's own
systems and employees in individual physical locations, such as a building, while
WANs may connect physically separate parts of an organization and may include
connections to third parties.
Network topology
A local area network (LAN) is a network that connects computers and devices in a
limited geographical area such as home, school, computer laboratory, office
building, or closely positioned group of buildings. Each computer or device on the
network is a node. Current wired LANs are most likely to be based on Ethernet
technology, although new standards like ITU-T G.hn also provide a way to create a
wired LAN using existing home wires (coaxial cables, phone lines and power lines).[4]
A typical library network, in a branching tree topology with controlled access to
resources, illustrates this. All interconnected devices must understand the
network layer (layer 3), because they handle multiple subnets. The devices inside
the library, which have only 10/100 Mbit/s Ethernet connections to the user
devices and a Gigabit Ethernet connection to the central router, could be called
"layer 3 switches" because they have only Ethernet interfaces yet must understand
IP. It would be more correct to call them access routers, where the router at the
top is a distribution router that connects to the Internet and to academic
networks' customer access routers.
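The layer-3 decision described above (whether a destination lies on the same subnet or must be handed to a router) can be illustrated with Python's standard `ipaddress` module; the addresses and subnets below are made-up examples, not taken from any real network:

```python
import ipaddress

# Two hypothetical subnets inside the library network
user_lan = ipaddress.ip_network("192.168.10.0/24")
server_lan = ipaddress.ip_network("192.168.20.0/24")

host = ipaddress.ip_address("192.168.10.42")
server = ipaddress.ip_address("192.168.20.5")

# A layer-3 device checks subnet membership to decide whether
# to deliver the frame locally or route it to another subnet.
print(host in user_lan)    # True  -> deliver locally
print(server in user_lan)  # False -> forward to the router
```

The same membership test is, conceptually, what a router performs against each entry of its routing table.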
A home area network (HAN) is a residential LAN which is used for communication
between digital devices typically deployed in the home, usually a small number of
personal computers and accessories, such as printers and mobile computing
devices. An important function is the sharing of Internet access, often a broadband
service through a CATV or Digital Subscriber Line (DSL) provider. It can also be
referred to as an office area network (OAN).
A wide area network (WAN) is a computer network that covers a large geographic
area such as a city, country, or spans even intercontinental distances, using a
communications channel that combines many types of media such as telephone
lines, cables, and air waves. A WAN often uses transmission facilities provided by
common carriers, such as telephone companies. WAN technologies generally
function at the lower three layers of the OSI reference model: the physical layer, the
data link layer, and the network layer.
Campus network
A campus network is a computer network made up of an interconnection of local
area networks (LANs) within a limited geographical area. The networking
equipment (switches, routers) and transmission media (optical fiber, copper plant,
Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner: an
enterprise, university, government, etc.
A Metropolitan area network is a large computer network that usually spans a city
or a large campus.
A virtual private network (VPN) is a computer network in which some of the links
between nodes are carried by open connections or virtual circuits in some larger
network (e.g., the Internet) instead of by physical wires. The data link layer
protocols of the virtual network are said to be tunneled through the larger network
when this is the case. One common application is secure communications through
the public Internet, but a VPN need not have explicit security features, such as
authentication or content encryption. VPNs, for example, can be used to separate
the traffic of different user communities over an underlying network with strong
security features.
A VPN may have best-effort performance, or it may have a defined service level
agreement (SLA) between the VPN customer and the VPN service provider.
Generally, a VPN has a topology more complex than point-to-point.
Internetwork
Backbone network
A large corporation that has many locations may have a backbone network that ties
all of the locations together, for example, if a server cluster needs to be accessed
by different departments of a company located at different geographical
locations. The pieces of the network connections (for example, Ethernet or
wireless) that bring these departments together are often referred to as the
network backbone. Network congestion is often taken into consideration while
designing backbones.
Internet
An intranet is a set of networks, using the Internet Protocol and IP-based tools such
as web browsers and file transfer applications, that is under the control of a single
administrative entity. That administrative entity closes the intranet to all but
specific, authorized users. Most commonly, an intranet is the internal network of an
organization. A large intranet will typically have at least one web server to provide
users with organizational information.
Overlay network
An overlay network is a virtual network built on top of another network. For
example, many peer-to-peer networks are overlay networks because they are
organized as nodes of a virtual system of links that runs on top of the Internet.
The Internet itself was initially built as an overlay on the telephone network.[8]
Overlay networks have been around since the invention of networking, when
computer systems were connected over telephone lines using modems, before any
data network existed.
Nowadays the Internet is the basis for many overlaid networks that can be
constructed to permit routing of messages to destinations specified by an IP
address. For example, distributed hash tables can be used to route messages to a
node having a specific logical address, whose IP address is known in advance.
Overlay networks have also been proposed as a way to improve Internet routing,
such as through quality of service guarantees to achieve higher-quality streaming
media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen
wide acceptance largely because they require modification of all routers in the
network.[citation needed] On the other hand, an overlay network can be incrementally
deployed on end-hosts running the overlay protocol software, without cooperation
from Internet service providers. The overlay has no control over how packets are
routed in the underlying network between two overlay nodes, but it can control, for
example, the sequence of overlay nodes a message traverses before reaching its
destination.
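The idea of routing by logical address, as with distributed hash tables, can be sketched with a toy consistent-hashing ring. The node names and the hash choice are illustrative only and do not correspond to any particular DHT protocol:

```python
import hashlib
from bisect import bisect_right

def ring_position(key: str) -> int:
    """Map a name to a point on a 32-bit hash ring."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % (2 ** 32)

# Hypothetical overlay nodes, each responsible for a segment of the ring.
nodes = {"node-a", "node-b", "node-c"}
ring = sorted((ring_position(n), n) for n in nodes)

def responsible_node(key: str) -> str:
    """Route a message key to the first node clockwise on the ring."""
    pos = ring_position(key)
    idx = bisect_right([p for p, _ in ring], pos) % len(ring)
    return ring[idx][1]

# The overlay chooses the logical destination; the underlying IP
# network still decides the actual packet path, hop by hop.
print(responsible_node("some-message-id"))
```

Note how the overlay determines only which node a message logically belongs to; it says nothing about the physical route packets take, matching the limitation described in the paragraph above.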
All networks are made up of basic hardware building blocks that interconnect
network nodes, such as network interface cards (NICs), bridges, hubs, switches, and
routers. In addition, some method of connecting these building blocks is required,
usually in the form of galvanic cable (most commonly Category 5 cable). Less
common are microwave links (as in IEEE 802.11 wireless networks) or optical cable
("optical fiber"). Each network interface card has a unique identifier, its MAC
address, which is stored on a chip mounted on the card.
Repeaters
Hubs
A network hub contains multiple ports. When a packet arrives at one port, it is
copied unmodified to all other ports of the hub for transmission. The destination
address in the frame is not changed to a broadcast address.[9] A hub works at the
physical layer (layer 1) of the OSI model.
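The flooding behaviour just described can be sketched in a few lines of Python; the port numbers and frame contents are illustrative:

```python
def hub_forward(incoming_port: int, frame: bytes, num_ports: int) -> dict:
    """A hub copies the frame, unmodified, to every port except the
    one it arrived on; it never inspects addresses in the frame."""
    return {port: frame for port in range(num_ports) if port != incoming_port}

# A frame arriving on port 0 of a 4-port hub is repeated to ports 1-3.
out = hub_forward(incoming_port=0, frame=b"hello", num_ports=4)
print(sorted(out))  # [1, 2, 3]
```

This is why every port on a hub shares one collision domain: all stations see all traffic.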
Bridges
A network bridge connects multiple network segments at the data link layer (layer
2) of the OSI model. Bridges broadcast to all ports except the port on which the
broadcast was received. However, bridges do not promiscuously copy traffic to all
ports, as hubs do, but learn which MAC addresses are reachable through specific
ports. Once the bridge associates a port and an address, it will send traffic for that
address to that port only.
Bridges learn the association of ports and addresses by examining the source
addresses of the frames they see on various ports. When a frame arrives through a
port, its source address is stored, and the bridge assumes that MAC address is
associated with that port. The first time a previously unknown destination address
is seen, the bridge forwards the frame to all ports other than the one on which
the frame arrived.
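The learning behaviour described above can be sketched as a small forwarding table; the MAC addresses and port numbers here are made up for illustration:

```python
class LearningBridge:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table = {}  # source MAC -> port it was last seen on

    def handle_frame(self, in_port: int, src_mac: str, dst_mac: str):
        # Learn: associate the source address with the arrival port.
        self.table[src_mac] = in_port
        # Forward: to the known port, or flood to all other ports.
        if dst_mac in self.table:
            return [self.table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(num_ports=4)
# First frame: destination unknown, so it is flooded to ports 1-3.
print(bridge.handle_frame(0, "aa:aa", "bb:bb"))  # [1, 2, 3]
# The reply teaches the bridge where bb:bb lives; traffic for
# aa:aa now goes only to port 0, where it was first seen.
print(bridge.handle_frame(2, "bb:bb", "aa:aa"))  # [0]
```

A real bridge also ages out stale entries so that a relocated device is relearned; that refinement is omitted here for brevity.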
Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams
(frames) between ports (connected cables) based on the MAC addresses in the
packets.[10] A switch is distinct from a hub in that it only forwards frames to the
ports involved in the communication rather than to all ports connected. A switch
breaks up the collision domain but itself represents a single broadcast domain.
Switches make forwarding decisions on the basis of MAC addresses. A switch
normally has numerous ports, facilitating a star topology for devices and the
cascading of additional switches.[11] Some switches are capable of routing based
on layer 3 addressing or additional logical levels; these are called multi-layer
switches. The term switch is used loosely in marketing to encompass devices
including routers and bridges, as well as devices that may distribute traffic on
load or by application content (e.g., a Web URL identifier).
Routers
Firewalls
Firewalls are among the most important components of a network with respect to
security. A firewalled system does not need every interaction or data transfer to
be monitored by a human, as automated processes can be set up to reject access
requests from unsafe sources and allow actions from recognized ones. The vital
role firewalls play in network security grows in parallel with the constant
increase in cyber attacks aimed at stealing or corrupting data, planting viruses,
and so on.
This page displays the main pieces of the Internet from a user's PC, extending all
the way through to the online content. Each section mentions the most significant
parts of the Internet's architecture. I also provide links to the top "couple of
vendors" in each category, and then an external link to a more extensive list of
vendors.
In creating this one web page to describe the "entire Internet", I split the diagram
based on the function being performed. I recognize that a company may perform
several of these functions. I've included several "leading edge" components as
well, such as LMDS for the local loop (This page is intended to be forward-looking). I
also recognize that there are many additional details that could be added to this
page, but I am trying to adhere to a 90/10 rule. If this page identifies 90% of the
mainstream pieces and players, that should be sufficient to convey "the big
picture". (The remaining 10% details would probably triple the size & complexity of
this one meta-diagram.) I welcome any comments you have to improve this page -
especially if I've omitted anything significant. Russ Haynal.
• User PC - Multi-Media PCs equipped to send and receive all variety of audio
and video
• User Communication Equipment - Connects the Users' PC(s) to the "Local
Loop"
• Local Loop Carrier - Connects the User location to the ISP's Point of Presence
• ISP's POP - Connections from the user are accepted and authenticated here.
• User Services - Used by the User for access (DNS, e-mail, etc.).
• ISP Backbone - Interconnects the ISP's POPs, AND interconnects the ISP to
Other ISP's and online content.
• Online Content - These are the host sites that the user interacts with.
• Origins Of Online Content - This is the original "real-world" sources for the
online information.
Internet Architecture
Fortunately, nobody owns the Internet, there is no centralized control, and nobody
can turn it off. Its evolution depends on rough consensus about technical proposals,
and on running code. Engineering feed-back from real implementations is more
important than any architectural principles.
The Internet's architecture is described in its name, a short form of the compound
word "inter-networking". This architecture is based on the very specification of
the standard TCP/IP protocol, designed to connect any two networks which may be
very different in internal hardware, software, and technical design. Once two
networks are interconnected, communication with TCP/IP is enabled end-to-end, so
that any node on the Internet has the near magical ability to communicate with any
other, no matter where they are. This openness of design has enabled the Internet
architecture to grow to a global scale.
In practice, the Internet technical architecture looks a bit like a multi-dimensional
river system, with small tributaries feeding medium-sized streams feeding large
rivers. For example, an individual's access to the Internet is often from home over a
modem to a local Internet service provider who connects to a regional network
connected to a national network. At the office, a desktop computer might be
connected to a local area network with a company connection to a corporate
Intranet connected to several national Internet service providers. In general, small
local Internet service providers connect to medium-sized regional networks which
connect to large national networks, which then connect to very large bandwidth
networks on the Internet backbone. Most Internet service providers have several
redundant network cross-connections to other providers in order to ensure
continuous availability.
The companies running the Internet backbone operate very high bandwidth
networks relied on by governments, corporations, large organizations, and other
Internet service providers. Their technical infrastructure often includes global
connections through underwater cables and satellite links to enable communication
between countries and continents. As always, a larger scale introduces new
phenomena: the number of packets flowing through the switches on the backbone
is so large that it exhibits the kind of complex non-linear patterns usually found in
natural, analog systems like the flow of water or development of the rings of Saturn
(RFC 3439, S2.2).
IP address
An identifier for a computer or device on a TCP/IP network. Networks using the
TCP/IP protocol route messages based on the IP address of the destination. The
format of an IP address is a 32-bit numeric address written as four numbers
separated by periods. Each number can be zero to 255. For example, 1.160.10.240
could be an IP address.
Within an isolated network, you can assign IP addresses at random as long as each
one is unique. However, connecting a private network to the Internet requires using
registered IP addresses (called Internet addresses) to avoid duplicates.
The four numbers in an IP address are used in different ways to identify a
particular network and a host on that network. Four regional Internet registries
-- ARIN, RIPE NCC, LACNIC and APNIC -- assign Internet addresses, which were
traditionally divided into classes A, B, and C.
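The 32-bit structure described above can be demonstrated with Python's standard `socket` and `struct` modules, converting the dotted-quad form (here, the example address from the text) to its underlying integer and back:

```python
import socket
import struct

def ip_to_int(dotted: str) -> int:
    """Pack the four dotted-quad bytes into one 32-bit integer."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

def int_to_ip(value: int) -> str:
    """Unpack a 32-bit integer back into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("!I", value))

n = ip_to_int("1.160.10.240")
print(n)             # the single 32-bit number behind the address
print(int_to_ip(n))  # "1.160.10.240"
```

Because each of the four numbers occupies exactly one byte, it can range only from 0 to 255, exactly as the definition states.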
ISP
Short for Internet Service Provider, it refers to a company that provides Internet
services, including personal and business access to the Internet. For a monthly fee,
the service provider usually provides a software package, username, password and
access phone number. Equipped with a modem, you can then log on to the Internet
and browse the World Wide Web and USENET, and send and receive e-mail. For
broadband access you typically receive the broadband modem hardware or pay a
monthly fee for this equipment that is added to your ISP account billing.
URL
Abbreviation of Uniform Resource Locator, the global address of documents and
other resources on the World Wide Web.
The first part of the address is called a protocol identifier and it indicates what
protocol to use, and the second part is called a resource name and it specifies the IP
address or the domain name where the resource is located. The protocol identifier
and the resource name are separated by a colon and two forward slashes.
For example, the two URLs below point to two different files at the domain
pcwebopedia.com. The first specifies an executable file that should be fetched using
the FTP protocol; the second specifies a Web page that should be fetched using the
HTTP protocol:
• ftp://www.pcwebopedia.com/stuff.exe
• http://www.pcwebopedia.com/index.html
Domain Name
Domain names are used to identify one or more IP addresses. For example, the
domain name microsoft.com represents about a dozen IP addresses. Domain names
are used in URLs to identify particular Web pages. For example, in the URL
http://www.pcwebopedia.com/index.html, the domain name is pcwebopedia.com.
Every domain name has a suffix that indicates which top-level domain (TLD) it
belongs to. There are only a limited number of such domains; examples include
.com (commercial), .org (organizations), .edu (education), and .gov (U.S.
government).
Browser
Short for Web browser, a software application used to locate and display Web
pages. The two most popular browsers are Microsoft Internet Explorer and Firefox.
Both of these are graphical browsers, which means that they can display graphics
as well as text. In addition, most modern browsers can present multimedia
information, including sound and video, though they require plug-ins for some
formats.
Protocol
An agreed-upon format for transmitting data between two devices. The protocol
determines, among other things, the type of error checking used, whether data
compression is applied, and how the devices signal the start and end of a message.
From a user's point of view, the only interesting aspect about protocols is that your
computer or device must support the right ones if you want to communicate with
other computers. The protocol can be implemented either in hardware or in
software.
Search engine
A program that searches documents for specified keywords and returns a list of the
documents where the keywords were found. Although search engine is really a
general class of programs, the term is often used to specifically describe systems
like Google, AltaVista and Excite that enable users to search for documents on the
World Wide Web and USENET newsgroups.
e-mail
Short for electronic mail, the transmission of messages over communications
networks. The messages can be notes entered from the keyboard or electronic files
stored on disk. Most mainframes, minicomputers, and computer networks have an
e-mail system. Some electronic-mail systems are confined to a single computer
system or network, but others have gateways to other computer systems, enabling
users to send electronic mail anywhere in the world. Companies that are fully
computerized make extensive use of e-mail because it is fast, flexible, and reliable.
Most e-mail systems include a rudimentary text editor for composing messages, but
many allow you to edit your messages using any editor you want. You then send the
message to the recipient by specifying the recipient's address. You can also send
the same message to several users at once. This is called broadcasting.
Sent messages are stored in electronic mailboxes until the recipient fetches them.
To see if you have any mail, you may have to check your electronic mailbox
periodically, although many systems alert you when mail is received. After reading
your mail, you can store it in a text file, forward it to other users, or delete it. Copies
of memos can be printed out on a printer if you want a paper copy.
All online services and Internet Service Providers (ISPs) offer e-mail, and most also
support gateways so that you can exchange mail with users of other systems.
Usually, it takes only a few seconds or minutes for mail to arrive at its destination.
This is a particularly effective way to communicate with a group because you can
broadcast a message or document to everyone in the group at once.
Although different e-mail systems use different formats, there are some emerging
standards that are making it possible for users on all systems to exchange
messages. In the PC world, an important e-mail standard is MAPI. The CCITT
standards organization has developed the X.400 standard, which attempts to
provide a universal way of addressing messages. To date, though, the de facto
addressing standard is the one used by the Internet system because almost all e-
mail systems have an Internet gateway.
FTP
Short for File Transfer Protocol, the protocol for exchanging files over the Internet.
Like HTTP, which transfers Web pages from a server to a user's browser, and SMTP,
which transfers electronic mail across the Internet, FTP uses the Internet's
TCP/IP protocols to enable data transfer.
FTP is most commonly used to download a file from a server using the Internet or to
upload a file to a server (e.g., uploading a Web page file to a server).
Telnet
(tel´net) (n.) A terminal emulation program for TCP/IP networks such as the Internet.
The Telnet program runs on your computer and connects your PC to a server on the
network. You can then enter commands through the Telnet program and they will
be executed as if you were entering them directly on the server console. This
enables you to control the server and communicate with other servers on the
network. To start a Telnet session, you must log in to a server by entering a valid
username and password. Telnet is a common way to remotely control Web servers.
gopher
A system that pre-dates the World Wide Web for organizing and displaying files on
Internet servers. A Gopher server presents its contents as a hierarchically
structured list of files. With the ascendance of the Web, many gopher databases
were converted to Web sites which can be more easily accessed via Web search
engines.
Gopher was developed at the University of Minnesota and named after the school's
mascot. Two systems, Veronica and Jughead, let you search global indices of
resources stored in Gopher systems.
WWW
The World Wide Web, abbreviated as WWW and commonly known as the Web, is a
system of interlinked hypertext documents accessed via the Internet. With a web
browser, one can view web pages that may contain text, images, videos, and other
multimedia and navigate between them via hyperlinks. Using concepts from earlier
hypertext systems, English engineer and computer scientist Sir Tim Berners-Lee,
now the Director of the World Wide Web Consortium, wrote a proposal in March
1989 for what would eventually become the World Wide Web.[1] At CERN in
Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau
proposed in 1990 to use "HyperText ... to link and access information of various
kinds as a web of nodes in which the user can browse at will",[2] and publicly
introduced the project in December.[3]
"The World-Wide Web (W3) was developed to be a pool of human knowledge, and
human culture, which would allow collaborators in remote sites to share their ideas
and all aspects of a common project."[4]
With help from Robert Cailliau, he published a more formal proposal (on November
12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also
"W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a
client–server architecture.[2] This proposal estimated that a read-only web would be
developed within three months and that it would take six months to achieve "the
creation of new links and new material by readers, [so that] authorship becomes
universal" as well as "the automatic notification of a reader when new material of
interest to him/her has become available." See Web 2.0 and RSS/Atom, which have
taken a little longer to mature.
The proposal was modeled after the Dynatext SGML reader by Electronic Book
Technology, a spin-off from the Institute for Research in Information and Scholarship
at Brown University. The Dynatext system, licensed by CERN, was technically
advanced and was a key player in the extension of SGML ISO 8879:1986 to
Hypermedia within HyTime, but it was considered too expensive and had an
inappropriate licensing policy for use in the general high energy physics community,
namely a fee for each document and each document alteration.
[Photos: the NeXT Computer used by Tim Berners-Lee at CERN, which became the first
web server; the CERN datacenter in 2010, housing some WWW servers.]
A NeXT Computer was
used by Berners-Lee as the world's first web server and also to write the first web
browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the
tools necessary for a working Web:[7] the first web browser (which was a web editor
as well); the first web server; and the first web pages,[8] which described the
project itself. On August 6, 1991, he posted a short summary of the World Wide
Web project on the alt.hypertext newsgroup.[9] This date also marked the debut of
the Web as a publicly available service on the Internet. The first photo on the web
was uploaded by Berners-Lee in 1992, an image of the CERN house band Les
Horribles Cernettes.
The Web as a "side effect" of 40 years of particle physics experiments: it has
happened many times in the history of science that the most impressive results of
large-scale scientific efforts appeared far away from the main directions of those
efforts... After World War II, the nuclear centers of almost all developed
countries became the places with the highest concentration of talented scientists.
For about four decades, many of them were invited to the international CERN
laboratories, and a specific kind of CERN intellectual culture was passed from one
generation of scientists and engineers to another. When the concentration of human
talent per square foot of CERN's labs reached critical mass, it caused an
intellectual explosion: the Web, a crucial point in human history, was born...
Nothing can be compared to it... We cannot yet imagine the real scale of this
recent upheaval, because there have never been such fast-growing,
multi-dimensional socio-economic processes in human history... [10]
The first server outside Europe was set up at SLAC to host the SPIRES-HEP
database. Accounts differ substantially as to the date of this event: the World
Wide Web Consortium says December 1992,[11] whereas SLAC itself claims 1991.[12]
[13] The 1991 date is supported by a W3C document entitled A Little History of the
World Wide Web.[14]
The crucial underlying concept of hypertext originated with older projects from the
1960s, such as the Hypertext Editing System (HES) at Brown University, Ted
Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both
Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based
"memex", which was described in the 1945 essay "As We May Think".[citation
needed]
The World Wide Web had a number of differences from other hypertext systems
that were then available. The Web required only unidirectional links rather than
bidirectional ones. This made it possible for someone to link to another resource
without action by the owner of that resource. It also significantly reduced the
difficulty of implementing web servers and browsers (in comparison to earlier
systems), but in turn presented the chronic problem of link rot. Unlike predecessors
such as HyperCard, the World Wide Web was non-proprietary, making it possible to
develop servers and clients independently and to add extensions without licensing
restrictions. On April 30, 1993, CERN announced[16] that the World Wide Web
would be free to anyone, with no fees due. Coming two months after the
announcement that the Gopher protocol was no longer free to use, this produced a
rapid shift away from Gopher and towards the Web. An early popular web browser
was ViolaWWW, which was based upon HyperCard.
Scholars generally agree that a turning point for the World Wide Web began with
the introduction[17] of the Mosaic web browser[18] in 1993, a graphical browser
developed by a team at the National Center for Supercomputing Applications at the
University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen.
Funding for Mosaic came from the U.S. High-Performance Computing and
Communications Initiative, a funding program initiated by the High Performance
Computing and Communication Act of 1991, one of several computing
developments initiated by U.S. Senator Al Gore.[19] Prior to the release of Mosaic,
graphics were not commonly mixed with text in web pages, and the Web was less
popular than older protocols in use over the Internet, such as Gopher
and Wide Area Information Servers (WAIS). Mosaic's graphical user interface
allowed the Web to become, by far, the most popular Internet protocol.
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he
left the European Organization for Nuclear Research (CERN) in October 1994. It was
founded at the Massachusetts Institute of Technology Laboratory for Computer
Science (MIT/LCS) with support from the Defense Advanced Research Projects
Agency (DARPA), which had pioneered the Internet; a year later, a second site was
founded at INRIA (a French national computer research lab) with support from the
European Commission DG InfSo; and in 1996, a third continental site was created in
Japan at Keio University. By the end of 1994, while the total number of websites was
still minute compared to present standards, quite a number of notable websites
were already active, many of which are the precursors or inspiration for today's
most popular services.
Connected by the existing Internet, other websites were created around the world,
adding international standards for domain names and HTML. Since then, Berners-
Lee has played an active role in guiding the development of web standards (such as
the markup languages in which web pages are composed), and in recent years has
advocated his vision of a Semantic Web. The World Wide Web enabled the spread of
information over the Internet through an easy-to-use and flexible format. It thus
played an important role in popularizing use of the Internet.[20] Although the two
terms are sometimes conflated in popular use, World Wide Web is not synonymous
with Internet.[21] The Web is an application built on top of the Internet.
Function
The terms Internet and World Wide Web are often used in everyday speech
without much distinction. However, the Internet and the World Wide Web
are not one and the same. The Internet is a global system of interconnected
computer networks. In contrast, the Web is one of the services that runs on the
Internet. It is a collection of interconnected documents and other resources, linked
by hyperlinks and URLs. In short, the Web is an application running on the Internet.
[22] Viewing a web page on the World Wide Web normally begins either by typing
the URL of the page into a web browser, or by following a hyperlink to that page or
resource. The web browser then initiates a series of communication messages,
behind the scenes, in order to fetch and display it.
First, the server-name portion of the URL is resolved into an IP address using the
global, distributed Internet database known as the Domain Name System (DNS).
This IP address is necessary to contact the Web server. The browser then requests
the resource by sending an HTTP request to the Web server at that particular
address. In the case of a typical web page, the HTML text of the page is requested
first and parsed immediately by the web browser, which then makes additional
requests for images and any other files that complete the page image. Statistics
measuring a website's popularity are usually based either on the number of page
views or associated server 'hits' (file requests) that take place.
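The first steps of this sequence can be sketched with Python's standard library. The URL, hostname, and path below are purely illustrative:

```python
from urllib.parse import urlparse

# A browser first splits the URL into its parts: the server name
# (to be resolved via DNS into an IP address) and the path of the
# resource to request from that server.
url = "http://www.example.com/index.html"
parts = urlparse(url)
print(parts.hostname)  # the part DNS resolves to an IP address
print(parts.path)      # the resource the browser will ask for

# The HTTP request line the browser then sends to the resolved address:
request = f"GET {parts.path} HTTP/1.1\r\nHost: {parts.hostname}\r\n\r\n"
print(request)
```

After this request is sent, the returned HTML is parsed and further requests are issued for the images and other files it references.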
While receiving these files from the web server, browsers may progressively render
the page onto the screen as specified by its HTML, Cascading Style Sheets (CSS), or
other page composition languages. Any images and other resources are
incorporated to produce the on-screen web page that the user sees. Most web
pages contain hyperlinks to other related pages and perhaps to downloadable files,
source documents, definitions and other web resources. Such a collection of useful,
related resources, interconnected via hypertext links is dubbed a web of
information. Publication on the Internet created what Tim Berners-Lee first called
the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in
November 1990.[2]
Linking
[Figure: graphic representation of a minute fraction of the WWW, demonstrating
hyperlinks]
Over time, many web resources pointed to by hyperlinks disappear, relocate, or
are replaced with different content. This makes hyperlinks obsolete, a
phenomenon referred to in some circles as link rot and the hyperlinks affected by it
are often called dead links. The ephemeral nature of the Web has prompted many
efforts to archive web sites. The Internet Archive, active since 1996, is one of the
best-known efforts.
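A minimal link-rot audit can be sketched as follows. The function names are invented for illustration, and a real checker would also need redirect handling and politeness delays:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify(status):
    """Interpret an HTTP status code for link-rot auditing."""
    if status >= 400:
        return "dead"     # link rot: the target no longer exists
    if status >= 300:
        return "moved"    # the resource has relocated
    return "alive"

def check_link(url, timeout=5):
    """Issue a HEAD request and classify the result (needs network access)."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as r:
            return classify(r.status)
    except HTTPError as e:
        return classify(e.code)
    except URLError:
        return "dead"     # DNS failure or connection refused
```

Archiving efforts like the Internet Archive address the same problem from the other direction, by preserving a copy before the link dies.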
WWW prefix
Many domain names used for the World Wide Web begin with www
because of the long-standing practice of naming Internet hosts (servers) according
to the services they provide. The hostname for a web server is often www, in the
same way that it may be ftp for an FTP server, and news or nntp for a USENET news
server. These host names appear as Domain Name System (DNS) subdomain
names, as in www.example.com. The use of 'www' as a subdomain name is not
required by any technical or policy standard; indeed, the first ever web server was
called nxoc01.cern.ch,[25] and many web sites exist without it. Many established
websites still use 'www', or they invent other subdomain names such as 'www2',
'secure', etc. Many such web servers are set up such that both the domain root
(e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the
same site; others require one form or the other, or they may map to different web
sites.
The use of a subdomain name is useful for load balancing incoming web traffic by
creating a CNAME record that points to a cluster of web servers. Since only a
subdomain name, and not the bare domain root, can currently carry a CNAME
record, the same result cannot be achieved by using the bare domain root.
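As an illustration of this arrangement, a hypothetical DNS zone fragment (all names and addresses are invented, using documentation-reserved ranges) might look like:

```
example.com.      IN  A      192.0.2.10                ; bare root must use A records
example.com.      IN  A      192.0.2.11
www.example.com.  IN  CNAME  webcluster.example.net.   ; subdomain may alias a cluster
```

Here `www` can be repointed at a different server cluster by changing one CNAME, while the root's addresses must be edited individually.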
When a user submits an incomplete website address to a web browser in its address
bar input field, some web browsers automatically try adding the prefix "www" to the
beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what
might be missing. For example, entering 'microsoft' may be transformed to
http://www.microsoft.com/ and 'openoffice' to http://www.openoffice.org. This
feature started appearing in early versions of Mozilla Firefox, when it still had the
working title 'Firebird' in early 2003.[26] It is reported that Microsoft was granted a
US patent for the same idea in 2008, but only for mobile devices.[27]
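The guessing behaviour described above can be sketched as a toy function. This is an illustration of the idea, not any browser's actual algorithm; real browsers try several suffixes, whereas this sketch tries only ".com":

```python
def complete_address(fragment):
    """Guess a full URL from an incomplete address-bar entry (a sketch)."""
    host = fragment
    if "." not in host:              # no TLD given: try a common one
        host = f"www.{host}.com"
    elif not host.startswith("www."):
        host = f"www.{host}"
    return f"http://{host}/"

print(complete_address("microsoft"))       # http://www.microsoft.com/
print(complete_address("openoffice.org"))  # http://www.openoffice.org/
```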
The scheme specifier (http:// or https://) in URIs refers to the Hypertext Transfer
Protocol and to HTTP Secure respectively and so defines the communication
protocol to be used for the request and response. The HTTP protocol is fundamental
to the operation of the World Wide Web, and the encryption involved in HTTPS adds
an essential layer if confidential information such as passwords or banking
information are to be exchanged over the public Internet. Web browsers usually
prepend the scheme to URLs too, if omitted.
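That prepending step can be sketched as a toy helper (not a real browser API; the default scheme here is an assumption):

```python
from urllib.parse import urlparse

def ensure_scheme(url, default="http"):
    """Prepend a scheme if the user omitted one, as browsers typically do."""
    if not urlparse(url).scheme:
        return f"{default}://{url}"
    return url

print(ensure_scheme("www.example.com"))       # http://www.example.com
print(ensure_scheme("https://bank.example"))  # left unchanged
```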
Privacy
Computer users, who save time and money, and who gain conveniences
and entertainment, may or may not have surrendered the right to privacy in
exchange for using a number of technologies including the Web.[30] Worldwide,
more than a half billion people have used a social network service,[31] and of
Americans who grew up with the Web, half created an online profile[32] and are
part of a generational shift that could be changing norms.[33][34] Facebook
progressed from U.S. college students to a 70% non-U.S. audience, and in 2009
estimated that only 20% of its members use privacy settings.[35] In 2010 (six years
after co-founding the company), Mark Zuckerberg wrote, "we will add privacy
controls that are much simpler to use".[36]
Privacy representatives from 60 countries have resolved to ask for laws to
complement industry self-regulation, for education for children and other minors
who use the Web, and for default protections for users of social networks.[37] They
also believe data protection for personally identifiable information benefits business
more than the sale of that information.[37] Users can opt-in to features in browsers
to clear their personal histories locally and block some cookies and advertising
networks[38] but they are still tracked in websites' server logs, and particularly web
beacons.[39] Berners-Lee and colleagues see hope in accountability and
appropriate use achieved by extending the Web's architecture to policy awareness,
perhaps with audit logging, reasoners and appliances.[40]
In exchange for providing free content, vendors hire advertisers who spy on Web
users and base their business model on tracking them.[41] Since 2009, they buy
and sell consumer data on exchanges (lacking a few details that could make it
possible to de-anonymize, or identify an individual).[42][41] Hundreds of millions of
times per day, Lotame Solutions captures what users are typing in real time, and
sends that text to OpenAmplify who then tries to determine, to quote a writer at The
Wall Street Journal, "what topics are being discussed, how the author feels about
those topics, and what the person is going to do about them".[43][44]
Microsoft backed away in 2008 from its plans for strong privacy features in Internet
Explorer,[45] leaving its users (50% of the world's Web users) open to advertisers
who may make assumptions about them based on only one click when they visit a
website.[46] Among services paid for by advertising, Yahoo! could collect the most
data about users of commercial websites, about 2,500 bits of information per month
about each typical user of its site and its affiliated advertising network sites. Yahoo!
was followed by MySpace with about half that potential and then by AOL–
TimeWarner, Google, Facebook, Microsoft, and eBay.[47]
Security
The Web has become criminals' preferred pathway for spreading malware.
Cybercrime carried out on the Web can include identity theft, fraud, espionage and
intelligence gathering.[48] Web-based vulnerabilities now outnumber traditional
computer security concerns,[49][50] and as measured by Google, about one in ten
web pages may contain malicious code.[51] Most Web-based attacks take place on
legitimate websites, and most, as measured by Sophos, are hosted in the United
States, China and Russia.[52] The most common of all malware threats are SQL
injection attacks against websites.[53] Through HTML and URIs the Web was
vulnerable to attacks like cross-site scripting (XSS) that came with the introduction
of JavaScript[54] and were exacerbated to some degree by Web 2.0 and Ajax web
design that favors the use of scripts.[55] Today by one estimate, 70% of all
websites are open to XSS attacks on their users.[56]
Proposed solutions vary to extremes. Large security vendors like McAfee already
design governance and compliance suites to meet post-9/11 regulations,[57] and
some, like Finjan have recommended active real-time inspection of code and all
content regardless of its source.[48] Some have argued that for enterprise to see
security as a business opportunity rather than a cost center,[58] "ubiquitous,
always-on digital rights management" enforced in the infrastructure by a handful of
organizations must replace the hundreds of companies that today secure data and
networks.[59] Jonathan Zittrain has said users sharing responsibility for computing
safety is far preferable to locking down the Internet.[60]
Usually, when web standards are discussed, the following publications are seen as
foundational:
Recommendations for markup languages, especially HTML and XHTML, from the
W3C. These define the structure and interpretation of hypertext documents.
Recommendations for stylesheets, especially CSS, from the W3C.
Standards for ECMAScript (usually in the form of JavaScript), from Ecma
International.
Recommendations for the Document Object Model, from W3C.
Additional publications provide definitions of other essential technologies for the
World Wide Web, including, but not limited to, the following:
0.1 second (one tenth of a second). Ideal response time. The user doesn't sense any
interruption.
1 second. Highest acceptable response time. Download times above 1 second
interrupt the user experience.
10 seconds. Unacceptable response time. The user experience is interrupted and
the user is likely to leave the site or system.
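The three bands above can be expressed as a small helper; the function name and band labels are invented for illustration:

```python
def rate_response_time(seconds):
    """Map a page response time onto the usability bands listed above."""
    if seconds <= 0.1:
        return "ideal"          # the user senses no interruption
    if seconds <= 1.0:
        return "acceptable"     # noticeable, but the flow survives
    return "unacceptable"       # the user is likely to leave

print(rate_response_time(0.05))  # ideal
print(rate_response_time(0.8))   # acceptable
print(rate_response_time(12))    # unacceptable
```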
Caching
If a user revisits a Web page after only a short interval, the page data may
not need to be re-obtained from the source Web server. Almost all web browsers
cache recently obtained data, usually on the local hard drive. HTTP requests sent by
a browser will usually only ask for data that has changed since the last download. If
the locally cached data is still current, it will be reused. Caching helps reduce the
amount of Web traffic on the Internet. The decision about expiration is made
independently for each downloaded file, whether image, stylesheet, JavaScript,
HTML, or whatever other content the site may provide. Thus even on sites with
highly dynamic content, many of the basic resources only need to be refreshed
occasionally. Web site designers find it worthwhile to collate resources such as CSS
data and JavaScript into a few site-wide files so that they can be cached efficiently.
This helps reduce page download times and lowers demands on the Web server.
There are other components of the Internet that can cache Web content. Corporate
and academic firewalls often cache Web resources requested by one user for the
benefit of all. (See also Caching proxy server.) Some search engines also store
cached content from websites. Apart from the facilities built into Web servers that
can determine when files have been updated and so need to be re-sent, designers
of dynamically generated Web pages can control the HTTP headers sent back to
requesting users, so that transient or sensitive pages are not cached. Internet
banking and news sites frequently use this facility. Data requested with an HTTP
'GET' is likely to be cached if other conditions are met; data obtained in response to
a 'POST' is assumed to depend on the data that was POSTed and so is not cached.
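The caching decisions described above can be sketched as follows. The helper names and the cache-entry layout are invented for illustration; real browsers implement this via the standard HTTP validator headers shown:

```python
def conditional_headers(cache_entry):
    """Build the validators a browser sends so the server can answer
    304 Not Modified instead of re-sending an unchanged file."""
    headers = {}
    if "last_modified" in cache_entry:
        headers["If-Modified-Since"] = cache_entry["last_modified"]
    if "etag" in cache_entry:
        headers["If-None-Match"] = cache_entry["etag"]
    return headers

def is_cacheable(method):
    """GET responses may be cached; POST responses are assumed to
    depend on the submitted data and are not."""
    return method.upper() == "GET"

entry = {"last_modified": "Wed, 01 Sep 2010 12:00:00 GMT"}
print(conditional_headers(entry))
print(is_cacheable("GET"), is_cacheable("POST"))
```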
--------
Intranet
An intranet is a private network that is contained within an enterprise. It may
consist of many interlinked local area networks and also use leased lines in the wide
area network. Typically, an intranet includes connections through one or more
gateway computers to the outside Internet. The main purpose of an intranet is to
share company information and computing resources among employees. An
intranet can also be used to facilitate working in groups and for teleconferences.
An intranet uses TCP/IP, HTTP, and other Internet protocols and in general looks like
a private version of the Internet. With tunneling, companies can send private
messages through the public network, using the public network with special
encryption/decryption and other security safeguards to connect one part of their
intranet to another.
Typically, larger enterprises allow users within their intranet to access the public
Internet through firewall servers that have the ability to screen messages in both
directions so that company security is maintained. When part of an intranet is made
accessible to customers, partners, suppliers, or others outside the company, that
part becomes part of an extranet.
Extranet
An extranet is a private network that uses Internet technology and the public
telecommunication system to securely share part of a business's information or
operations with suppliers, vendors, partners, customers, or other businesses. An
extranet can be viewed as part of a company's intranet that is extended to users
outside the company. It has also been described as a "state of mind" in which the
Internet is perceived as a way to do business with other companies as well as to sell
products to customers.
An extranet requires security and privacy. These can include firewall server
management, the issuance and use of digital certificates or similar means of user
authentication, encryption of messages, and the use of virtual private networks
(VPNs) that tunnel through the public network.
What is all the hubbub about, anyway? Why, VoIP, of course! VoIP, the fabulous
secret ingredient in Vonage, Skype, Cisco CallManager, and a host of other
revolutionary technology products you may have already encountered on TV, in the
news, or in person. But what makes these products so revolutionary? What is it
about VoIP that is such a big deal?
But how? What makes VoIP do all this awesome stuff? Read on.
VoIP in Action
Skype is an instant messaging program that happens to have a peer-to-peer
(modeled after Kazaa) global voice network at its disposal, so you can use it to call
people on your buddy list using your PC or Mac. All you need is broadband, a
microphone, and a pair of speakers or headphones. Voice calling alone doesn't set
Skype apart from other IM applications like AIM or Windows Messenger--they also
support voice. But Skype supports voice calling in a way that those applications can
only dream of: Skype works in almost any broadband-connected network
environment, even networks with firewalls that often break other voice-chatting
apps. Plus, Skype's variable-bitrate sound codec makes it less prone to sound
quality issues than its predecessors. In a nutshell, Skype just works. Perhaps that's
why Skype's official slogan is "Internet Telephony that Just Works."
The world has noticed. 150 million downloads later, Skype now offers the ability for
its users to call regular phone numbers from their PCs, a feature known as
SkypeOut. Skype also offers a voicemail service and can route incoming calls to a
certain phone number right to a user's desktop PC. There's even a Skype API that
allows Windows and Mac programmers to integrate the Skype client with other
applications. Videoconferencing add-ons, Outlook integration, and personal
answering machines are just some of the cool software folks have developed using
the Skype API.
On two fronts--the corporate phone system and that of the home user--VoIP is
transforming the global communications matrix. Instead of two separate notions of
a global network (one for voice calling and one for Internet Protocol), a single
converged network is arising, carrying both voice and data with the same
networking protocol, IP. Steadily, corporations and domestic phone subscribers are
migrating their voice services from the old voice plane to the new one, and next-
generation, IP-based phone companies have rushed in to help them make the
move.
VoIP-Based Services
By now you've probably seen ads for companies like Vonage and Packet8. These
services promise ultra-cheap voice calling service via your broadband internet
connection. Some offer calling packages as low as $9.95 per month. Their secret
weapon is VoIP. Voice over IP service providers use the internet to carry voice
signals from their networks to your home phone. Because VoIP telecommunication
isn't regulated the way traditional phone line telecommunication is, VoIP providers
like Vonage can offer drastically lower calling rates.
The catch? You've got to put up with the occasional hiccup in your voice service,
caused by the one thing legacy telephone technology has built-in that VoIP doesn't:
guaranteed quality. Because VoIP uses packets to transmit data like other services
on the internet, it cannot provide the quality guarantees of old-fashioned, non-
packet-based telephone lines. But this is changing, too. Efforts are underway on all
fronts (service providers, Internet providers, and VoIP solution makers) to adapt
quality-of-service techniques to VoIP services, so that one day, your VoIP calls may
sound as good as (or better than) your regular land-line calls.
Today, if you want to build a fully quality-enabled private VoIP network, you can.
Cisco, Foundry Networks, Nortel, and other network equipment makers all support
common quality-of-service standards, meaning corporate networks are only an
upgrade away from effective convergence of voice and data.
But it will be quite some time before the internet itself is quality-enabled. Indeed,
the internet may never be fully quality-enabled. This hasn't stopped enterprising
network gearheads like me from trying to connect calls over the internet, of course.
Hey, if Skype works so well, why can't corporate phone calls? Enterprise phone
administrators have found that it is actually very easy to equip mobile users with
VoIP phones to place calls on the company phone system by connecting to it over
the internet--from hotel rooms or home offices--but the quality of these calls is sort
of hit or miss, like a cell phone when you drive through a "dead zone" in the cell
network.
Wireless Network
Wireless network refers to any type of computer network that is wireless, and is
commonly associated with a telecommunications network whose interconnections
between nodes are implemented without the use of wires.[1] Wireless
telecommunications networks are generally implemented with some type of remote
information transmission system that uses electromagnetic waves, such as radio
waves, for the carrier and this implementation usually takes place at the physical
level or "layer" of the network.[2]
WiMAX is a type of Wireless MAN and is described by the IEEE 802.16 standard.[5]
Wireless WAN
Wireless wide area networks are wireless networks that typically
cover large outdoor areas. These networks can be used to connect branch offices of
business or as a public internet access system. They are usually deployed on the
2.4 GHz band. A typical system contains base station gateways, access points and
wireless bridging relays. Other configurations are mesh systems where each access
point also acts as a relay. When combined with renewable energy sources such as
photovoltaic solar panels or wind turbines, they can be stand-alone systems.
Global System for Mobile Communications (GSM): The GSM network is divided
into three major systems: the switching system, the base station system, and the
operation and support system. The cell phone connects to the base station system,
which then connects to the operation and support system; the call then passes to
the switching system, where it is transferred to where it needs to go. GSM is the
most common standard and is used for a majority of cell phones.[6]
Personal Communications Service (PCS): PCS is a radio band that can be used by
mobile phones in North America and South Asia. Sprint was the first service
provider to set up a PCS network.
D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is
being phased out due to advancement in technology. The newer GSM networks are
replacing the older system.
Uses
[Figure: an embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 mini PCI
Wi-Fi card, widely used by wireless Internet service providers (WISPs) in the Czech
Republic]
Wireless networks have continued to develop and their uses have grown
significantly. Cellular phones are part of huge wireless network systems. People use
these phones daily to communicate with one another. Sending information overseas
is possible through wireless network systems using satellites and other signals to
communicate across the world. Emergency services such as the police department
utilize wireless networks to communicate important information quickly. People and
businesses use wireless networks to send and share data quickly whether it be in a
small office building or across the world.
Another important use for wireless networks is as an inexpensive and rapid way to
be connected to the Internet in countries and regions where the telecom
infrastructure is poor or there is a lack of resources, as in most developing
countries.
Compatibility issues also arise when dealing with wireless networks. Different
components not made by the same company may not work together, or might
require extra work to fix these issues. Wireless networks are typically slower than
those that are directly connected through an Ethernet cable.
A wireless network is more vulnerable, because anyone can try to break into a
network broadcasting a signal. Many networks offer WEP - Wired
Equivalent Privacy - security systems which have been found to be vulnerable to
intrusion. Though WEP does block some intruders, the security problems have
caused some businesses to stick with wired networks until security can be
improved. Another type of security for wireless networks is WPA - Wi-Fi Protected
Access. WPA provides more security to wireless networks than a WEP security set
up. The use of firewalls can also help protect the more vulnerable wireless
networks against security breaches.
Environmental concerns and health hazard
Starting around 2009, there have been
increased concerns about the safety of wireless communications, despite little
evidence of health risks so far.[7] The president of Lakehead University refused to
agree to installation of a wireless network citing a California Public Utilities
Commission study which said that the possible risk of tumors and other diseases
due to exposure to electromagnetic fields (EMFs) needs to be further investigated.
[8]
-----------------
Last Mile
The "last mile" or "last kilometer" is the final leg of delivering connectivity from a
communications provider to a customer. The phrase is therefore often used by the
telecommunications and cable television industries. The actual distance of this leg
may be considerably more than a mile, especially in rural areas. It is typically seen
as an expensive challenge because "fanning out" wires and cables is a considerable
physical undertaking. Because the last mile of a network to the user is also the first
mile from the user to the world, the term "first mile" is sometimes used.
To solve the problem of providing enhanced services over the last mile, some firms
have been mixing networks for decades. One example is Fixed Wireless Access,
where a wireless network is used instead of wires to connect a stationary terminal
to the wireline network.
Various solutions are being developed which are seen as an alternative to the "last
mile" of standard incumbent local exchange carriers: these include WiMAX and BPL
(Broadband over Power Line) applications.
When leaving the telephone exchange, the ISDN30 cable can be buried in the
ground, usually in ducting, at very little depth. This makes any business telephone
lines vulnerable to being dug up during streetworks, liable to flooding during heavy
storms and general wear and tear due to natural elements. Loss, therefore, of the
"last mile" will cause the failure to deliver any calls to the business affected.
Business continuity planning often provides for this type of technical failure.
Any business with ISDN30-type connectivity should provide for this failure within
its business continuity planning. There are many ways to achieve this, as
documented by CPNI.
1. Dual Parenting.
This is where the telephone carrier provides the same numbers from two different
telephone exchanges. If the cable is damaged from one telephone exchange to the
customer premises most of the calls can be delivered from the surviving route to
the customer.
2. Diverse Routing.
This is where the carrier can provide more than one route to bring the ISDN 30’s
from the exchange, or exchanges, (as in dual parenting), but they may share
underground ducting and cabinets.
3. Separacy.
This is where the carrier can provide more than one route to bring the ISDN 30’s
from the exchange, or exchanges, (as in dual parenting), but they may not share
underground ducting and cabinets, and therefore should be absolutely separate
from the telephone exchange to the customer premises.
4. Hosted numbers.
This is where carriers or specialist companies can host a customer's numbers
within their own or a carrier's network and deliver calls over an IP network to the
customer's sites. When a diversion service is required, the calls can be pointed to
alternative numbers.
Existing delivery system problems
The increasing worldwide demand for rapid, low-latency and high-volume
communication of information to homes and businesses
has made economical information distribution and delivery increasingly important.
As demand has escalated, particularly fueled by the widespread adoption of the
Internet, the need for economical high-speed access by end-users located at
millions of locations has ballooned as well. As requirements have changed, existing
systems and networks which were initially pressed into service for this purpose
have proven to be inadequate. To date, although a number of approaches have
been tried and used, no single clear solution to this problem has emerged. This
problem has been termed "The Last Mile Problem".
Analogous distribution systems include:
blood distribution to a large number of cells over a system of veins, arteries and
capillaries
water distribution, via rivers, aqueducts and water mains, ending with a drip
irrigation system delivering to individual plants
nourishment to a plant's leaves through its roots, trunk and branches
All of these have in common conduits which carry a relatively small amount of a
resource a short distance to a very large number of physically separated endpoints.
Also common are conduits supporting more voluminous flow which combine and
carry the many individual portions over much greater distances. The shorter, lower-
volume conduits which individually serve only one or a small fraction of the
endpoints, may have far greater combined length than the larger capacity ones.
These common attributes are shown to the right.
The high-capacity conduits in these systems tend to also have in common the
ability to efficiently transfer the resource over a long distance. Only a small fraction
of the resource being transferred is either wasted, lost, or misdirected. The same
cannot necessarily be said of the lower-capacity conduits. One reason for this has to
do with the efficiency of scale. These conduits which are located closer to the
endpoint, or end-user, do not individually have as many users supporting them.
Even though they are smaller, each has the overhead of an "installation": obtaining
and maintaining a suitable path over which the resource can flow. The funding and
resources supporting these smaller conduits tend to come from the immediate
locale. This can have the advantage of a "small-government model": the
management and resources for these conduits are provided by local entities and
can therefore be optimized for the immediate environment and make the best use
of local resources. However, the lower
operating efficiencies and relatively greater installation expenses, compared with
the transfer capacities, can cause these smaller conduits, as a whole, to be the
most expensive and difficult part of the complete distribution system.
These characteristics have been displayed in the birth, growth, and funding of the
Internet. The earliest inter-computer communication tended to be accomplished
with direct wireline connections between individual computers. These grew into
clusters of small Local Area Networks (LANs). The TCP/IP suite of protocols was born
out of the need to connect several of these LANs together, particularly as related to
common projects among the defense department, industry and some academic
institutions. ARPANET came into being to further these interests. In addition to
providing a way for multiple computers and users to share a common inter-LAN
connection, the TCP/IP protocols provided a standardized way for dissimilar
computers and operating systems to exchange information over this inter-network.
The funding and support for the connections among LANs could be spread over one
or even several LANs. As each new LAN, or subnet, was added, the new subnet's
constituents enjoyed access to the greater network. At the same time the new
subnet made a contribution of access to any network or networks with which it was
already networked. Thus the growth became a mutually inclusive or "win-win"
event.
In general, economy of scale makes an increase in capacity of a conduit less
expensive as the capacity is increased. There is an overhead associated with the
creation of any conduit. This overhead is not repeated as capacity is increased
within the potential of the technology being utilized. As the Internet has grown in
size, by some estimates doubling in number of users every eighteen months,
economy of scale has resulted in increasingly large information conduits providing
the longest distance and highest capacity backbone connections. In recent years,
the capacity of fiber-optic communication, aided by a supporting industry, has
resulted in an expansion of raw capacity, so much so that in the United States a
large amount of installed fiber infrastructure is not being used because it is
currently excess capacity "dark fiber".
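The growth estimate above (the number of users doubling roughly every eighteen months) is easy to sketch as a calculation; the starting user count below is an assumption chosen purely for illustration:

```python
# Sketch of exponential user growth, doubling every 18 months as the text
# estimates. The initial user count is an assumption for illustration only.
def users_after(months, initial_users, doubling_period_months=18):
    """Project a user count under steady exponential doubling."""
    return initial_users * 2 ** (months / doubling_period_months)

# Starting from an assumed 1 million users, after 6 years (4 doublings):
print(round(users_after(72, 1_000_000)))  # 16000000
```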
This excess backbone capacity exists in spite of the trend of increasing per-user
data rates and overall quantity of data. Initially, only the inter-LAN connections were
high speed. End-users used existing telephone lines and modems which were
capable of data rates of only a few hundred bit/s. Now almost all end users enjoy
access at 100 or more times those early rates. Notwithstanding this great increase
in user traffic, the high-capacity backbones have kept pace, and information
capacity and rate limitations almost always occur near the user. Economy of
scale, along with the fundamental capability of fiber technology, has kept the
high-capacity conduits adequate but has not satisfied the appetite of home users.
The last mile problem is thus one of economically serving an increasing mass of
end-users and their growing information needs.
Telephone
In the late 20th century, improvements in the use of existing copper telephone lines
increased their capabilities if maximum line length is controlled. With support for
higher transmission bandwidth and improved modulation, these digital subscriber
line schemes have increased capability 20-50 times as compared to the previous
voiceband systems. These methods are not based on altering the fundamental
physical properties and limitations of the medium which, apart from the introduction
of twisted pairs, are no different today than when the first telephone exchange was
opened in 1877 by the Bell Telephone Company. The history and long life of copper-
based communications infrastructure is both a testament to our ability to derive
new value from simple concepts through technological innovation – and a warning
that copper communications infrastructure is beginning to offer diminishing returns
on continued investment.[1]
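As a rough sanity check on the 20-50 times figure, one can scale an assumed 56 kbit/s voiceband baseline (the fastest common dial-up modem rate) by that range:

```python
# Rough sanity check: the article's 20-50x improvement applied to an assumed
# 56 kbit/s voiceband baseline, the fastest common dial-up modem rate.
VOICEBAND_KBITS = 56

def dsl_rate_range_kbits(low_multiplier=20, high_multiplier=50):
    """Implied DSL rate range in kbit/s under the stated multipliers."""
    return VOICEBAND_KBITS * low_multiplier, VOICEBAND_KBITS * high_multiplier

print(dsl_rate_range_kbits())  # (1120, 2800) -> roughly 1.1 to 2.8 Mbit/s
```

The result is consistent with typical early ADSL downstream rates.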
CATV
Community Access Cable Television Systems, also known simply as "cable", have
been expanded to provide bidirectional communication over existing physical
cables. However, they are by nature shared systems, and both the spectrum
available for reverse (upstream) information flow and the achievable
signal-to-noise ratio (S/N) are limited. As was done for the initial
unidirectional (TV) communication, cable loss is mitigated through the use of
periodic amplifiers within the system. These factors set an upper limit on per-user
information capacity, particularly when many users share a common section of
cable or access network.
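A minimal sketch of why sharing caps per-user capacity; the channel rate and subscriber count are assumed figures, loosely modeled on a DOCSIS-like downstream channel:

```python
# Why a shared cable segment caps per-user throughput: one downstream channel
# divided among the active users on that segment. Numbers are assumptions,
# loosely modeled on a DOCSIS-like 38 Mbit/s downstream channel.
def per_user_capacity_mbps(channel_mbps, active_users):
    """Even split of one shared downstream channel under full contention."""
    return channel_mbps / active_users

print(per_user_capacity_mbps(38, 200))  # 0.19 Mbit/s each if all 200 are active
```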
Optical fiber
Fiber offers high information capacity and after the turn of the 21st century became
the deployed medium of choice given its scalability in the face of increasing
bandwidth requirements of modern applications.
In 2004, Richard Lynch, EVP and CTO of telecom giant Verizon, described the
world as moving toward vastly higher-bandwidth applications: consumers loved
everything broadband had to offer and eagerly devoured as much as they could
get, including two-way, user-generated content. Copper and coaxial networks
wouldn't – in fact, couldn't – satisfy these demands, which precipitated Verizon's
aggressive move into fiber-to-the-home via FiOS.[2]
Fiber is a future-proof technology that meets the needs of today's users, but unlike
other copper-based and wireless last-mile mediums, also has the capacity for years
to come, by upgrading the end-point optics and electronics, without changing the
fiber infrastructure. The fiber itself is installed on existing pole or conduit
infrastructure and most of the cost is in labor, providing good regional economic
stimulus in the deployment phase and providing a critical foundation for future
regional commerce.
Wireless delivery systems
Mobile CDN coined the term 'mobile mile' to categorize the last mile
connection when a wireless system is used to reach the customer. In contrast to
wired delivery systems, wireless systems use unguided waves to transmit
information, communications and entertainment (ICE). They all tend to be
unshielded and have a greater degree of
susceptibility to unwanted signal and noise sources. Because these waves are not
guided but diverge, in free space these systems have attenuation following an
inverse-square law, inversely proportional to distance squared. Losses thus increase
more slowly with increasing length than for wired systems whose loss increases
exponentially. In a free space environment, beyond some length, the losses in a
wireless system are less than those in a wired system. In practice, the presence of
atmosphere, and especially obstructions caused by terrain, buildings and foliage
can greatly increase the loss above the free space value. Reflection, refraction and
diffraction of these waves can also alter their transmission characteristics and
require specialized systems to accommodate the accompanying distortions.
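The contrast described above, inverse-square spreading loss for unguided waves versus a fixed number of dB per unit length (exponential power loss) on a cable, can be sketched numerically; the wavelength and cable attenuation figures are assumptions:

```python
import math

# Free-space (inverse-square) loss grows only polynomially with distance,
# while cable loss accrues a fixed number of dB per unit length, i.e. the
# received power falls exponentially. All numeric values are assumptions:
# a 2.4 GHz carrier (12.5 cm wavelength) and 6 dB/100 m coaxial attenuation.

def free_space_loss_db(distance_m, wavelength_m=0.125):
    """Free-space path loss between isotropic antennas: 20*log10(4*pi*d/lambda)."""
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m)

def cable_loss_db(distance_m, db_per_100m=6.0):
    """Cable loss in dB is linear in length (exponential in power)."""
    return db_per_100m * distance_m / 100

for d in (100, 1_000, 10_000):
    print(d, round(free_space_loss_db(d)), round(cable_loss_db(d)))
# At 100 m the cable loses far less (6 dB vs ~80 dB), but by 10 km the
# wireless path loses less (~120 dB vs 600 dB): the crossover the text notes.
```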
Wireless systems have an advantage over wired systems in last mile applications in
not requiring lines to be installed. However, they also have a disadvantage that
their unguided nature makes them more susceptible to unwanted noise and signals.
Spectral reuse can therefore be limited.
Radio waves
Radio frequencies (RF), from low frequencies through the microwave region, have
wavelengths much longer than visible light. Although this means that it is not
possible to focus the beams nearly as tightly as for light, it also means that the
aperture or "capture area" of even the simplest, omni-directional antenna is far
larger than that of a lens in any feasible optical system. This characteristic results in
greatly increased attenuation or "path loss" for systems that are not highly
directional. In actuality, the term path loss is something of a misnomer because no
energy is actually lost on a free-space path. Rather, it is merely not received by the
receiving antenna. The apparent reduction in transmission, as frequency is
increased, is actually an artifact of the change in the aperture of a given type of
antenna.
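This aperture argument is captured by the Friis transmission equation; the sketch below assumes fixed-gain (isotropic) antennas, a 1 km path, and two illustrative frequencies to show that the apparent extra "path loss" at higher frequency is the shrinking aperture:

```python
import math

# The Friis transmission equation: Pr/Pt = Gt*Gr*(lambda/(4*pi*d))**2.
# With antenna gains held fixed, received power falls as wavelength shrinks,
# because a given antenna type's effective aperture scales with lambda**2.
# The frequencies and the 1 km path below are assumptions for illustration.

def friis_received_fraction(distance_m, wavelength_m, gain_tx=1.0, gain_rx=1.0):
    """Fraction of transmitted power received over a free-space path."""
    return gain_tx * gain_rx * (wavelength_m / (4 * math.pi * distance_m)) ** 2

# Same 1 km path with isotropic antennas at 300 MHz (1 m) and 3 GHz (0.1 m):
ratio = friis_received_fraction(1000, 1.0) / friis_received_fraction(1000, 0.1)
print(ratio)  # ~100: ten times the frequency costs ~20 dB of apparent path loss
```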
For the above reasons, wireless radio systems are best suited to lower-capacity
broadcast communications delivered over longer paths. For high-capacity, highly
directive point-to-point links over short ranges, wireless light-wave systems are
most useful.
Satellite communications
For information delivery to end-users, satellite systems, by nature, have relatively
long path lengths, even for low earth-orbiting satellites. They are also very
expensive to deploy and therefore each satellite must serve many users.
Additionally, the very long paths of geostationary satellites cause information
latency that makes many real-time applications unusable. As a solution to the last-
mile problem, satellite systems have application and sharing limitations. The ICE
which they transmit must be spread over a relatively large geographical area. This
causes the received signal to be relatively small, unless very large or directional
terrestrial antennas are used. A parallel problem exists when a satellite is receiving.
In that case, the satellite system must have a very great information capacity
to accommodate a multitude of sharing users, and each user must have a large
antenna, with its attendant directivity and pointing requirements, to obtain
even a modest information-rate transfer. These requirements render high-
information-capacity, bi-directional information systems uneconomical. This is a
reason that the Iridium satellite system was not more successful.
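The latency claim is straightforward to verify from the geometry of a geostationary orbit; the sketch below ignores the ground stations' distance from the sub-satellite point and all processing delay, so real delays are somewhat longer:

```python
# Minimum one-way delay for a geostationary hop: up to the satellite and
# back down, at the speed of light. Real paths are longer (the ground
# stations are rarely at the sub-satellite point) and add processing delay.
SPEED_OF_LIGHT_KM_S = 299_792
GEO_ALTITUDE_KM = 35_786  # altitude of a geostationary orbit

def one_way_delay_ms():
    """Ground -> satellite -> ground propagation time in milliseconds."""
    return 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S * 1000

print(round(one_way_delay_ms()))  # 239 -> nearly half a second round trip
```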
The scope of hosting services varies widely. The most basic is web page and small-
scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a
Web interface. The files are usually delivered to the Web "as is" or with little
processing. Many Internet service providers (ISPs) offer this service free to their
subscribers. People can also obtain Web page hosting from other, alternative
service providers. Personal web site hosting is typically free, advertisement-
sponsored, or inexpensive. Business web site hosting often has a higher expense.
Single page hosting is generally sufficient only for personal web pages. A complex
site calls for a more comprehensive package that provides database support and
application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, and
ASP.NET). These facilities allow the customers to write or install scripts for
applications like forums and content management. For e-commerce, SSL is also
highly recommended.
The host may also provide an interface or control panel for managing the Web
server and installing scripts as well as other services like e-mail. Some hosts
specialize in certain software or services (e.g. e-commerce). They are commonly
used by larger companies to outsource network infrastructure to a hosting
company.
Hosting reliability and uptime
Hosting uptime refers to the percentage of time the host is
accessible via the internet. Many providers state that they aim for at least 99.9%
uptime (roughly 43 minutes of downtime in a 30-day month, or less), but there
may be server restarts and planned (or unplanned) maintenance in any hosting
environment, which may or may not be considered part of the official uptime
promise.
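The downtime arithmetic behind such uptime figures is simple to check; the sketch below assumes a 30-day month:

```python
# Downtime budget implied by an uptime percentage, assuming a 30-day month.
def monthly_downtime_minutes(uptime_percent, days_in_month=30):
    """Minutes of allowed downtime per month for a given uptime promise."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (100 - uptime_percent) / 100

print(round(monthly_downtime_minutes(99.9), 1))   # 43.2
print(round(monthly_downtime_minutes(99.99), 1))  # 4.3
```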
Many providers tie uptime and accessibility into their own service level agreement
(SLA). SLAs sometimes include refunds or reduced costs if performance goals are
not met.
Types of hosting
Internet hosting services can run Web servers; see Internet hosting services.
Many large companies that are not internet service providers also need a computer
permanently connected to the web so they can send email, files, etc. to other sites.
They may also use the computer as a website host to provide details of their goods
and services to anyone interested, and to let those visitors place online orders.
Free web hosting service: offered by different companies with limited services,
sometimes supported by advertisements, and often limited when compared to paid
hosting.
Shared web hosting service: one's website is placed on the same server as many
other sites, ranging from a few to hundreds or thousands. Typically, all domains
may share a common pool of server resources, such as RAM and the CPU. The
features available with this type of service can be quite extensive. A shared website
may be hosted with a reseller.
Reseller web hosting: allows clients to become web hosts themselves. Resellers
could function, for individual domains, under any combination of these listed types
of hosting, depending on who they are affiliated with as a reseller. Resellers'
accounts may vary tremendously in size, ranging from their own virtual dedicated
server to a colocated server. Many resellers provide a nearly identical service to
their provider's shared hosting plan and provide the technical support themselves.
Virtual Dedicated Server: also known as a Virtual Private Server (VPS), divides
server resources into virtual servers, where resources can be allocated in a way that
does not directly reflect the underlying hardware. A VPS is often allocated
resources based on a one-server-to-many-VPSs relationship; however, virtualisation
may be done for a number of reasons, including the ability to move a VPS container
between servers. The users may have root access to their own virtual space.
Customers are sometimes responsible for patching and maintaining the server.
Dedicated hosting service: the user gets his or her own Web server and gains full
control over it (root access for Linux/administrator access for Windows); however,
the user typically does not own the server. Another type of dedicated hosting is
self-managed or unmanaged. This is usually the least expensive of the dedicated
plans. The user has full administrative access to the server, which means the client
is responsible for its security and maintenance.
Managed hosting service: the user gets his or her own Web server but is not allowed
full control over it (root access for Linux/administrator access for Windows);
however, they are allowed to manage their data via FTP or other remote
management tools. The user is disallowed full control so that the provider can
guarantee quality of service by not allowing the user to modify the server or
potentially create configuration problems. The user typically does not own the
server. The server is leased to the client.
Colocation web hosting service: similar to the dedicated web hosting service,
but the user owns the colo server; the hosting company provides physical space
that the server takes up and takes care of the server. This is the most powerful and
expensive type of web hosting service. In most cases, the colocation provider may
provide little to no support directly for their client's machine, providing only the
electrical, Internet access, and storage facilities for the server. In most cases for
colo, the client would have his own administrator visit the data center on site to do
any hardware upgrades or changes.
Cloud hosting: a newer type of hosting platform that offers customers powerful,
scalable and reliable hosting based on clustered, load-balanced servers and utility
billing. It removes single points of failure and allows customers to pay only for
what they use rather than what they could use.
Clustered hosting: having multiple servers host the same content for better
resource utilization. Clustered servers are well suited to high-availability
dedicated hosting and to building a scalable web hosting solution. A cluster may
separate web serving from database hosting capability.
Grid hosting: this form of distributed hosting is when a server cluster acts like a grid
and is composed of multiple nodes.
Home server: usually a single machine placed in a private residence, used
to host one or more web sites over a consumer-grade broadband
connection. These can be purpose-built machines or, more commonly, old PCs. Some
ISPs actively attempt to block home servers by disallowing incoming requests to
TCP port 80 of the user's connection and by refusing to provide static IP addresses.
A common way to attain a reliable DNS hostname is by creating an account with a
dynamic DNS service. A dynamic DNS service will automatically change the IP
address that a URL points to when the IP address changes.
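The update cycle such a dynamic DNS client performs can be sketched as follows. Note that every URL and parameter name here is a hypothetical placeholder, not a real provider's API; actual services publish their own endpoints, parameter names and authentication schemes:

```python
import urllib.request

# Hypothetical dynamic DNS updater. Every URL and query parameter below is
# an invented placeholder, NOT a real provider's API; real services publish
# their own update endpoints, parameter names and authentication schemes.
IP_CHECK_URL = "https://example.com/my-ip"        # assumed: echoes caller's IP
UPDATE_URL = "https://dyndns.example.com/update"  # assumed update endpoint

def current_public_ip():
    """Ask an external service which address our traffic appears to use."""
    with urllib.request.urlopen(IP_CHECK_URL) as resp:
        return resp.read().decode().strip()

def update_dns(hostname, ip, last_known_ip=None):
    """Push a new address only when the public IP has actually changed."""
    if ip == last_known_ip:
        return False  # unchanged: skip the call, don't hammer the provider
    url = f"{UPDATE_URL}?hostname={hostname}&myip={ip}"
    with urllib.request.urlopen(url) as resp:
        return resp.status == 200
```

A home-server owner would run such a client on a schedule (e.g. via cron), remembering the last reported address between runs.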
Some specific types of hosting provided by web host service providers: