The Internet – Yesterday, Today, Tomorrow
by Erik Bosrup
Nov 1998
Preface
When I started this project my aim was to learn about TCP/IP and how the Internet works in general. Once I got into the huge amount of information that exists about the Internet, I discovered that I wasn't really interested in the technical terms and the exact details of how everything works. Instead I decided to write a text that almost anyone with slight computer and Internet experience could read and understand.
While going through the information I also found papers about new standards and projects that I saw as relevant to the future development of the Internet. This gave me my project question: how will the Internet handle the future? Or perhaps, how will the future handle the Internet? To explain the coming generation of the Internet one must know how the Internet once started, how it works today, who controls it and many other things. This matched my interest in finding out how the Internet works with my plans to examine the net's future. All of these things make up one big mix of information; it is not too detailed, and many things are left out. My goal was to get a general feeling for what the Internet is and what it will be.
Internet Technology
Internet is not one big network. As the name claims it is inter-net, thus a network connecting networks.
This is important to know as it is the base of the Internet foundation. When you logon to your local
Internet provider, you connect to their network, which is connected to many others. This is the strength
of Internet, if one network malfunctions, the other can function normally without it.
protocol://host.domain.tld
Application Protocol
Today everyone knows that a text starting with www is a World Wide Web address, but this is not completely true. It is actually the http:// part of the URL that specifies that we want to connect to the part of the server that handles the Hypertext Transfer Protocol, although if we try to connect to an address starting with the www prefix our software assumes it to be http. The http protocol is the Internet standard for exchanging HTML files between clients and servers. HTML, or HyperText Markup Language, is the language used to lay out pages so they may contain text, pictures, multimedia and Java, among other things. In this case we are acting as a client, since we are requesting a document from someone else. Besides http there is also a secure, encrypted version of http (https://) and the File Transfer Protocol (ftp://), among many others. So the first part of the URL, the protocol, tells our software how we want to connect to the server and what kind of reply we are expecting.
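As a concrete illustration, the standard library of a language like Python can split a URL into exactly the parts described above. The address below is the document's example hostname, used here hypothetically:

```python
from urllib.parse import urlparse

# Split a URL into the parts discussed above: protocol, host and port.
parts = urlparse("http://www.internet.com:80/index.html")

print(parts.scheme)    # -> http (how the software should connect)
print(parts.hostname)  # -> www.internet.com (host.domain.tld)
print(parts.port)      # -> 80 (the standard HTTP port)
```

The scheme attribute corresponds to the protocol:// prefix, and the hostname to the host.domain.tld part of the pattern shown earlier.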
Host
Next comes the host. A host is a computer that is connected to the Internet. When you use your modem to connect to your local ISP (Internet Service Provider) or LAN, you too become a host. However, only certain computers have hostnames that work in a URL. If you connect through an ISP you will not get one that can be used in URLs.
Domain
The communication between hosts is based on IP (Internet Protocol) addresses, and the computers themselves do not know where on the net a URL is to be found; they can only say "I want to talk to host 130.92.112.96". For this reason we have domains. A domain is a way for us humans to remember a location on the Internet. The computer must translate the domain into a number in order to contact the host.
Domains can be mostly anything; different TLD registrars (the organisations that manage the registries) have different rules for registering domains, and as long as you follow their rules and the domain name rules found in RFC 952, 1035 and 1123, you are free to use your imagination. (More on RFCs later on.)
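The translation step described above can be sketched as a toy lookup table. Real resolution involves querying DNS servers; the first entry below is the example address used earlier in the text, and the second is an invented address for illustration:

```python
# Toy model of what a resolver does: map a human-readable domain name
# to the IP address needed before any contact can be made. Real
# resolution queries DNS servers; these entries are illustrative only.
hosts = {
    "www.internet.com": "130.92.112.96",
    "mycomputer.network.se": "192.0.2.17",
}

def resolve(domain: str) -> str:
    """Return the IP address registered for a domain name."""
    return hosts[domain]

print(resolve("www.internet.com"))  # -> 130.92.112.96
```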
Page 6 of 19
The Internet – Yesterday, Today, Tomorrow by Erik Bosrup
Transfers that need more speed, like audio and video streaming, use UDP (User Datagram Protocol) at the cost of reliability.
When applications talk to each other over the Internet, like when you look at someone's homepage, they do not take part in the actual sending and receiving. This is where the transport protocol layer comes in. The application tells the transport protocol what to send and where to send it, and then, in the case of TCP, it is up to the transport protocol to make sure that everything gets sent and that everything arrives in the same state it was sent in. If anything goes wrong, it is also the transport protocol layer that handles it by re-sending. The transport layer also splits the data into smaller pieces: if you want to send someone a two-megabyte file, it cannot all be sent at once, so the transport protocol layer splits it into smaller chunks.
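The splitting can be sketched like this. The 1460-byte chunk size is an assumption, chosen because it is a typical TCP payload size on Ethernet (1500-byte frames minus 40 bytes of TCP/IP headers):

```python
def split_into_datagrams(data: bytes, chunk_size: int = 1460) -> list:
    """Split a payload the way the transport layer does before handing
    pieces down to IP. 1460 bytes is a typical TCP segment payload on
    an Ethernet network (assumed here for illustration)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# The two-megabyte file from the text becomes about 1437 small pieces.
two_megabytes = b"x" * (2 * 1024 * 1024)
chunks = split_into_datagrams(two_megabytes)
print(len(chunks))  # -> 1437
```

Reassembling the chunks in order gives back exactly the original data, which is the other half of the transport layer's job.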
Internet Protocol
The Internet Protocol (IP) layer gets the datagrams (the parts of a file that have been split up) from TCP (or whatever transport protocol is being used), adds some of its own information and does the actual logical transfer over the Internet. Simplified, this can be described as TCP making sure everything gets through, while IP actually makes it happen.
IP only does the logical transfer. This might sound weird, but it isn't. Since the Internet spans many different types of networks, such as Ethernet and Token Ring, which all have their own way of communicating, there is a need for a layer that can work on top of them all. The Internet consists of many networks connected to each other; some networks might have connections to many other networks while others only have one route out to the rest of the Internet. It is the Internet Protocol's job to find out how to move the data between the different networks.
Routing
Finding out how to move this data between networks is called routing. Where two networks are connected to each other there is a router or a gateway; these are used to move data between the two networks. So what IP has to do is to check if the target computer is in the same network and, if so, just send the data away. If not, it must find out which route to take. For this it uses a routing table, a list of IP addresses and the gateways they should be sent to. If there isn't an entry for the target IP, the data is sent to the default route, the gateway that is most likely to be the correct one. Once IP knows where to send the datagram, it does so, and it is then that network's responsibility to get the information to the correct computer or onwards to another network.
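A minimal sketch of the routing-table lookup might look like the following. Real routers match network prefixes rather than complete addresses, and the addresses and gateway names here are made up for illustration:

```python
def route(destination_ip: str, routing_table: dict) -> str:
    """Pick the gateway for a destination the way IP does: use the
    table entry if one exists, otherwise fall back to the default route."""
    return routing_table.get(destination_ip, routing_table["default"])

# Hypothetical routing table: known destinations and a default route.
table = {
    "130.92.112.96": "gateway-a",
    "default": "gateway-b",
}

print(route("130.92.112.96", table))  # -> gateway-a (explicit entry)
print(route("10.1.2.3", table))       # -> gateway-b (default route)
```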
Subnets
Networks on the Internet are called subnets, and a subnet can have its own subnets. A large university can for example be a subnet of the Internet and have subnets for each faculty. The purpose of making small networks is to stop one malfunctioning hardware device from stopping the entire network. This is part of the overall Internet strategy: there should always be a way out. If one connection goes down, there is always another. When IP decides whether a host is located within the current subnet, it looks at the IP address and analyses it.
Every computer connected to the Internet must have its own IP address, and because networks have different sizes, i.e. numbers of hosts, there are different network classes. The system is constructed in such a way that Class A networks may have many subnets and hosts, Class B networks fewer, and the class system ranges down to smaller and smaller classes, each with fewer hosts. All the connected networks have been assigned one or two network ranges from a central authority, within which they can decide which computer gets which number. A company that requests a certain amount of IP addresses but does not need an entire Class B network can then be assigned two or three Class C networks. The point of not giving out more IP addresses than necessary is that the Internet is starting to run low on them.
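The class of a network can be read directly from the first number of the IP address. A small sketch of the classful rules (which were later replaced by classless routing) could look like this:

```python
def network_class(ip: str) -> str:
    """Classify an IPv4 address by its first octet, following the
    classful system described above."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"    # few huge networks, very many hosts each
    if first < 192:
        return "B"
    if first < 224:
        return "C"    # many small networks, few hosts each
    return "D/E"      # multicast and experimental ranges

print(network_class("10.1.2.3"))       # -> A
print(network_class("130.92.112.96"))  # -> B
print(network_class("192.0.2.17"))     # -> C
```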
Ethernet
Ethernet is a very popular and widely spread type of Local Area Network. The most common form of Ethernet is called 10BaseT, which denotes a maximum transmission speed of 10 Mbps using copper twisted-pair cables. Recent enhancements of Ethernet bump the speed to a maximum of 100 Mbps; this system is called 100BaseT.
When Ethernet was designed, one of the goals was to make sure that two computers could not share the same address. Because of this, every Ethernet network interface card (NIC) sold has its own unique Ethernet address consisting of 48 bits (a bit is either 0 or 1), and all Ethernet manufacturers have to register with a central authority that monitors this.
We will again use our browser example. Let's say that you have requested a small text document from www.internet.com and the server sends it over to you (mycomputer.network.se). First of all, the server will add information about what is sent using the HyperText Transfer Protocol, discussed earlier. This will tell your application what data the packet contains. Next, TCP will take all that information, add its own headers to it and send it all down to the IP level. IP will also add its own headers, as each protocol layer only understands its surrounding neighbours: Ethernet will not understand TCP headers, and HTTP will not understand IP headers. The IP layer will now find out how the packet is to be sent, most likely through Ethernet, so it passes the packet down to Ethernet. Once the Ethernet package reaches mycomputer.network.se, Ethernet will remove its headers and send the packet back up to IP. IP will then do the same and give the information to TCP. As you can read from its name, Transmission Control Protocol, TCP checks that the information hasn't got corrupted while being transferred; if it has, it asks the server to send it again. If it is all right, TCP will remove its headers and give the information to your browsing software, which removes the HTTP headers, and you can now see the text file in your browser!
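This walk-through can be sketched as a toy program in which each layer wraps the data in its own header on the way down and unwraps it on the way up. The bracketed headers are placeholders, not real protocol formats:

```python
# Layers in order from the application down to the wire.
LAYERS = ["HTTP", "TCP", "IP", "Ethernet"]

def send(payload: str) -> str:
    """Wrap the payload in one placeholder header per layer, innermost
    (HTTP) first, so Ethernet's header ends up on the outside."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def receive(packet: str) -> str:
    """Unwrap in the opposite order: each layer strips only its own
    header and passes the rest upwards."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert packet.startswith(header), "each layer understands only its own header"
        packet = packet[len(header):]
    return packet

print(send("Hello"))           # -> [Ethernet][IP][TCP][HTTP]Hello
print(receive(send("Hello")))  # -> Hello
```

The round trip returning the original text mirrors the browser example: what the application handed in at one end is exactly what comes out at the other.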
Behind all of this it isn't such a simple matter. As we discussed earlier, large files have to be split into smaller pieces so they can be transferred easily. This wasn't completely true: almost everything that is transferred must be split, or fragmented. Every physical network type has its own limit on how big packages it accepts, and if a larger one arrives, that network must handle the splitting and re-assembly, even on packages that might already be split. Sending out too large packets can therefore make Internet transfers slower, as hardware in other places must do extra work, besides the increased traffic volume this generates.
ICMP
The Internet has many other protocols than the ones we have discussed so far. ICMP, or Internet Control Message Protocol, could be described as the Internet's error reporting protocol. If a packet of data takes too long to deliver, an ICMP message will be sent to the sender telling it what happened. Also, if a system tries to transfer some data to a network outside the local one through the default router, and that router has been told there is a better way to the target, the source will receive a reply stating so. The Internet Control Message Protocol really is what its name says, a message protocol for reporting errors; it doesn't find errors itself.
UDP
We talked about UDP, the User Datagram Protocol, before, and we said it was less reliable and faster. It is less reliable because its headers are smaller and it has fewer features to verify that the transferred information is correct. While TCP is what is called a connection protocol (in other words, both computers talking respond to each other's data so they both know everything worked), UDP is connectionless. This means that, for example, an audio stream from a live radio show is sent to the listener just like in real broadcast radio: ready or not, we're transmitting now. It is basically up to the listener to make sure he is ready to receive. This of course means loss; some data will not reach the listener, and that is what makes UDP less reliable but faster. UDP itself doesn't have any error checking, but the application using the protocol may; it is, however, then often easier to use TCP, which has it built in.
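The connectionless style can be demonstrated with two UDP sockets on one machine. On the loopback interface the datagram is effectively never lost, so the example is safe to run, but over a real network the receiver would have no guarantee of delivery:

```python
import socket

# Minimal sketch of UDP's fire-and-forget style: no handshake takes
# place, the sender just transmits, ready or not.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # let the OS pick a free port
receiver.settimeout(5)                 # don't wait forever if lost
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"we're transmitting now", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)   # pick up whatever arrived
print(data)  # -> b"we're transmitting now"

sender.close()
receiver.close()
```

Note that sendto returns immediately whether or not anyone is listening; with TCP, the connection setup would have failed first.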
Ports
When TCP receives data from IP, it does not directly know how it should be sent to the application layer. Many Internet applications might be running, so there must be a way to find out which application wants what. This is done by using ports. When we connect with our browser to www.internet.com, our software knows we want to connect to the HTTP part of the server, since we are using the World Wide Web (it can also be specified explicitly by typing http://www.internet.com:80). To make sure the server knows we're requesting an HTTP document, we add the standardised port 80 to our request. With the request we also add the port we want the server to communicate with us through; this can be any free port. This way the TCP software on the two computers can get the data sorted out correctly. Different types of transferred data use different ports that are standardised to make sure there are no clashes.
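A few of these standardised assignments can be sketched as a lookup, mirroring what our software does when no port is typed into the URL. The entries below are well-known port numbers from the standard assignments:

```python
# A small excerpt of the standardised "well-known" port assignments.
WELL_KNOWN_PORTS = {
    "ftp": 21,      # File Transfer Protocol
    "telnet": 23,   # remote login
    "smtp": 25,     # e-mail delivery
    "http": 80,     # the World Wide Web
}

def default_port(protocol: str) -> int:
    """Return the standardised port for a protocol, the way browsing
    software fills in :80 when we only type http://."""
    return WELL_KNOWN_PORTS[protocol]

print(default_port("http"))  # -> 80
```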
Dawn of internetworking
The groundwork for the Internet was laid as early as 1957. That year the USSR launched the first satellite, Sputnik. To establish a lead in military science and technology, the US Department of Defence formed the Advanced Research Projects Agency, commonly known as ARPA. Later, in the 60's, ARPA started to study networks and how they could be used to spread information. In 1969 the first few networks were connected. The first system to send e-mail across a distributed network was developed in 1971 by Ray Tomlinson, and the telnet specifications (allowing users to log in on remote computers) arrived one year later. The first drafts for a network called Ethernet were created in '73, and a year later there was a detailed description of the Transmission Control Protocol. The Usenet newsgroups were created in 1979, and in 1982 the Department of Defence declared TCP/IP to be the standard. At this time the number of connected hosts was very low; in 1984 it broke the 1000 boundary. Three years later that number had grown to 10,000, but we are still far from the Internet explosion.
Most of this happened before computers were widely spread; IBM released its first PC, based on Intel's 8088 processor, in 1981. The Pentium processor family that is currently being phased out arrived in 1993. The users connected to the Internet at this time were researchers and students, connected through university networks.
A worm that infected computers on the Internet with a program that took up system resources (like memory) created a need for some sort of team that would try to find solutions to make such issues less dangerous. The team was called the Computer Emergency Response Team (CERT). They work by writing advisories and reports on how to avoid problems.
What most people tend to define the Internet as is the web. The World Wide Web standard was created in 1991 by CERN, and the predecessor to Netscape Navigator, Mosaic, saw the light of day two years later. Common people started to get Internet access in 1994-95; it is around those years that the numbers of hosts, domains and networks started to increase rapidly. Yet only a small part of the earth's population is connected.
The so-called browser war between Microsoft's Internet Explorer and Netscape's Navigator started in 1996 when the two companies released their 3.0 browsers. As this is being written there still isn't a winner, but Netscape has been forced to make its browser free (Microsoft's has always been), including the source code. Perhaps the US Justice Department will prevent Microsoft from giving its browser away; perhaps they will split the company into pieces. At the very least, it shows the future importance of the Internet when Microsoft embeds its browser into the core of its operating system.
Two of the important bodies here are the Internet Engineering Task Force and the Internet Research Task Force. The IETF handles all the current protocol standards and promotes further development; it also handles operation and management of the Internet. The IRTF is more of the Internet's future department. They take care of the future problems of the Internet and how they are to be handled. Among their work is how the net should handle billions of hosts, faster connections and wireless Internet. For this they have to look at new protocols and how they can be incorporated into the current system without major service interruptions.
Internet’s Future
To provide more IP addresses, the addresses in IPv6 have been expanded to 128 bits, giving approximately 340,282,366,920,938,463,463,374,607,431,768,211,456 theoretically available IP addresses. This is a limit that the engineers think we will stay below for quite some time.
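The quoted figure is simply two raised to the power of 128, one address for each combination of the 128 address bits:

```python
# Each of the 128 address bits doubles the number of possible addresses.
ipv6_addresses = 2 ** 128
print(ipv6_addresses)
# -> 340282366920938463463374607431768211456

# For comparison, IPv4's 32-bit addresses give about 4.3 billion.
ipv4_addresses = 2 ** 32
print(ipv6_addresses // ipv4_addresses)  # -> 2 ** 96 times as many
```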
IPv6 has been designed to enable high-performance, scalable internetworks to remain viable well into the next century. A large part of this design process involved correcting the inadequacies of IPv4. One major problem that has been fixed is routing. IPv6 does not use different network classes for routing; instead it uses a system that provides flexibility to expand networks while keeping routing quick. With so many addresses to work with, the addressing has been laid out so that addresses are first of all sorted by their major connection points. One such point is Sunet in Sweden, where all the major Swedish ISPs connect to each other as well as to foreign countries. Each ISP will then have a large address range that it can provide to companies, minor ISPs and dialup customers. This makes routing much easier; Internet backbone routers will no longer have to keep huge databases of over 40,000 entries.
Security
With IPv4 there isn't any security at the IP level. One of the design goals for version 6 is to provide authentication and encryption at a lower level. Previously, encryption had to be done at a higher level, usually the application layer. The authentication part makes sure that the information is actually coming from the source it claims to be from. This ensures that packets carrying valuable data or passwords cannot be spoofed (spoofing is a method of changing the source address to make a packet appear to come from a different host) by intruders. Encryption is done by adding extra headers to the IP packet with encryption keys and other handshaking information. This way every packet can be encrypted by itself at a lower level, preventing sniffers (programs that eavesdrop on network traffic) from accessing the information in the packet.
Neighbour Discovery
One of the major headaches for network administrators of large networks is managing IP addresses. The InterNIC wants to keep as many addresses as possible free for future usage, giving the administrators a lot of work tracking which addresses are used and which are free. When IPv6 is used on a network, such problems can be avoided. The protocol has a sort of autoconfiguration: when a host is connected to a network it will talk to the local router using a temporary IP address, and the router will tell the host what IP it should use. The router has previously been given a range of addresses by the system administrator. In the same way, if a network is moved or there is a change of ISP, resulting in a major IP change, the administrator reconfigures the router to the new IP range and it will then, through the Neighbour Discovery (ND) protocol, tell the hosts their new IPs.
Forwarding
To support highly dynamic situations in the future, IPv6 contains features for IP forwarding. When a user leaves work to go on a business trip, for example, he will log out from the local area network. The system will then tell the local router that all data to that user is to be forwarded to his laptop's IP instead of his work IP. Forwarding allows domain name entries to stay unchanged while the user is connected to a network on the other side of the earth.
Transition
When or if IPv6 makes it to the common market, the transition will not be too hard. The next generation protocol is created to work with the old version of IP. The first routers installed using the new protocol will also handle the old version, so IPv4 can talk to them during the transition period. The only dependency that exists is the DNS: when a subnet is upgraded to IPv6, the domain name server must also be updated to handle the new IP addresses. The network that the subnet is connected to does not have to be upgraded. If an IPv6 host connects to a different IPv6 host on a different subnet, where the data has to travel over an old IPv4 network, the data simply gets encapsulated with IPv4 headers. This method is called tunnelling. When the packet reaches the destination IPv6 network, the IPv4 headers will be removed by the router and the packet will be delivered to the correct IPv6 host. The old version four network will never know that it carried something it actually cannot handle.
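Tunnelling can be sketched as wrapping and unwrapping one extra header. The bracketed strings below are placeholders for real packet headers:

```python
# Toy sketch of tunnelling: an IPv6 packet is wrapped in an IPv4
# header to cross an old network, and unwrapped again at the far end.
def encapsulate(ipv6_packet: str) -> str:
    """Entering the IPv4 network: add an IPv4 header around the packet."""
    return "[IPv4]" + ipv6_packet

def decapsulate(ipv4_packet: str) -> str:
    """Leaving the IPv4 network: the router strips the IPv4 header."""
    assert ipv4_packet.startswith("[IPv4]")
    return ipv4_packet[len("[IPv4]"):]

packet = "[IPv6]some data"
print(encapsulate(packet))               # -> [IPv4][IPv6]some data
print(decapsulate(encapsulate(packet)))  # -> [IPv6]some data
```

As in the text, the IPv4 network only ever sees the outer header; the IPv6 packet travels through it untouched.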
6bone
Currently there is a virtual world-wide IPv6 network called 6bone, created to test implementations of IPv6 in a working environment without risking production routers and important systems. The network operates on top of the ordinary Internet using the tunnelling discussed earlier. 6bone is not, however, a new Internet that we will move to once IPv6 is ready for commercial use; it is just a playground for scientists, and it will disappear when IPv6 becomes widely used.
It might seem great with all these new categories, but will they actually matter? The owners of many domains today registered them to make a profit: by registering corporate or product names, they hope to sell them to the rightful owner later on. In the same way, they can register good domains like video or cd.store just by being quick to register, and then sell to the highest bidder. To stop domain opportunists, large corporations would also have to register their names under all the new top level domains, just the way they have done with the country domains. Most likely the new domains, whenever (or if) they arrive, will just create a storm of registrations, and all the sought-after domains will be taken immediately. Along with them there will also be the normal copyright disputes and so on that already exist with the .com domain.
Future connections
Many new connection forms are emerging as the demand for high-speed Internet grows. Users no longer wish to browse with slow modems. In this section we will look into some of the technologies that might become popular in the future.
Fixed connections
The modems most people use to connect to the Internet have a speed of 33.6 kbps (thousand bits per second); this gives a transfer rate of about 3 kB/sec (thousand bytes per second) on the Internet. When downloading files this is very slow. The phone lines in general support much higher communication speeds; here are some of them.
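The arithmetic behind these figures: a byte is eight bits, and some of the line speed is eaten by protocol headers and line conditions. The 20% overhead figure below is an assumption for illustration, not a measured value:

```python
def modem_throughput_kBps(kbps: float, overhead: float = 0.2) -> float:
    """Convert a modem's line speed in kilobits per second to an
    approximate useful transfer rate in kilobytes per second.
    Eight bits per byte; the overhead fraction (assumed here)
    accounts for protocol headers and retransmissions."""
    return kbps / 8 * (1 - overhead)

# A 33.6 kbps modem: 33.6 / 8 = 4.2 kB/s raw, roughly 3.4 kB/s useful.
print(round(modem_throughput_kBps(33.6), 1))  # -> 3.4
```

With a different overhead assumption the result lands nearer the "about 3 kB/sec" quoted above; the point is only that dividing by eight sets the ceiling.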
Satellite
Connections through satellite are starting to become available; their big pro is the high data transfer speed. Common users can expect speeds ranging from 400 to 800 kbps, while professional equipment can increase that speed dramatically, past 10 Mbps. The big con with satellites for consumer usage is that it is a one-way system: you will have to keep a modem connection open for communication back to the Internet (to request and acknowledge information). Another con is the latency; transferring data up to space takes a while, and this creates slight delays that could, for example, make gameplay over the Internet very tedious.
xDSL
The DSL family of technologies is, just like ISDN, an extension of your current phone line. DSL technology, however, provides much higher speeds, but it also requires technical upgrades at the local telephone station. Besides that, you cannot be too far from the telephone station, as background noise will disturb the signal, giving you much slower transfers than the 9 Mbps that ADSL can offer. Digital Subscriber Line, which is its long name, is probably one of the connection forms that will be popular in the future, as long as you live near the telephone station.
Cable Modem
The cables that are already laid out to handle cable TV can carry data very well. Many cable networks are only good at providing data in one direction: users connecting through a cable modem can get speeds of a couple of Mbps from the Internet, while sending might go down to a few hundred kbps. This differs widely depending on the system the cable operator is using.
Mobile connections
GSM
Just as the Internet wasn't created to grow like it did, the European mobile phone system, GSM (Global System for Mobile communications), wasn't created to handle data. The system is currently limited to 9.6 kbps, while normal telephone line modems can reach speeds up to 56 kbps. This makes mobile Internet access very limited: only e-mail messages can be sent and received at a reasonable speed, and browsing the www would be very slow. A connection to a separate mobile phone is no longer needed; telecommunication companies have phones with integrated computers as well as PC Cards with built-in phones. Fast communication over GSM is not very good yet, but by the year 2001 the GSM systems are expected to be enhanced for data transfer at 384 kbps.
Wireless Internet
There are systems designed specifically for wireless Internet. In Seattle, Washington DC and San Francisco, such systems consist of thousands of small transmitters on light poles. The system provides access all over the central area at ordinary modem speed. It is very flexible in the sense that you can move around freely in the city, and it requires only a small antenna on the special Ricochet modem. The system currently has more than 15,000 users, and bandwidth upgrades to provide higher data speeds can be expected in the future.
They do have a good point with this, as most of the devices that are planned to be connected to the home network are already connected to an electric cable. Also, the speed is currently the same as SWAP's, with improvements likely to come. Internet over electric lines also works for out-of-the-house connections, like browsing and e-mail. The great advantage is that everyone already has electric wiring, and it would only require minor changes to the power system and a small adapter at home.
Conclusion
As I've worked with this project I've formed pictures in my head about how we will be connected in the future and how we will use the Internet. The only thing that I can say I am really certain will happen is mobile Internet. Cell phone usage and Internet usage have exploded hand in hand. The same way that we want to travel and make calls with our cell phones, we will also want to travel and connect to the Internet. The problem with this is of course the bandwidth: wireless communication doesn't at present provide enough speed for useful work. Perhaps UMTS will be the solution to this. As fixed Internet connections start to provide enough bandwidth to support real TV and video broadcasting, users will want the same features in their flexible laptop computers. The question is: will the mobile Internet systems provide what the users want?
Connecting all our home devices into one local home network will be one of the great advantages of the Internet in the future. Letting all our home devices talk to each other has great advantages; just take, for example, a normal employee. Wouldn't it be great to have a camera in the fridge to check if anything needs to be bought on the way home from work? Or to start the coffee machine remotely and have fresh coffee ready every morning at breakfast and when work ends?
In the long run I think CDs, DVDs and other multimedia media will be phased out. When users start getting more bandwidth, such devices will become redundant. The Internet is a better platform for multimedia than storage discs can ever be: the information can be updated whenever needed, making patches and update discs unnecessary. An argument against this theory is that a DVD disc, with its gigabytes of data, will take a long time to transfer. In this lies the strength of the Internet: instead of sending all the data to the user in one large file, it will be streamed. This way the user can use one part of the multimedia application while the next part is being downloaded to the computer, making it ready for use when the user wants it.
Mixing the local home device network (as I like to call it) with the Internet can have certain qualities. One scenario would be stereos: instead of playing radio from standard FM broadcasts, radio signals from all over the world would arrive over the electricity line. No more CDs; when you want to listen to music, you start your TV, go through the menu system until you find the song you want, and it will be played over the Internet. Naturally the same goes for videos and games.
All of this of course has a price. The multinational corporations are racing to be first with a flexible working solution, as the winner can expect large incomes. Users will in the future not buy a CD; perhaps they could buy unlimited listening to it, but the standard would be pay-per-use.
I am quite certain that some time in the future we will have this scenario; what I am uncertain about, however, is whether the personal computer will go away. We won't have to type on a keyboard forever, eventually we will get rid of it, but don't we want some kind of personal storage space that we know is ours, not publicly available to everyone over the Internet? I don't really believe in Sun's Network Computers; I am more for an intermediate solution of networked computers that don't need CD-ROM or floppy drives; all they would need is a harddrive to store information and a network interface card for Internet access.
Working with this project has been very fun. Not only have I learned a lot; hopefully this small report about the Internet can also teach others. When I was finally finished writing, something struck me: all the information I used from the Internet, I had printed on paper. Even if everyone is connected in the future and all the devices around us communicate with each other, people will still want to be able to sit back, relax and enjoy a good book. A physical one, made of real paper and printed with black ink.
List of references
Paul Simoneau: “Hands-On TCP/IP”
McGraw-Hill 1997, ISBN 0-07-912640-5
“IPv6”
http://whatis.com/ipv6.htm
“WCDMA in brief”
http://www.ericsson.se/wcdma/wcdma/sub_intr/wcdma_in_brief.htm, 24 October 1997
“The compelling case for Wideband CDMA for next-generation mobile Internet and multimedia”
http://www.imt-2000.com/wcdma/wcdma/sub_tech/brochures/cdma.htm, 18 March 1998