
Open Source Programming

"Open source is a set of principles and practices that promote access to the design and production process for various goods, products, resources and technical conclusions or advice. The term is most commonly applied to the source code of software that is made available to the general public with relaxed or non-existent intellectual property restrictions. This allows users to create software content through incremental individual effort or through collaboration."

1.0 Introduction

Open source software (OSS) is computer software for which the source code, and certain other rights normally reserved for copyright holders, are provided under a software license that meets the Open Source Definition, or that is in the public domain. This permits users to use, change, and improve the software, and to redistribute it in modified or unmodified forms. It is very often developed in a public, collaborative manner. Open source software is the most prominent example of open source development and is often compared to user-generated content. The term "open source software" originated as part of a marketing campaign for free software.

Hardware is a general term for the physical artifacts of a technology; it may also mean the physical components of a computer system, in the form of computer hardware. Software is a general term for a collection of computer programs, procedures and documentation that perform tasks on a computer system. Software includes websites, applications and other programs written in programming languages.

2.0 The Latest Open Source Operating System (OS)

2.1 Meaning of open source OS (OSOS)

An OSOS is an operating system that is provided for use, modification and redistribution. It can be downloaded from the Internet for free and modified with suggested improvements.
2.2 Examples of open source OS

a) Haiku

Haiku is an open source operating system, currently in development, designed from the ground up for desktop computing. Inspired by the Be Operating System (BeOS), Haiku aims to provide users of all levels with a personal computing experience that is simple yet powerful, and free of any unnecessary complexities. Haiku is developed mostly by volunteers around the world in their spare time. Besides the project admins and core members, Haiku also exists thanks to the dedicated support of a fervent and friendly community, and of Haiku Inc., a non-profit founded by project leader Michael Phipps to support the development of Haiku. Haiku, formerly known as OpenBeOS, is dedicated to the re-creation and continuation of the Be Operating System on x86 and PowerPC based computers.

b) Syllable

Syllable is a free and open source operating system for Intel x86 Pentium and compatible processors. Its purpose is to create an easy-to-use desktop operating system for the home and small office user. It was forked from the stagnant AtheOS in July 2002. It has a native web browser (ABrowse), email client (Whisper), media player, IDE, and many more applications. Features include:

- Native 64-bit journaled file system, the AtheOS File System (usually called AFS, not to be confused with the Andrew File System)
- C++ oriented API
- Legacy-free, object-oriented graphical desktop environment on a native GUI architecture
- 99% POSIX compliance
- Many software ports (Emacs, Vim, Perl, Python, Apache, etc.)
- GNU toolchain (GCC, Glibc, Binutils, Make)
- Preemptive multitasking with multithreading
- Symmetric multiprocessing (multiple processor) support
- Device drivers for most common hardware (video, sound, network chips)
- File system drivers for FAT (read/write), NTFS (read) and ext2 (read)

3.0 The Latest Open Source Application Software

3.1 Meaning of open source application software (OSAS)

OSAS is application software (a computer program, or suite of computer programs, that performs a particular function for the user) whose source code is available to users, who have the right to modify it; it is also free to download and use.

3.2 Examples of open source application software (OSAS)

a) GNOME Office

GNOME Office provides three "best in class" productivity applications available as GNU Free Software. The days of wrestling with file formats, compatibility, and "halfway-there" features are over. The AbiWord word processor, Gnumeric spreadsheet, and Gnome-DB data access components allow you to get it done now.

b) NeoOffice

NeoOffice is a fork of the free/open source OpenOffice.org office suite, ported to Mac OS X.
It implements nearly all of the features of the corresponding OpenOffice.org version, including a word processor, spreadsheet, presentation program, and graphics program. It is developed by Planamesa Software and uses Java technology to integrate OpenOffice.org, originally developed for Solaris and Linux, with the Aqua interface of Mac OS X. NeoOffice was originally released under both the LGPL and SISSL; it is now released solely under the LGPL.

4.0 Latest Development in ICT

4.1 Hardware (specification and special features of ONE hardware item, compared to a previous model)

a) Canon Pixma MP280

The Canon Pixma MP280 is an entry level all-in-one printer, scanner, and copier with enhanced photo printing capabilities. It was the budget-friendly model out of the line of eight photo printers released by Canon in 2010. Taking on the new Pixma photo printer look, it features a simple glossy black design with silver accents, just like the previously reviewed sister product, the Canon Pixma MP495. Canon has included Full HD Movie Print and photo editing software, as well as the Easy Photo Print app for Android smartphones, in the package. Moreover, this photo all-in-one offers high resolution color photo capabilities with a maximum of 4800 x 1200 dpi and is Energy Star certified. It is currently selling for only $70 from Canon's website; keep reading to find out if this is the right printer for you. Our reviews include an overview of specifications, testing results, a summary of the build and design, and more.

b) Canon Pixma MX360

The Canon Pixma MX360 was first introduced by Canon in January during CES 2011. It is an entry level model designed for small office/home office use, alongside the Pixma MX410. We have already reviewed the leading flagship inkjet out of the bunch, the Pixma MX882, and the inkjet that is one step down from it, the Pixma MX420. Now we will take a look at the MX360. This four-in-one can print, scan, copy, and fax, and connects to a computer via Hi-Speed USB 2.0 (note that the MX410 has wireless connectivity). It has an automatic document feeder that can fit up to 30 sheets and a 100-sheet rear feed tray. We tested the MX360 and found that it can print up to 8 black and white pages per minute under the default settings. The MX360 has a list price of $79.99; check out our full review below to find out if this is the suitable inkjet for your office at that price.

4.2 Software (year/date of release and special features of ONE software item, compared to previous versions)

VLC Media Player 2.0.6
- File size: 21.88 MB
- Requirements: Windows 2000 / XP / Vista / Windows 7 / XP 64 / Vista 64 / Windows 7 64 / Windows 8 / Windows 8 64
- Release date: April 12, 2013
- License: Open source
- Can play: MPEG-1, MPEG-2 and MPEG-4; DVDs, VCDs, and Audio CDs; several types of network streams such as HTTP, RTSP, MMS, etc.; input from satellite cards (DVB-S)

VLC Media Player 2.0.0
- File size: 20.99 MB
- Requirements: Windows XP / Vista / Windows 7 / Windows 8
- Release date: February 20, 2012
- License: Open source
- Can play: MPEG-1, MPEG-2 and MPEG-4; DVDs, VCDs, and Audio CDs; several types of network streams such as HTTP, RTSP, MMS, etc.

5.0 Pervasive Computing

5.1 Meaning of pervasive computing

Pervasive computing refers to the use of computers in everyday life, including PDAs, smart phones and other mobile devices.
It also refers to computers contained in commonplace objects such as cars and appliances, and implies that people are unaware of their presence. One of the holy grails of this environment is that all these devices communicate with each other over wireless networks without any interaction required by the user.

5.2 Examples of pervasive computing

a) Smart TV (e.g. Samsung Smart TV)

Smart TV, sometimes referred to as "Connected TV" or "Hybrid TV" (not to be confused with Internet TV, Web TV, or LG Electronics's "SMART TV" branded NetCast Entertainment Access devices), is the phrase used to describe the current trend of integrating the Internet and Web 2.0 features into modern television sets and set-top boxes, as well as the technological convergence between computers and these television sets/set-top boxes. These new devices most often have a much higher focus on online interactive media, Internet TV, over-the-top content, and on-demand streaming media, and less focus on traditional broadcast media than previous generations of television sets and set-top boxes have had. The technology that enables Smart TVs is incorporated not only into television sets, but also into devices such as set-top boxes, Blu-ray players, game consoles, and other companion devices. These devices allow viewers to search and find videos, movies, photos and other content on the web, on a local cable TV channel, on a satellite TV channel, or stored on a local hard drive.

b) GPS (e.g. Garmin GPS)

The Global Positioning System (GPS) is a space-based global navigation satellite system (GNSS) that provides location and time information in all weather, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. It is maintained by the United States government and is freely accessible to anyone with a GPS receiver. The GPS project was started in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s. GPS was created and realized by the U.S. Department of Defense (USDOD) and was originally run with 24 satellites. It became fully operational in 1994.

6.0 Conclusion

Open source software can be used by anyone: because it carries no restrictive copyright claims, users are free to use, change, and improve the software, and to redistribute it in modified or unmodified forms. Pervasive computing makes our lives easier because computers are embedded in the things around us; we can easily give a computer commands and have it carry them out.

Computer network

From Wikipedia, the free encyclopedia


A computer network or data network is a telecommunications network that allows computers to exchange data. In computer networks, networked computing devices pass data to each other along data connections. The connections (network links) between nodes are established using either cable media or wireless media. The best-known computer network is the Internet. Network devices that originate, route and terminate the data are called network nodes.[1] Nodes can include hosts such as personal computers, phones, and servers, as well as networking hardware. Two such devices are said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. Computer networks support applications such as access to the World Wide Web, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications. Computer networks differ in the physical media used to transmit their signals, the communications protocols used to organize network traffic, the network's size, topology and organizational intent.

Contents

1 History
2 Properties
3 Network topology
  3.1 Network links
  3.2 Network nodes
  3.3 Network structure
4 Communications protocols
  4.1 Ethernet
  4.2 Internet Protocol Suite
  4.3 SONET/SDH
  4.4 Asynchronous Transfer Mode
5 Geographic scale
6 Organizational scope
  6.1 Intranets
  6.2 Extranet
  6.3 Internetwork
  6.4 Internet
  6.5 Darknet
7 Routing
8 Network service
9 Network performance
  9.1 Quality of service
  9.2 Network congestion
  9.3 Network resilience
10 Security
  10.1 Network security
  10.2 Network surveillance
  10.3 End to end encryption
11 Views of networks
12 See also
13 References
14 Further reading
15 External links


History[edit]

In the late 1950s, early networks of communicating computers included the military radar system Semi-Automatic Ground Environment (SAGE).

In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes.

In 1962, J.C.R. Licklider developed a working group he called the "Intergalactic Computer Network", a precursor to the ARPANET, at the Advanced Research Projects Agency(ARPA).

In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at Massachusetts Institute of

Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections.

Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network.

In 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network (WAN). This was an immediate precursor to the ARPANET, of which Roberts became program manager.

Also in 1965, the first widely used telephone switch that implemented true computer control was introduced by Western Electric.

In 1969, the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET network using 50 kbit/s circuits.[2]

In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks.

In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system that was based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks"[3] and collaborated on several patents received in 1977 and 1978. In 1979, Robert Metcalfe pursued making Ethernet an open standard.[4]

In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.

In 1995, the transmission speed capacity for Ethernet was increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a Gigabit. The ability of Ethernet to scale easily (such as quickly adapting to support new fiber optic cable speeds) is a contributing factor to its continued use today.[4]

Today, computer networks are the core of modern communication. All modern aspects of the public switched telephone network (PSTN) are computer-controlled. Telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade. This boom in communications would not have been possible without the progressively advancing computer network. Computer networks, and the technologies that make communication between networked computers possible, continue to drive computer hardware, software, and peripherals industries. The expansion of related industries is

mirrored by growth in the numbers and types of people using networks, from the researcher to the home user.

Computer networking may be considered a branch of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines.

Properties[edit]

A computer network has the following properties:

Facilitates interpersonal communications
People can communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.

Allows sharing of files, data, and other types of information
Authorized users may access information stored on other computers on the network. Providing access to information on shared storage devices is an important feature of many networks.

Allows sharing of network and computing resources
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer. Distributed computing uses computing resources across a network to accomplish tasks.

May be insecure
A computer network may be used by crackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network (denial of service).

May interfere with other technologies
Power line communication strongly disturbs certain[5] forms of radio communication, e.g., amateur radio. It may also interfere with last mile access technologies such as ADSL and VDSL.

May be difficult to set up
A complex computer network may be difficult to set up. It may also be costly to set up an effective computer network in a large organization.

Network topology[edit]
Main article: Network topology

The physical layout of a network is usually less important than the topology that connects network nodes. Most diagrams that describe a physical network are therefore topological, rather than geographic. The symbols on these diagrams usually denote network links and network nodes.

Network links[edit]
The communication media used to link devices to form a computer network include electrical cable (HomePNA, power line communication), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2: the physical layer and the data link layer. A widely adopted family of communication media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.

Wired technologies


Fiber optic cables are used to transmit light from one computer/network node to another
The following wired technologies are ordered, roughly, from slowest to fastest transmission speed.

Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios.

Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.

ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.

An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents.

Wireless technologies


Computers are very often connected to networks using wireless links

Main article: Wireless network

Terrestrial microwave Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.

Communications satellites Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.

Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.

Radio and spread spectrum technologies Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.

Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.

Exotic technologies


There have been various attempts at transporting data over exotic media: IP over Avian Carriers was a humorous April Fools' Day Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.[6]

Extending the Internet to interplanetary dimensions via radio waves.[7]

Both cases have a large round-trip delay time, which gives slow two-way communication, but doesn't prevent sending large amounts of information.

Network nodes[edit]
Main article: Node (networking)

Apart from the physical communications media described above, networks comprise additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls.

Network interfaces


An ATM network interface in the form of an accessory card. Many network interfaces are built in.
A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry. The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole. In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address, usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
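The three-octet split described above can be sketched in Python (`split_mac` is a hypothetical helper for illustration, not a standard API):

```python
def split_mac(mac: str):
    """Split a MAC address into its manufacturer (OUI) and
    device-specific halves, per the three-octet split described above."""
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError("expected six two-digit octets")
    oui = ":".join(octets[:3])     # prefix assigned by the IEEE to the manufacturer
    device = ":".join(octets[3:])  # suffix assigned by the manufacturer
    return oui, device

oui, device = split_mac("00:1A:2B:3C:4D:5E")
print(oui, device)  # 00:1a:2b 3c:4d:5e
```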

Repeaters and hubs

A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise, and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart. A repeater with multiple ports is known as a hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule. Hubs have been mostly obsoleted by modern switches, but repeaters are still used for long distance links, notably undersea cabling.



Bridges

A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks. Bridges come in three basic types:

Local bridges: directly connect LANs.

Remote bridges: can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.

Wireless bridges: can be used to join LANs or connect remote devices to LANs.



Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the MAC addresses in the frames.[8] A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge.[9] It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches. Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).
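The learn-and-forward behaviour described above can be sketched as a toy Python class (illustrative only, not a real switch implementation; MAC addresses and port numbers are made up):

```python
class LearningSwitch:
    """Minimal sketch of a MAC-learning layer 2 switch."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # learned mapping: MAC address -> physical port

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: associate the frame's source MAC with its ingress port.
        self.table[src_mac] = in_port
        # Forward: a known destination goes out exactly one port; an
        # unknown destination is flooded to every port except the source.
        if dst_mac in self.table:
            return {self.table[dst_mac]}
        return self.ports - {in_port}

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame("aa", "bb", in_port=1))  # unknown "bb": flood {2, 3, 4}
print(sw.handle_frame("bb", "aa", in_port=2))  # "aa" was learned: {1}
```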



A typical home or small office router showing

the ADSL telephone line andEthernet network cable connections

Routers

A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. (A destination in a routing table can include a "null" interface, also known as the "black hole" interface, because data can go into it but no further processing is done for said data.)
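Route selection, including the "null" (black-hole) destination mentioned above, can be sketched with Python's standard ipaddress module using longest-prefix match, the rule routers use to pick among overlapping table entries (prefixes and next-hop addresses are made-up examples):

```python
import ipaddress

# Toy routing table: prefix -> next hop. None models the black-hole
# ("null") interface described above; 0.0.0.0/0 is the default route.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("10.9.9.0/24"): None,
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.254",
}

def next_hop(dst: str):
    """Pick the matching route with the longest prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))  # 192.168.1.2 (the /16 beats the /8)
print(next_hop("10.9.9.7"))  # None (dropped into the black hole)
print(next_hop("8.8.8.8"))   # 192.168.1.254 (default route)
```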



Modems

Modems (modulator-demodulators) are used to connect network nodes via wiring not originally designed for digital network traffic, or for wireless. To do this, one or more frequencies are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using Digital Subscriber Line technology.



Firewalls

A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.

Network structure[edit]
Network topology is the layout or organizational hierarchy of interconnected nodes of a computer network. Different network topologies can affect throughput, but reliability is often more critical. With many technologies, such as bus networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is, but the more expensive it is to install.

Common layouts


Common network topologies

Common layouts are:

A bus network: all nodes are connected to a common medium, and data sent by any node travels along this medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.

A star network: all nodes are connected to a special central node. This is the typical layout found in a Wireless LAN, where each wireless client connects to the central Wireless access point.

A ring network: each node is connected to its left and right neighbour node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.

A mesh network: each node is connected to an arbitrary number of neighbours in such a way that there is at least one traversal from any node to any other.

A fully connected network: each node is connected to every other node in the network.

A tree network: nodes are arranged hierarchically.

Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is often a star, because all neighboring connections can be routed via a central physical location.
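The common layouts above can be modelled as adjacency lists; counting links illustrates the robustness-versus-cost trade-off noted under Network structure (helper names are illustrative, not a standard API):

```python
def ring(n):
    """Ring: each node links to its left and right neighbours."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def star(n):
    """Star: node 0 is the central node; all others connect only to it."""
    adj = {0: set(range(1, n))}
    adj.update({i: {0} for i in range(1, n)})
    return adj

def full_mesh(n):
    """Fully connected: every node connects to every other node."""
    return {i: set(range(n)) - {i} for i in range(n)}

def links(adj):
    """Number of undirected links (each link appears in two adjacency sets)."""
    return sum(len(neighbours) for neighbours in adj.values()) // 2

# More links means more redundancy but higher installation cost.
for name, topo in [("ring", ring(6)), ("star", star(6)), ("mesh", full_mesh(6))]:
    print(name, links(topo))  # ring 6, star 5, mesh 15
```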

Overlay network


A sample overlay network

An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.[10] Overlay networks have been around since the invention of networking, when computer systems were connected over telephone lines using modems, before any data network existed. The most striking example of an overlay network is the Internet itself: it was initially built as an overlay on the telephone network.[10] Even today, at the network layer, each node can reach any other by a direct connection to the desired IP address, thereby creating a fully connected network. The underlying network, however, is composed of a mesh-like interconnect of sub-networks of varying topologies (and technologies). Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.

Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.

Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they require modification of all routers in the network.[citation needed] On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination. For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast,[11] resilient routing and quality of service studies, among others.
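The distributed hash table mentioned above can be sketched with consistent hashing: keys and node names are placed on a hash ring, and each key is stored on the first node clockwise from its position. The node names and key below are illustrative only:

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    """Map a key or node name onto a 32-bit identifier circle."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

class ToyDHT:
    """Consistent-hashing overlay: each key lives on the first node
    clockwise from the key's position on the hash ring."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        pos = bisect_right(self.ring, (h(key),))  # first node with hash >= key hash
        return self.ring[pos % len(self.ring)][1]  # wrap around the circle

dht = ToyDHT(["node-a", "node-b", "node-c"])
owner = dht.node_for("some-file.txt")
assert owner in {"node-a", "node-b", "node-c"}
```

Because the mapping depends only on hashes, any participant can compute which node owns a key without consulting a central directory, which is what makes such overlays scale.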

Communications protocols

The TCP/IP model or Internet layering scheme and its relation to common protocols often layered on top of it.

A communications protocol is a set of rules for exchanging information over network links. In a protocol stack (also see the OSI model), each protocol leverages the services of the protocol below it. An important example of a protocol stack is HTTP running over TCP over IP over IEEE 802.11. (TCP and IP are members of the Internet Protocol Suite. IEEE 802.11 is a member of the Ethernet protocol suite.) This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.

Whilst the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers[12] for two principal reasons. Firstly, abstracting the protocol stack in this way may cause a higher layer to duplicate functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis.[13] Secondly, it is common that a protocol implementation at one layer may require data, state or addressing information that is only present at another layer, thus defeating the point of separating the layers in the first place. For example, TCP uses the ECN field in the IPv4 header as an indication of congestion; IP is a network layer protocol whereas TCP is a transport layer protocol.

Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing. There are many communication protocols, a few of which are described below.
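The HTTP-over-TCP-over-IP-over-802.11 stack described above can be sketched as successive encapsulation: each layer wraps the payload handed down from the layer above with its own header. The field names, addresses, and ports below are simplified illustrations, not real wire formats:

```python
# Toy protocol layering: each function adds one layer's "header".
def http_layer(body: str) -> str:
    return f"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n{body}"

def tcp_layer(segment: str, src=49152, dst=80) -> dict:
    return {"src_port": src, "dst_port": dst, "payload": segment}

def ip_layer(packet: dict, src="192.0.2.1", dst="198.51.100.7") -> dict:
    return {"src_ip": src, "dst_ip": dst, "payload": packet}

def link_layer(frame: dict) -> dict:  # e.g. an IEEE 802.11 frame
    return {"bssid": "02:00:00:00:00:01", "payload": frame}

# Build the full stack, then peel the layers back off:
frame = link_layer(ip_layer(tcp_layer(http_layer(""))))
assert frame["payload"]["payload"]["payload"].startswith("GET / HTTP/1.1")
```

Each layer only inspects its own header and treats everything inside as opaque payload, which is exactly the separation of concerns that the layering criticism in the text says is sometimes violated in practice (as with TCP reading the IP-level ECN field).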

Ethernet

Ethernet is a family of protocols used in LANs, described by a set of standards together called IEEE 802 published by the Institute of Electrical and Electronics Engineers. It has a flat addressing scheme. It operates mostly at levels 1 and 2 of the OSI model. For home users today, the most well-known member of this protocol family is IEEE 802.11, otherwise known as Wireless LAN (WLAN). The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol, IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol, which forms the basis for the authentication mechanisms used in VLANs (but also found in WLANs); it is what the home user sees when the user has to enter a "wireless access key".

Internet Protocol Suite

The Internet Protocol Suite, also called TCP/IP, is the foundation of all modern internetworking. It offers connection-less as well as connection-oriented services over an inherently unreliable network traversed by datagram transmission at the Internet protocol (IP) level. At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability.
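The enlarged addressing capability of IPv6 is easy to see with Python's standard-library ipaddress module (the example prefixes are the documentation ranges 192.0.2.0/24 and 2001:db8::/32):

```python
import ipaddress

v4 = ipaddress.ip_network("192.0.2.0/24")   # an IPv4 documentation prefix
v6 = ipaddress.ip_network("2001:db8::/32")  # an IPv6 documentation prefix

assert v4.num_addresses == 256         # 2^(32-24) addresses in a /24
assert v6.num_addresses == 2 ** 96     # 2^(128-32): a vastly larger space
assert ipaddress.ip_address("192.0.2.17") in v4
assert ipaddress.ip_address("2001:db8::1") in v6
```

The same module parses, compares, and subnets both address families, mirroring how the protocol suite treats IPv4 and IPv6 as parallel network-layer protocols.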

SONET/SDH

Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM (Pulse-Code Modulation) format. However, due to their protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.

Asynchronous Transfer Mode

Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit- and packet-switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins. While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user. For an interesting write-up of the technologies involved, including the deep stacking of communications protocols used, see.[14]
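ATM's fixed-sized cells are 53 bytes: a 5-byte header plus 48 bytes of payload. The segmentation of a variable-length message into such cells can be sketched as follows (the VCI value and padding rule are simplified illustrations, not the full AAL segmentation procedure):

```python
CELL_PAYLOAD = 48  # bytes of payload per ATM cell; the header adds 5 more

def segment(data: bytes, vci: int = 42) -> list:
    """Split a variable-length message into fixed-size cells,
    zero-padding the final cell -- a simplified view of ATM segmentation."""
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        cells.append({"vci": vci, "payload": chunk})
    return cells

cells = segment(b"x" * 100)
assert len(cells) == 3  # 100 bytes of data need three 48-byte cells
assert all(len(c["payload"]) == CELL_PAYLOAD for c in cells)
```

The fixed cell size is what makes switching latency predictable, which is why ATM suits the mixed voice/video/data traffic the text describes.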

Geographic scale
A network can be characterized by its physical capacity or its organizational purpose. Use of the network, including user authorization and access rights, differs accordingly.

Personal area network

A personal area network (PAN) is a computer network used for communication among computers and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters.[15] A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.

Local area network

A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Wired LANs are most likely based on Ethernet technology. Newer standards such as ITU-T also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.[16]

A LAN is depicted in the accompanying diagram. All interconnected devices use the network layer (layer 3) to handle multiple subnets (represented by different colors). Those inside the library have 10/100 Mbit/s Ethernet connections to the user device and a Gigabit Ethernet connection to the central router. They could be called layer 3 switches, because they only have Ethernet interfaces and support the Internet Protocol. It might be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic networks' customer access routers. The defining characteristics of a LAN, in contrast to a wide area network (WAN), include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 10 Gbit/s. The IEEE investigates the standardization of 40 and 100 Gbit/s rates.[17] A LAN can be connected to a WAN using a router.

Home area network

A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or digital subscriber line (DSL) provider.

Storage area network

A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.

Campus area network

A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant/owner (an enterprise, university, government, etc.). For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.

Backbone network

A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or sub-networks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it. Another example of a backbone network is the Internet backbone, which is the set of wide area networks (WANs) and core routers that tie together all networks connected to the Internet.

Metropolitan area network

A metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.

Wide area network

A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or spans even intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

Enterprise private network

An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.

Virtual private network

A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features. A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

Global area network

A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[18]

Organizational scope
Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.


An intranet is a set of networks that are under the control of a single administrative entity. The intranet uses the IP protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. An intranet is also anything behind the router on a local area network.

An extranet is a network that is also under the administrative control of a single organization, but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. Network connection to an extranet is often, but not always, implemented via WAN technology.

An internetwork is the connection of multiple computer networks via a common routing technology using routers.


Partial map of the Internet based on January 15, 2005 data. Each line is drawn between two nodes, representing two IP addresses. The length of the lines is indicative of the delay between those two nodes. This graph represents less than 30% of the Class C networks reachable.
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW). Participants in the Internet use a diverse array of methods of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

A Darknet is an overlay network, typically running on the internet, that is only accessible through specialized software. A darknet is an anonymizing network where connections are made only between trusted peers sometimes called "friends" (F2F)[19] using nonstandard protocols and ports. Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.[20]


Routing calculates good paths through a network for information to take. For example, from node 1 to node 6 the best routes are likely to be 1-8-7-6 or 1-8-10-6, as these have the thickest links.

Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks. In packet switched networks, routing directs packet forwarding (the transit of logically addressed network packets from their source toward their ultimate destination) through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.

There are usually multiple routes that can be taken, and to choose between them, different elements can be considered to decide which routes get installed into the routing table, such as (sorted by priority):

1. Prefix length: longer subnet masks are preferred (independent of whether it is within one routing protocol or between different routing protocols)

2. Metric: a lower metric/cost is preferred (only valid within one and the same routing protocol)

3. Administrative distance: a lower distance is preferred (only valid between different routing protocols)
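The longest-prefix-match rule above can be sketched with a toy routing table. The prefixes and next-hop names here are illustrative only, and the lookup ignores metric and administrative distance for simplicity:

```python
import ipaddress

# A toy routing table: (prefix, next hop). The longest matching prefix wins.
table = [
    (ipaddress.ip_network("0.0.0.0/0"),   "default-gw"),  # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "core-1"),
    (ipaddress.ip_network("10.1.2.0/24"), "edge-7"),
]

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, hop) for net, hop in table if addr in net]
    return max(matches)[1]  # prefer the longest subnet mask

assert lookup("10.1.2.9") == "edge-7"     # the /24 beats the /8 and /0
assert lookup("10.9.9.9") == "core-1"     # only the /8 and /0 match
assert lookup("8.8.8.8") == "default-gw"  # nothing but the default matches
```

Real routers perform this lookup in hardware over tables with hundreds of thousands of prefixes, but the selection rule is the same.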

Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within localized environments.

Network service
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate. The World Wide Web, e-mail,[21] printing and network file sharing are examples of well-known network services. Network services such as DNS (Domain Name System) give names for IP and MAC addresses (people remember names like nm.lan better than numeric addresses),[22] while DHCP ensures that the equipment on the network has a valid IP address.[23] Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
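The name-to-address mapping that DNS provides can be sketched as a simple lookup table. The names and addresses below are illustrative only (nm.lan comes from the text; the addresses are documentation-range examples):

```python
# A toy resolver table in the spirit of DNS: names map to addresses.
hosts = {
    "nm.lan":      "192.0.2.10",
    "printer.lan": "192.0.2.20",
}

def resolve(name: str) -> str:
    try:
        return hosts[name]
    except KeyError:
        # Real DNS servers answer NXDOMAIN for unknown names.
        raise LookupError(f"NXDOMAIN: {name}")

assert resolve("nm.lan") == "192.0.2.10"
```

Real DNS is, of course, a distributed hierarchical database rather than a single table, but the service contract, a name in and an address out, is the same.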

Network performance
Quality of service
Depending on the installation requirements, network performance is usually measured by the quality of service of a telecommunications product. The parameters that affect this typically can include throughput, jitter, bit error rate and latency. The following list gives examples of network performance measures for a circuit-switched network and one type of packet-switched network, viz. ATM:

Circuit-switched networks: In circuit switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads.[24] Other types of performance measures can include the level of noise and echo.

ATM: In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique and modem enhancements.[25]
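The number of rejected calls mentioned above for circuit-switched networks is commonly quantified with the Erlang B formula, which gives the probability that a call is blocked when a given traffic load is offered to a fixed number of circuits. A minimal sketch using the standard recursive form (the traffic values are illustrative):

```python
def erlang_b(servers: int, traffic: float) -> float:
    """Blocking probability when `traffic` erlangs are offered to
    `servers` circuits (Erlang B), via the standard recursion
    B(n) = A*B(n-1) / (n + A*B(n-1)) with B(0) = 1."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (traffic * b) / (n + traffic * b)
    return b

# Offering 2 erlangs of traffic to 5 circuits blocks under 4% of calls.
assert erlang_b(5, 2.0) < 0.04
assert erlang_b(1, 1.0) == 0.5  # 1 erlang on 1 circuit: half the calls blocked
```

Network planners use this relation in reverse: given a target grade of service (say, 1% blocking) and a forecast traffic load, they solve for the number of circuits to provision.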

There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modelled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.[26]

Network congestion
Network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either only to a small increase in network throughput, or to an actual reduction in network throughput.

Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control and congestion avoidance techniques to try to avoid congestion collapse. These include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T standard, which provides high-speed (up to 1 Gbit/s) local area networking over existing home wires (power lines, phone lines and coaxial cables). For the Internet, RFC 2914 addresses the subject of congestion control in detail.
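The exponential backoff mentioned above can be sketched as follows: after the n-th collision, a station waits a random number of slot times drawn from a range that doubles with each collision, up to a cap. The cap of 10 doublings mirrors classic Ethernet's truncated binary exponential backoff; the slot counts here are abstract:

```python
import random

def backoff_slots(attempt: int, max_exp: int = 10) -> int:
    """Truncated binary exponential backoff, as in classic Ethernet /
    CSMA: after the n-th collision, wait a random number of slots
    drawn uniformly from [0, 2^min(n, max_exp) - 1]."""
    return random.randrange(2 ** min(attempt, max_exp))

# The waiting window doubles with each successive collision (up to the cap),
# spreading competing senders out in time so the congestion can drain.
for attempt in range(1, 16):
    wait = backoff_slots(attempt)
    assert 0 <= wait < 2 ** min(attempt, 10)
```

The randomness is essential: if colliding stations all waited the same deterministic interval, they would simply collide again.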

Network resilience
Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."[27]

Network security
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources.[28] Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allows them access to information and programs within their authority. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies and individuals.

Network surveillance
Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.

Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity. Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent/investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.[29] However, many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T.[29][30] The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".[31][32]

End-to-end encryption

End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet providers or application service providers, from discovering or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity. Examples of end-to-end encryption include PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.

Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. Some such systems, for example LavaBit and SecretInk, have even described themselves as offering "end-to-end" encryption when they do not. Some systems that normally offer end-to-end encryption have turned out to contain a back door that subverts negotiation of the encryption key between the communicating parties, for example Skype.

The end-to-end encryption paradigm does not directly address risks at the communications endpoints themselves, such as the technical exploitation of clients, poor-quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the end points and the times and quantities of messages that are sent.
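The core idea, that only the two endpoints hold the key, so any relaying server sees only ciphertext, can be illustrated with a one-time pad. This is a teaching sketch only, not a secure implementation; real systems use vetted protocols such as the PGP, OTR, and ZRTP examples above:

```python
import secrets

# Toy end-to-end encryption with a one-time pad. Only the two parties
# hold `key`, so an intermediary relaying `ciphertext` learns nothing
# about the plaintext. Illustrative only -- NOT for real use.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared out-of-band by the two parties

ciphertext = xor(message, key)           # this is all the server ever relays
assert xor(ciphertext, key) == message   # only a key holder recovers the text
```

Note how this mirrors the limitations in the text: the pad protects the message in transit, but says nothing about compromised endpoints, weak random number generators, or the traffic-analysis metadata of who sent how much to whom, and when.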

Views of networks
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection of being in a local area, and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.

Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called, in the TCP/IP architecture, subnets, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

Both users and administrators are aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees).[33] Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).[33]

Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS). Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure Virtual Private Network (VPN) technology.

Mobile computing is human-computer interaction by which a computer is expected to be transported during normal usage. Mobile computing involves mobile communication, mobile hardware, and mobile software. Communication issues include ad hoc and infrastructure networks as well as communication properties, protocols, data formats and concrete technologies. Hardware includes mobile devices or device components. Mobile software deals with the characteristics and requirements of mobile applications.



Mobile Computing is "taking a computer and all necessary files and software out into the field".

Mobile computing is any type of computing that uses the Internet or an intranet and the respective communications links, such as WAN, LAN, WLAN, etc. Mobile computers may form a wireless personal network or a piconet.

There are at least three different classes of mobile computing items:

portable computers: compacted lightweight units including a full character set keyboard and primarily intended as hosts for software that may be parametrized, such as laptops, notebooks, notepads, etc.

mobile phones: including a restricted key set, primarily intended (but not restricted to) vocal communications, such as cell phones, smart phones, phonepads, etc.

wearable computers: mostly limited to functional keys and primarily intended as incorporation of software agents, such as watches, wristbands, necklaces, keyless implants, etc.

The existence of these classes is expected to be long-lasting, and complementary in personal usage, with none replacing the others in all features of convenience.

Many types of mobile computers have been introduced since the 1990s, including the:

personal digital assistant/enterprise digital assistant
smartphone
tablet computer
Ultra-Mobile PC
wearable computer

Limitations

Range & bandwidth: Mobile Internet access is generally slower than direct cable connections, using technologies such as GPRS and EDGE, and more recently HSDPA and HSUPA 3G and 4G networks. These networks are usually available within range of commercial cell phone towers. Higher-speed wireless LANs are inexpensive but have very limited range.

Security standards: When working mobile, one is dependent on public networks, requiring careful use of VPN. Security is a major concern for mobile computing standards on a fleet. One can easily attack the VPN through the huge number of networks interconnected along the line.

Power consumption: When a power outlet or portable generator is not available, mobile computers must rely entirely on battery power. Combined with the compact size of many mobile devices, this often means unusually expensive batteries must be

used to obtain the necessary battery life.

Transmission interferences: Weather, terrain, and the range from the nearest signal point can all interfere with signal reception. Reception in tunnels, some buildings, and rural areas is often poor.

Potential health hazards: People who use mobile devices while driving are often distracted from driving and are thus assumed more likely to be involved in traffic accidents.[2][3][4] (While this may seem obvious, there is considerable discussion about whether banning mobile device use while driving actually reduces accidents.) Cell phones may interfere with sensitive medical devices. Questions concerning mobile phone radiation and health have also been raised.

Human interface with device: Screens and keyboards tend to be small, which may make them hard to use. Alternate input methods such as speech or handwriting recognition require training.

In-vehicle computing and fleet computing

Many commercial and government field forces deploy a ruggedized portable computer with their fleet of vehicles. This requires the units to be anchored to the vehicle for driver safety, device security, and ergonomics. Rugged computers are rated for severe vibration associated with large service vehicles and off-road

driving and the harsh environmental conditions of constant professional use such as in emergency medical services, fire, and public safety.

The Compaq Portable, circa 1982: a pre-laptop portable computer.

Other elements affecting function in vehicle:

Operating temperature: A vehicle cabin can often experience temperature swings from -20F to +140F. Computers typically must be able to withstand these temperatures while operating. Typical fan-based cooling has stated limits of 95F-100F of ambient temperature, and temperatures below freezing require localized heaters to

bring components up to operating temperature (based on independent studies by the SRI Group and by Panasonic R&D).

Vibration can decrease the life expectancy of computer components, notably rotational storage such as HDDs.

Visibility of standard screens becomes an issue in bright sunlight. Touchscreen users can easily interact with the units in the field without removing gloves.

High-temperature battery settings: Lithium-ion batteries are sensitive to high-temperature conditions during charging. A computer designed for the mobile environment should include a high-temperature charging function that limits the charge to 85% or less of capacity.
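As an illustration, the charge-limiting behavior described above can be sketched as a simple policy function. This is a hypothetical sketch: the 85% cap comes from the text, while the temperature threshold is an assumed value (real chargers follow the cell vendor's specification):

```python
def charge_target(battery_temp_c, hot_threshold_c=45.0):
    """Return the charge target (fraction of capacity) for a
    lithium-ion pack, limiting charge in high-temperature conditions.

    hot_threshold_c is an assumed cutoff, not a value from the text;
    real designs take it from the cell datasheet.
    """
    if battery_temp_c >= hot_threshold_c:
        return 0.85  # cap charge at 85% of capacity when hot
    return 1.00      # full charge allowed at normal temperatures
```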

External antenna connections pass through the typically metal cabins of vehicles, which would otherwise block wireless reception, and take advantage of much more capable external communication and navigation equipment. Several specialized manufacturers, such as First Mobile Technologies, National Products Inc (Ram Mounts), Gamber Johnson and LedCo, build mounts for computer equipment for a wide range of vehicles. The mounts are built to withstand harsh conditions and maintain ergonomics. Specialized installation companies design the mounts, assemble the parts, and install them in a safe and consistent manner away from airbags, vehicle HVAC controls, and driver controls. Installations frequently include a WWAN modem, power conditioning equipment, and WWAN/WLAN/GPS transceiver antennae mounted external to the vehicle.

Security issues involved in mobile computing

Main article: Mobile security

Mobile security or mobile phone security has become increasingly important in mobile computing. It is of particular concern as it relates to the security of personal

information now stored on the smartphone. More and more users and businesses use smartphones as communication tools but also as a means of planning and organizing their work and private life. Within companies, these technologies are causing profound changes in the organization of information systems and therefore they

have become the source of new risks. Indeed, smartphones collect and compile an increasing amount of sensitive information to which access must be controlled to protect the privacy of the user and the intellectual property of the company. All smartphones, as computers, are preferred

targets of attacks. These attacks exploit weaknesses related to smartphones that can come from means of communication such as SMS, MMS, Wi-Fi networks, and GSM. There are also attacks that exploit software vulnerabilities in both the web browser and the operating system. Finally, there are forms of malicious software that rely

on the weak knowledge of average users. Different security countermeasures are being developed and applied to smartphones, from security in different layers of software to the dissemination of information to end users. There are good practices to be observed at all levels, from design to use, through the development

of operating systems, software layers, and downloadable apps.

Portable computing devices

Main articles: Mobile device and Portable computer

Several categories of portable computing devices can run on batteries but are not usually classified as laptops: portable computers, keyboardless tablet PCs, Internet tablets, PDAs, ultra mobile PCs (UMPCs) and smartphones.

A portable computer is a general-purpose computer that can be easily moved from place to place, but cannot be used while in transit, usually because it requires some "setting-up" and an AC power source. The most famous example is the Osborne 1. Portable computers are also called "transportable" or "luggable" PCs.

A tablet computer that lacks a keyboard (also known as a non-convertible tablet) is shaped like a slate or a paper notebook. Instead of a physical keyboard, it has a touchscreen with some combination of virtual keyboard, stylus, and/or handwriting-recognition software. Tablets may not be best suited for

applications requiring a physical keyboard for typing, but are otherwise capable of carrying out most of the tasks of an ordinary laptop.

A personal digital assistant (PDA) is a small, usually pocket-sized, computer with limited functionality. It is intended to supplement and to synchronize with a desktop

computer, giving access to contacts, address book, notes, e-mail and other features.


A PDA with a web browser is an Internet tablet, an Internet appliance in tablet form. It does not have as much

computing power as a full tablet computer, its applications suite is limited, and it cannot replace a general-purpose computer. Internet tablets typically feature an MP3 and video player, a web browser, a chat application and a picture viewer.

An ultra mobile PC is a full-featured, PDA-sized computer

running a general-purpose operating system.

A smartphone has a wide range of features and installable applications.

A carputer is installed in an automobile. It operates as a wireless computer, sound system, GPS, and DVD player. It also contains word-processing software and is Bluetooth-compatible.

A Fly Fusion Pentop Computer is a computing device the size and shape of a pen. It functions as a writing utensil, MP3 player, language translator, digital storage device, and calculator.

Boundaries that separate these categories are blurry at times. For example, the OQO UMPC is also a PDA-sized tablet PC; the Apple eMate had the

clamshell form factor of a laptop, but ran PDA software. The HP Omnibook line of laptops included some devices small enough to be called ultra mobile PCs. The hardware of the Nokia 770 Internet tablet is essentially the same as that of a PDA such as the Zaurus 6000; the only reason it is not called a PDA is that it does not

have PIM software. On the other hand, both the 770 and the Zaurus can run some desktop Linux software, usually with modifications.
Mobile data communication

Wireless data connections used in mobile computing take three general forms:

Cellular data service uses technologies such as GSM, CDMA or GPRS, 3G networks such as W-CDMA, and more recently 4G networks such as LTE and LTE Advanced. These networks are usually available within range of commercial cell towers.

Wi-Fi connections offer higher performance, may be either on a private business network or accessed through public hotspots, and have a typical range of 100 feet indoors and up to 1000 feet outdoors.

Satellite Internet access covers areas where cellular and Wi-Fi are not available and may be set up anywhere the user has a line of sight to the satellite's location, which for satellites in geostationary orbit means having an unobstructed view of the southern sky.

Some enterprise deployments combine networks from multiple cellular networks or use a mix of cellular, Wi-Fi and satellite. When using a mix of networks, a mobile virtual private network (mobile VPN) not only handles the security concerns, but also performs the multiple network logins automatically and keeps the application connections alive to prevent crashes or data loss during network transitions or coverage loss.
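The keep-alive-and-reconnect behavior a mobile VPN provides can be sketched at the socket level. This is a minimal illustration under assumed names (the host, port, and keepalive message are hypothetical); a real mobile VPN additionally re-authenticates and restores tunnel state on each reconnect:

```python
import socket
import time

def next_backoff(current, max_backoff=60.0):
    """Exponential backoff schedule used between reconnect attempts."""
    return min(current * 2, max_backoff)

def keep_alive_loop(host, port, interval=15.0):
    """Keep an application connection alive across network transitions:
    send periodic keepalives and transparently reconnect with backoff
    when the underlying network drops (sketch only)."""
    backoff = 1.0
    sock = None
    while True:
        try:
            if sock is None:
                sock = socket.create_connection((host, port), timeout=10)
                backoff = 1.0  # connection restored; reset backoff
            sock.sendall(b"KEEPALIVE\n")
            time.sleep(interval)
        except OSError:
            # coverage loss or hand-over between networks: retry later
            if sock is not None:
                sock.close()
            sock = None
            time.sleep(backoff)
            backoff = next_backoff(backoff)
```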



Voice over IP

Voice over Internet Protocol (VoIP) is a methodology and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet. Other terms commonly associated with VoIP are IP telephony, Internet telephony, voice over broadband (VoBB), broadband telephony, IP communications, and broadband phone service. The term Internet telephony specifically refers to the provisioning of communications services (voice, fax, SMS, voice messaging) over the public Internet, rather than via the public switched telephone network (PSTN).

The steps and principles involved in originating VoIP telephone calls are similar to traditional digital telephony and involve signaling, channel setup, digitization of the analog voice signals, and encoding. Instead of being transmitted over a circuit-switched network, however, the digital information is packetized, and transmission occurs as Internet Protocol (IP) packets over a packet-switched network. Such transmission entails careful considerations about resource management different from time-division multiplexing (TDM) networks.

Early providers of voice-over-IP services offered business models and technical solutions that mirrored the architecture of the legacy telephone network. Second-generation providers, such as Skype, have built closed networks for private user bases, offering the benefit of free calls and convenience while potentially charging for access to other communication networks, such as the PSTN. This has limited the freedom of users to mix and match third-party hardware and software. Third-generation providers, such as Google Talk, have adopted[1] the concept of federated VoIP, which is a departure from the architecture of the legacy networks. These solutions typically allow dynamic interconnection between users on any two domains on the Internet when a user wishes to place a call.
VoIP systems employ session control and signaling protocols to control the signaling, set-up, and tear-down of calls. They transport audio streams over IP networks using special media delivery protocols that encode voice, audio, and video with audio codecs and video codecs, delivered as digital audio by streaming media. Various codecs exist that optimize the media stream based on application requirements and network bandwidth; some implementations rely on narrowband and compressed speech, while others support high-fidelity stereo codecs. Some popular codecs include the μ-law and A-law versions of G.711; G.722, a high-fidelity codec marketed as HD Voice by Polycom; a popular open source voice codec known as iLBC; a codec that uses only 8 kbit/s each way called G.729; and many others.

VoIP is available on many smartphones, personal computers, and Internet access devices. Calls and SMS text messages may be sent over 3G or Wi-Fi.[2]
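The codec bit rates above translate into on-the-wire bandwidth only after adding per-packet IP/UDP/RTP headers. A rough per-call estimate can be sketched as follows; the codec rates are from the text, while the 20 ms packetization interval and the header sizes are common defaults assumed here, not stated in it:

```python
def voip_bandwidth_kbps(codec_kbps, ptime_ms=20, header_bytes=40):
    """Approximate one-way IP bandwidth for a VoIP stream.

    codec_kbps   -- codec payload bit rate (e.g. 64 for G.711, 8 for G.729)
    ptime_ms     -- packetization interval (20 ms is a common default)
    header_bytes -- IP (20) + UDP (8) + RTP (12) headers per packet
    """
    payload_bytes = codec_kbps * 1000 / 8 * ptime_ms / 1000
    packets_per_s = 1000 / ptime_ms
    return (payload_bytes + header_bytes) * 8 * packets_per_s / 1000

# With these assumptions, G.711 (64 kbit/s payload) costs about
# 80 kbit/s on the wire, and G.729 (8 kbit/s payload) about 24 kbit/s.
```

This is why a low-rate codec such as G.729 saves less bandwidth than its raw bit rate suggests: the fixed 40-byte header is paid on every packet regardless of payload size.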

Contents
1 Pronunciation
2 Protocols
3 Adoption
3.1 Consumer market
3.2 PSTN and mobile network providers
3.3 Corporate use
4 Quality of service
4.1 Layer 2
5 PSTN integration
5.1 Number portability
5.2 Emergency calls
6 Fax support
7 Power requirements
8 Redundancy
9 Security
10 Caller ID
11 Compatibility with traditional analog telephone sets
12 Support for other telephony devices
13 User and administrative interfaces
14 Operational cost
15 Regulatory and legal issues
15.1 European Union
15.2 India
15.3 Middle East
15.4 South Korea
15.5 United States
16 Historical milestones
17 See also
18 References
19 External links

Pronunciation

The acronym "VoIP" has been pronounced variably since the inception of the term. Apart from spelling the acronym out letter by letter ("vee-oh-eye-pee"), there are three likely pronunciations: "vo-eye-pee" and "vo-ipp" have both been used, but the single-syllable "voyp" (as in voice) may be the most common within the industry.[3]

Protocols

Voice over IP has been implemented in various ways using both proprietary protocols and protocols based on open standards. Examples of VoIP protocols include:

- H.323
- Media Gateway Control Protocol (MGCP)
- Session Initiation Protocol (SIP)
- H.248 (also known as Media Gateway Control (Megaco))
- Real-time Transport Protocol (RTP)
- Real-time Transport Control Protocol (RTCP)
- Secure Real-time Transport Protocol (SRTP)
- Session Description Protocol (SDP)
- Inter-Asterisk eXchange (IAX)
- Jingle XMPP VoIP extensions
- Skype protocol
- Teamspeak

The H.323 protocol was one of the first VoIP protocols that found widespread implementation for long-distance traffic, as well as local area network services. However, since the development of newer, less complex protocols such as MGCP and SIP, H.323 deployments are increasingly limited to carrying existing long-haul network traffic. In particular, the Session Initiation Protocol (SIP) has gained widespread VoIP market penetration. These protocols can be used by special-purpose software, such as Jitsi, or integrated into a web page (web-based VoIP), like Google Talk.
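To make the signaling concrete, here is a sketch of the minimal request that starts a SIP call, following the RFC 3261 message format. All addresses and identifiers below are hypothetical examples, not values from the text:

```python
def build_invite(caller, callee, call_id, branch, tag, cseq=1):
    """Assemble a minimal SIP INVITE request (RFC 3261 message format).
    All addresses used here are hypothetical examples."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"Via: SIP/2.0/UDP client.example.com:5060;branch={branch}",
        "Max-Forwards: 70",
        f"To: <sip:{callee}>",
        f"From: <sip:{caller}>;tag={tag}",
        f"Call-ID: {call_id}",
        f"CSeq: {cseq} INVITE",
        f"Contact: <sip:{caller}>",
        "Content-Length: 0",  # no SDP body in this bare sketch
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("alice@client.example.com", "bob@example.com",
                   "a84b4c76e66710", "z9hG4bKnashds8", "1928301774")
```

A real call would carry an SDP body describing the offered codecs, and the client would then handle provisional and final responses (180 Ringing, 200 OK) before media flows over RTP.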

Consumer market

Example of residential network including VoIP

A major development that started in 2004 was the introduction of mass-market VoIP services that utilize existing broadband Internet access, by which subscribers place and receive telephone calls in much the same manner as they would via the public switched telephone network (PSTN). Full-service VoIP phone companies provide inbound and outbound service with direct inbound dialing. Many offer unlimited domestic calling for a flat monthly subscription fee. This sometimes includes international calls to certain countries. Phone calls between subscribers of the same provider are usually free when flat-fee service is not available.

A VoIP phone is necessary to connect to a VoIP service provider. This can be implemented in several ways:

Dedicated VoIP phones connect directly to the IP network using technologies such as wired Ethernet or wireless Wi-Fi. They are typically designed in the style of traditional digital business telephones.

An analog telephone adapter is a device that connects to the network and implements the electronics and firmware to operate a conventional analog telephone attached through a modular phone jack. Some residential Internet gateways and cable modems have this function built in.

A softphone is application software installed on a networked computer that is equipped with a microphone and speaker, or headset. The application typically presents a dial pad and display field to the user to operate the application by mouse clicks or keyboard input.

PSTN and mobile network providers

It is becoming increasingly common for telecommunications providers to use VoIP telephony over dedicated and public IP networks to connect switching centers and to interconnect with other telephony network providers; this is often referred to as "IP backhaul."[4][5]

Smartphones and Wi-Fi-enabled mobile phones may have SIP clients built into the firmware or available as an application download.

Corporate use
Because of the bandwidth efficiency and low costs that VoIP technology can provide, businesses are migrating from traditional copper-wire telephone systems to VoIP systems to reduce their monthly phone costs. In 2008, 80% of all new private branch exchange (PBX) lines installed internationally were VoIP.[6]

VoIP solutions aimed at businesses have evolved into unified communications services that treat all communications (phone calls, faxes, voice mail, e-mail, Web conferences, and more) as discrete units that can all be delivered via any means and to any handset, including cellphones. Two kinds of competitors are competing in this space: one set is focused on VoIP for medium to large enterprises, while another is targeting the small-to-medium business (SMB) market.[7]

VoIP allows both voice and data communications to be run over a single network, which can significantly reduce infrastructure costs.[8] The prices of extensions on VoIP are lower than for PBX and key systems. VoIP switches may run on commodity hardware, such as personal computers. Rather than closed architectures, these devices rely on standard interfaces.[8] VoIP devices have simple, intuitive user interfaces, so users can often make simple system configuration changes. Dual-mode phones enable users to continue their conversations as they move between an outside cellular service and an internal Wi-Fi network, so that it is no longer necessary to carry both a desktop phone and a cellphone.
Maintenance becomes simpler as there are fewer devices to oversee.[8] Skype, which originally marketed itself as a service among friends, has begun to cater to businesses, providing free-of-charge connections between any users on the Skype network and connecting to and from ordinary PSTN telephones for a charge.[9] In the United States the Social Security Administration (SSA) is converting its field offices of 63,000 workers from traditional phone installations to a VoIP infrastructure carried over its existing data network.[10][11]

Quality of service
Communication on the IP network is perceived as less reliable in contrast to the circuit-switched public telephone network because it does not provide a network-based mechanism to ensure that data packets are not lost, and are delivered in sequential order.[citation needed] It is a best-effort network

without fundamental Quality of Service (QoS) guarantees. Therefore, VoIP implementations may face problems with latency, packet loss, and jitter.[12][13]

By default, network routers handle traffic on a first-come, first-served basis. Network routers on high-volume traffic links may introduce latency that exceeds permissible thresholds for VoIP. Fixed delays cannot be controlled, as they are caused by the physical distance the packets travel; however, latency can be minimized by marking voice packets as being delay-sensitive with methods such as DiffServ.[12]

VoIP endpoints usually have to wait for completion of transmission of previous packets before new data may be sent. Although it is possible to preempt (abort) a less important packet in mid-transmission, this is not commonly done, especially on high-speed links where transmission times are short even for maximum-sized packets.[14] An alternative to preemption on slower links, such as dialup and digital subscriber line (DSL), is to reduce the maximum transmission time by reducing the maximum transmission unit. But every packet must contain protocol headers, so this increases relative header overhead on every link traversed, not just the bottleneck (usually Internet access) link.[14]

DSL modems provide Ethernet (or Ethernet over USB) connections to local equipment, but inside they are actually Asynchronous Transfer Mode (ATM) modems. They use ATM Adaptation Layer 5 (AAL5) to segment each Ethernet packet into a series of 53-byte ATM cells for transmission, reassembling them back into Ethernet frames at the receiving end. A virtual circuit identifier (VCI) is part of the 5-byte header on every ATM cell, so the transmitter can multiplex the active virtual circuits (VCs) in any arbitrary order. Cells from the same VC are always sent sequentially. However, a majority of DSL providers use only one VC for each customer, even those with bundled VoIP service.
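The DiffServ marking mentioned above can be requested from ordinary application code. A minimal sketch for Linux or macOS follows; the Expedited Forwarding code point used here is the one conventionally associated with voice traffic, and whether routers honor it depends on the network:

```python
import socket

# DSCP Expedited Forwarding (EF, decimal 46) marks packets as
# delay-sensitive. The legacy TOS byte carries the DSCP value in
# its upper six bits, hence the shift by two.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Voice packets sent on this socket now request priority treatment
# from any DiffServ-aware router along the path.
sock.close()
```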
Every Ethernet frame must be completely transmitted before another can begin. If a second VC were established, given high priority, and reserved for VoIP, then a low-priority data packet could be suspended in mid-transmission and a VoIP packet sent right away on the high-priority VC, after which the link would pick up the low-priority VC where it left off. Because ATM links are multiplexed on a cell-by-cell basis, a high-priority packet would have to wait at most 53 byte times to begin transmission. There would be no need to reduce the interface MTU and accept the resulting increase in higher-layer protocol overhead, and no need to abort a low-priority packet and resend it later.

ATM has substantial header overhead: 5/53 = 9.4%, roughly twice the total header overhead of a 1500-byte Ethernet frame. This "ATM tax" is incurred by every DSL user whether or not they take advantage of multiple virtual circuits (and few can).[12]
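The cell arithmetic behind these figures can be checked directly. The sketch below works through the AAL5 segmentation described in the text: the frame plus an 8-byte trailer is padded to a multiple of 48 payload bytes, and each cell adds a 5-byte header:

```python
import math

def atm_cells(frame_bytes, trailer=8, cell_payload=48):
    """Number of 53-byte ATM cells needed to carry one Ethernet frame
    over AAL5 (payload padded to a multiple of 48 bytes, plus an
    8-byte AAL5 trailer)."""
    return math.ceil((frame_bytes + trailer) / cell_payload)

cells = atm_cells(1500)     # 32 cells for a full-size frame
wire_bytes = cells * 53     # 1696 bytes on the wire for 1500 bytes of frame
cell_tax = 5 / 53           # fixed ~9.4% header overhead per cell
```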

ATM's potential for latency reduction is greatest on slow links, because worst-case latency decreases with increasing link speed. A full-size (1500-byte) Ethernet frame takes 94 ms to transmit at 128 kbit/s but only 8 ms at 1.5 Mbit/s. If this is the bottleneck link, this latency is probably small enough to ensure good VoIP performance without MTU reductions or multiple ATM VCs. The latest generations of DSL, VDSL and VDSL2, carry Ethernet without intermediate ATM/AAL5 layers, and they generally support IEEE 802.1p priority tagging so that VoIP can be queued ahead of less time-critical traffic.[12]

Voice, and all other data, travels in packets over IP networks with fixed maximum capacity. This system may be more prone to congestion[citation needed] and DoS attacks[15] than traditional circuit-switched systems; a circuit-switched system of insufficient capacity will refuse new connections while carrying the remainder without impairment, while the quality of real-time data such as telephone conversations on packet-switched networks degrades dramatically.[12]

Fixed delays cannot be controlled, as they are caused by the physical distance the packets travel. They are especially problematic when satellite circuits are involved because of the long distance to a geostationary satellite and back; delays of 400 to 600 ms are typical. When the load on a link grows so quickly that its switches experience queue overflows, congestion results and data packets are lost. This signals a transport protocol like TCP to reduce its transmission rate to alleviate the congestion. But VoIP usually uses UDP, not TCP, because recovering from congestion through retransmission usually entails too much latency.[12] So QoS mechanisms can avoid the undesirable loss of VoIP packets by immediately transmitting them ahead of any queued bulk traffic on the same link, even when that bulk traffic queue is overflowing.
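The serialization delays quoted above follow directly from frame size and link speed:

```python
def serialization_delay_ms(frame_bytes, link_bps):
    """Time to clock one frame onto a link of the given speed."""
    return frame_bytes * 8 / link_bps * 1000

# A full-size 1500-byte Ethernet frame:
slow = serialization_delay_ms(1500, 128_000)    # ~94 ms at 128 kbit/s
fast = serialization_delay_ms(1500, 1_500_000)  # 8 ms at 1.5 Mbit/s
```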
The receiver must resequence IP packets that arrive out of order and recover gracefully when packets arrive too late or not at all. Jitter results from the rapid and random (i.e. unpredictable) changes in queue lengths along a given Internet path, due to competition from other users for the same transmission links. VoIP receivers counter jitter by storing incoming packets briefly in a "dejitter" or "playout" buffer, deliberately increasing latency to improve the chance that each packet will be on hand when it is time for the voice engine to play it. The added delay is thus a compromise between excessive latency and excessive dropout, i.e. momentary audio interruptions.

Although jitter is a random variable, it is the sum of several other random variables that are at least somewhat independent: the individual queuing delays of the routers along the Internet path in question. Thus, according to the central limit theorem, jitter can be modeled as a Gaussian random variable. This suggests continually estimating the mean delay and its standard deviation and setting the playout delay so that only packets delayed more than several standard deviations above the mean will arrive too late to be useful. In practice, however, the variance in latency of many Internet paths is dominated by a small number (often one) of relatively slow and congested "bottleneck" links. Most Internet backbone links are now so fast (e.g. 10 Gbit/s) that their delays are dominated by the transmission medium (e.g. optical fiber), and the routers driving them do not have enough buffering for queuing delays to be significant.

It has been suggested to rely on the packetized nature of media in VoIP communications and transmit the stream of packets from the source phone to the destination phone simultaneously across different routes (multi-path routing).[16] In such a way, temporary failures have less impact on the communication quality. In capillary routing it has been suggested to use Fountain codes, or particularly raptor codes, at the packet level for transmitting extra redundant packets, making the communication more reliable.[citation needed]

A number of protocols have been defined to support the reporting of quality of service (QoS) and quality of experience (QoE) for VoIP calls. These include RTCP Extended Report (RFC 3611), SIP RTCP Summary Reports, H.460.9 Annex B (for H.323), H.248.30, and MGCP extensions. The RFC 3611 VoIP Metrics block is generated by an IP phone or gateway during a live call and contains information on packet loss rate, packet discard rate (because of jitter), packet loss/discard burst metrics (burst length/density, gap length/density), network delay, end-system delay, signal / noise / echo level, Mean Opinion Scores (MOS) and R factors, and configuration information related to the jitter buffer. RFC 3611 VoIP metrics reports are exchanged between IP endpoints on an occasional basis during a call, and an end-of-call message is sent via SIP RTCP Summary Report or one of the other signaling protocol extensions. RFC 3611 VoIP metrics reports are intended to support real-time feedback related to QoS problems, the exchange of information between the endpoints for improved call-quality calculation, and a variety of other applications.
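The playout-delay strategy described above (mean delay plus several standard deviations) can be sketched in a few lines. The choice of k is an assumption here, a tuning parameter trading added latency against dropout:

```python
import statistics

def playout_delay_ms(observed_delays_ms, k=3.0):
    """Set the dejitter ("playout") buffer delay to mean + k standard
    deviations of recently observed one-way packet delays, so that only
    packets delayed more than k sigma above the mean arrive too late.
    Assumes roughly Gaussian jitter, per the central limit theorem
    argument in the text."""
    mean = statistics.fmean(observed_delays_ms)
    sigma = statistics.stdev(observed_delays_ms)
    return mean + k * sigma
```

A real receiver would recompute this continually over a sliding window, and many implementations adapt k itself based on observed late-arrival rates.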
Rural areas in particular are greatly hindered in their ability to choose a VoIP system over PBX, generally because of poor access to superfast broadband in rural areas. With the release of 4G data, corporate users based outside populated areas can potentially switch their Internet connection to 4G, which is comparable in speed to a regular superfast broadband connection. This greatly enhances the overall quality and user experience of a VoIP system in these areas. The method has already been trialled in rural Germany, surpassing all expectations.[17]

Layer 2
A number of protocols that deal with the data link layer and physical layer include quality-of-service mechanisms that can be used to ensure that applications like VoIP work well even in congested scenarios. Some examples include:

IEEE 802.11e is an approved amendment to the IEEE 802.11 standard that defines a set of quality-of-service enhancements for wireless LAN applications through modifications to the Media Access Control (MAC) layer. The standard is considered of critical importance for delay-sensitive applications, such as voice over wireless IP.

IEEE 802.1p defines 8 different classes of service (including one dedicated to voice) for traffic on layer-2 wired Ethernet.

The ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 gigabit per second) local area network (LAN) using existing home wiring (power lines, phone lines and coaxial cables), provides QoS by means of "Contention-Free Transmission Opportunities" (CFTXOPs), which are allocated to flows (such as a VoIP call) that require QoS and that have negotiated a "contract" with the network controllers.
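Applications can request such layer-2 treatment from the host side. A Linux-specific sketch (the dictionary and function are invented for illustration; `SO_PRIORITY` sets the kernel's skb priority, which VLAN egress mapping translates into the 802.1p PCP field — portability to other platforms is not assumed):

```python
import socket

# 802.1p priority code points (PCP); 6 is conventionally used for voice.
PCP = {"best_effort": 0, "background": 1, "video": 5, "voice": 6}

def make_voice_socket():
    """Create a UDP socket tagged so that Linux VLAN egress mapping
    places its frames in the voice priority queue."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, PCP["voice"])
    return s
```

Whether the priority is honoured end to end still depends on every switch along the path supporting and trusting the 802.1p tag.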

PSTN integration
A VoIP media gateway connects the digital media streams to complete the path for voice and data. It includes the interface for connecting standard PSTN networks with ATM and Internet Protocol networks. Modern systems also include Ethernet interfaces, which are specially designed to link calls passed via VoIP.[18] E.164 is a global numbering standard for both the PSTN and PLMN. Most VoIP implementations support E.164 to allow calls to be routed to and from VoIP subscribers and the PSTN/PLMN.[19] VoIP implementations can also allow other identification techniques to be used. For example, Skype allows subscribers to choose "Skype names"[20] (usernames) whereas SIP implementations can use URIs[21] similar to email addresses. Often VoIP implementations employ methods of translating non-E.164 identifiers to E.164 numbers and vice versa, such as the Skype-In service provided by Skype[22] and the ENUM service in IMS and SIP.[23] Echo can also be an issue for PSTN integration.[24] Common causes of echo include impedance mismatches in analog circuitry and acoustic coupling of the transmit and receive signal at the receiving end.
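The identifier translation described here amounts to a directory lookup followed by E.164 validation. A toy sketch (the directory contents and function name are invented for illustration; a real deployment would consult ENUM over DNS or a provider database):

```python
import re

# Hypothetical in-memory directory standing in for an ENUM-style service:
# it maps non-E.164 identifiers (e.g. SIP URIs) to E.164 numbers.
DIRECTORY = {"sip:alice@example.com": "+14155550101"}

# E.164: a "+", then up to 15 digits with no leading zero.
E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def to_e164(identifier):
    """Translate a non-E.164 identifier to E.164, or validate one as-is."""
    number = DIRECTORY.get(identifier, identifier)
    if not E164.match(number):
        raise ValueError(f"no E.164 mapping for {identifier!r}")
    return number
```

Calls routed toward the PSTN would use the returned number, while calls between VoIP endpoints could keep the original URI.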

Number portability
Local number portability (LNP) and Mobile number portability (MNP) also impact VoIP business. In November 2007, the Federal Communications Commission in the United States released an order extending number portability obligations to interconnected VoIP providers and carriers that support VoIP providers.[25] Number portability is a service that allows a subscriber to select a new telephone carrier without requiring a new number to be issued. Typically, it is the responsibility of the former carrier to "map" the old number to the undisclosed number assigned by the new carrier. This is
achieved by maintaining a database of numbers. A dialed number is initially received by the original carrier and quickly rerouted to the new carrier. Multiple porting references must be maintained even if the subscriber returns to the original carrier. The FCC mandates carrier compliance with these consumer-protection stipulations. A voice call originating in the VoIP environment also faces challenges to reach its destination if the number is routed to a mobile phone number on a traditional mobile carrier. VoIP has been identified in the past as a Least Cost Routing (LCR) system, which is based on checking the destination of each telephone call as it is made, and then sending the call via the network that will cost the customer the least.[26] This rating is subject to some debate given the complexity of call routing created by number portability. With GSM number portability now in place, LCR providers can no longer rely on using the network root prefix to determine how to route a call. Instead, they must now determine the actual network of every number before routing the call. Therefore, VoIP solutions also need to handle MNP when routing a voice call. In countries without a central database, like the UK, it might be necessary to query the GSM network about which home network a mobile phone number belongs to. As the popularity of VoIP increases in the enterprise markets because of least cost routing options, it needs to provide a certain level of reliability when handling calls. MNP checks are important to assure that this quality of service is met. Handling MNP lookups before routing a call provides some assurance that the voice call will actually work.
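The routing decision sketched above — resolve the number's current network first, honouring portability, then pick the cheapest carrier for that network — might look like this (all tables here are invented for illustration; a production LCR system would query a live MNP database and carrier rate decks):

```python
# Network root prefixes (pre-porting assignment) and MNP overrides.
PREFIX_NETWORK = {"+4477": "VodafoneUK"}
PORTED = {"+447700900123": "O2UK"}          # numbers moved to a new network

# Per-destination-network price per minute offered by each carrier.
RATES = {"VodafoneUK": {"carrierA": 1.2, "carrierB": 0.9},
         "O2UK":       {"carrierA": 0.8, "carrierB": 1.1}}

def route_call(number):
    """Determine the terminating network (MNP lookup first, then the
    root prefix) and return the least-cost carrier for it."""
    network = PORTED.get(number)
    if network is None:
        network = next((net for prefix, net in PREFIX_NETWORK.items()
                        if number.startswith(prefix)), None)
    if network is None:
        raise LookupError(f"unknown network for {number}")
    rates = RATES[network]
    return min(rates, key=rates.get)        # cheapest carrier wins
```

Skipping the MNP lookup would misroute the ported number through the prefix table, exactly the failure mode the text describes.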

Emergency calls
A telephone connected to a land line has a direct relationship between a telephone number and a physical location, which is maintained by the telephone company and available to emergency responders via the national emergency response service centers in the form of emergency subscriber lists. When an emergency call is received by a center, the location is automatically determined from its databases and displayed on the operator console. In IP telephony, no such direct link between location and communications end point exists. Even a provider with hardware infrastructure, such as a DSL provider, may know only the approximate location of the device, based on the IP address allocated to the network router and the known service address. However, some ISPs do not track the automatic assignment of IP addresses to customer equipment.[27] IP communication provides for device mobility. For example, a residential broadband connection may be used as a link to a virtual private network of a corporate entity, in which case the IP address being used for customer communications may belong to the enterprise rather than being the network
address of the residential ISP. Such off-premise extensions may appear as part of an upstream IP PBX. On mobile devices, e.g., a 3G handset or USB wireless broadband adapter, the IP address has no relationship with any physical location known to the telephony service provider, since a mobile user could be anywhere in a region with network coverage, even roaming via another cellular company. At the VoIP level, a phone or gateway may identify itself with a Session Initiation Protocol (SIP) registrar by its account credentials. In such cases, the Internet telephony service provider (ITSP) only knows that a particular user's equipment is active. Service providers often provide emergency response services by agreement with the user who registers a physical location and agrees that emergency services are only provided to that address if an emergency number is called from the IP device. Such emergency services are provided by VoIP vendors in the United States by a system called Enhanced 911 (E911), based on the Wireless Communications and Public Safety Act of 1999. The VoIP E911 emergency-calling system associates a physical address with the calling party's telephone number. All VoIP providers that provide access to the public switched telephone network are required to implement E911,[27] a service for which the subscriber may be charged. However, end-customer participation in E911 is not mandatory and customers may opt-out of the service.[27] The VoIP E911 system is based on a static table lookup. Unlike in cellular phones, where the location of an E911 call can be traced using assisted GPS or other methods, the VoIP E911 information is only accurate so long as subscribers, who have the legal responsibility, are diligent in keeping their emergency address information current.

Fax support
Support for fax has been problematic in many VoIP implementations, as most voice digitization and compression codecs are optimized for the representation of human voice and the proper timing of the modem signals cannot be guaranteed in a packet-based, connection-less network. An alternative IP-based solution for delivering fax-over-IP called T.38 is available. Sending faxes using VoIP is sometimes referred to as FoIP, or Fax over IP.[28] The T.38 protocol is designed to compensate for the differences between traditional packet-less communications over analog lines and packet-based transmissions which are the basis for IP communications. The fax machine could be a traditional fax machine connected to the PSTN, or an ATA box (or similar). It could be a fax machine with an RJ-45 connector plugged straight into an IP network, or it could be a computer pretending to be a fax machine.[29] Originally, T.38 was designed to use UDP and TCP transmission methods across an IP network. TCP is better suited for use
between two IP devices. However, older fax machines, connected to an analog system, benefit from UDP's near-real-time characteristics due to the "no recovery rule" when a UDP packet is lost or an error occurs during transmission.[30] UDP transmissions are preferred because they do not require testing for dropped packets, and because each T.38 packet transmission includes a majority of the data sent in the prior packet, a T.38 termination point has a higher degree of success in re-assembling the fax transmission back into its original form for interpretation by the end device. This is an attempt to overcome the obstacles of simulating real-time transmission using a packet-based protocol.[31] The core fax protocol, T.30, has also been updated to resolve fax-over-IP issues. Some newer high-end fax machines, such as the Ricoh 4410NF, have built-in T.38 capabilities which allow the user to plug right into the network and transmit/receive faxes in native T.38.[32] A unique feature of T.38 is that each packet contains a portion of the main data sent in the previous packet. With T.38, two successive packets must be lost to actually lose any data; even then only a small piece is lost, and with the right settings and error-correction mode there is an increased likelihood that the receiver will obtain enough of the transmission to satisfy the requirements of the fax machine for output of the sent document. While many late-model analog telephone adapters (ATAs) support T.38, uptake has been limited, as many voice-over-IP providers perform least-cost routing, which selects the least expensive PSTN gateway in the called city for an outbound message. There is typically no means to ensure that that gateway is T.38 capable. Providers often place their own equipment (such as an Asterisk PBX installation) in the signal path, which creates additional issues, as every link in the chain must be T.38 aware for the protocol to work.
Similar issues arise if a provider is purchasing local direct inward dial numbers from the lowest bidder in each city, as many of these may not be T.38 enabled.
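The redundancy scheme above — each packet carrying a copy of the previous packet's data, so that two successive packets must be lost before anything is gone — is easy to picture in code. This is a simplified model, not the actual T.38 wire format; packet layout and function names are invented:

```python
def packetize(chunks):
    """Pair each chunk with a redundant copy of its predecessor:
    packet i carries (seq, chunk[i], chunk[i-1])."""
    return [(i, chunk, chunks[i - 1] if i > 0 else None)
            for i, chunk in enumerate(chunks)]

def reassemble(packets, total):
    """Recover chunks from surviving packets. Any single lost packet
    is covered by the redundant copy in its successor; only two
    consecutive losses actually destroy data."""
    recovered = [None] * total
    for seq, chunk, prev in packets:
        recovered[seq] = chunk
        if seq > 0 and recovered[seq - 1] is None:
            recovered[seq - 1] = prev     # fill in from the redundant copy
    return recovered
```

Dropping any one packet from the stream still lets `reassemble` return the full original sequence.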

Power requirements
Telephones for traditional residential analog service are usually connected directly to telephone company phone lines, which provide direct current to power most basic analog handsets independently of locally available electrical power. IP phones and VoIP telephone adapters connect to routers or cable modems which typically depend on the availability of mains electricity or locally generated power.[33] Some VoIP service providers use customer premises equipment (e.g., cable modems) with battery-backed power supplies to assure uninterrupted service for up to several hours in case of local power failures. Such battery-backed devices typically are designed for use with analog handsets.

Some VoIP service providers implement services to route calls to other telephone services of the subscriber, such as a cellular phone, in the event that the customer's network device is inaccessible to terminate the call. The susceptibility of phone service to power failures is a common problem even with traditional analog service in areas where many customers purchase modern telephone units that operate with wireless handsets to a base station, or that have other modern phone features, such as built-in voicemail or phone book features.

The historical separation of IP networks and the PSTN provided redundancy when no portion of a call was routed over the IP network. An IP network outage would not necessarily mean that a voice communication outage would occur simultaneously, allowing phone calls to be made during IP network outages. When telephone service relies on IP network infrastructure such as the Internet, a network failure can isolate users from all telephony communication, including Enhanced 911 and equivalent services in other locales. However, the network design envisioned by DARPA in the early 1980s included a fault-tolerant architecture under adverse conditions.

Security

The security concerns of VoIP telephone systems are similar to those of any Internet-connected device. This means that hackers who know about these vulnerabilities can institute denial-of-service attacks, harvest customer data, record conversations and compromise voicemail messages.[34][35][36] Compromised VoIP user account or session credentials may enable an attacker to incur substantial charges from third-party services, such as long-distance or international telephone calling. The technical details of many VoIP protocols create challenges in routing VoIP traffic through firewalls and network address translators, used to interconnect to transit networks or the Internet. Private session border controllers are often employed to enable VoIP calls to and from protected networks. For example, Skype uses a proprietary protocol to route calls through other Skype peers on the network, enabling it to traverse symmetric NATs and firewalls. Other methods to traverse NAT devices involve assistive protocols such as STUN and Interactive Connectivity Establishment (ICE). Many consumer VoIP solutions do not support encryption of the signaling path or the media; however, securing a VoIP phone is conceptually easier than securing a traditional telephone circuit. One result of the lack of encryption is the relative ease of eavesdropping on VoIP calls when access
to the data network is possible.[37] Free open-source tools such as Wireshark facilitate capturing VoIP conversations. Standards for securing VoIP are available in the Secure Real-time Transport Protocol (SRTP) and the ZRTP protocol for analog telephony adapters, as well as for some softphones. IPsec is available to secure point-to-point VoIP at the transport level by using opportunistic encryption. In 2005, Skype invited a researcher, Tom Berson, to assess the security of the Skype software, and his conclusions are available in a published report.[38] Government and military organizations use various security measures to protect VoIP traffic, such as voice over secure IP (VoSIP), secure voice over IP (SVoIP), and secure voice over secure IP (SVoSIP).[39] The distinction lies in whether encryption is applied in the telephone, in the network,[40] or both. Secure voice over secure IP is accomplished by encrypting VoIP with protocols such as SRTP or ZRTP. Secure voice over IP is accomplished by using Type 1 encryption on a classified network, like SIPRNet.[41][42][43][44][45] Public secure VoIP is also available with free GNU programs and in many popular commercial VoIP programs via libraries such as ZRTP.[46]

Caller ID
Further information: Caller ID spoofing

Caller ID support among VoIP providers varies, but is provided by the majority of VoIP providers. Many VoIP service providers allow callers to configure arbitrary caller ID information, thus permitting spoofing attacks.[47] Business-grade VoIP equipment and software often make it easy to modify caller ID information, providing many businesses great flexibility. The United States enacted the Truth in Caller ID Act of 2009 on December 22, 2010. This law makes it a crime to "knowingly transmit misleading or inaccurate caller identification information with the intent to defraud, cause harm, or wrongfully obtain anything of value ...".[48] Rules implementing the law were adopted by the Federal Communications Commission on June 20, 2011.[49]

Compatibility with traditional analog telephone sets

Most analog telephone adapters do not decode dial pulses generated by older telephones, supporting only touch-tone. Pulse-to-tone converters are commercially available;[50] a user reports that a few specific ATA models (such as the Grandstream 502) recognise pulse dial directly,[51][52] but are poorly documented and provide no assurance that newer models in the same series will retain this compatibility.

Support for other telephony devices

Another challenge for VoIP implementations is the proper handling of outgoing calls from other telephony devices such as digital video recorders, satellite television receivers, alarm systems, conventional modems and other similar devices that depend on access to a PSTN telephone line for some or all of their functionality. These types of calls sometimes complete without any problems, but in other cases they fail. If VoIP and cellular substitution becomes very popular, some ancillary equipment makers may be forced to redesign equipment, because it would no longer be possible to assume a conventional PSTN telephone line would be available in consumers' houses.

User and administrative interfaces

Voice over IP services typically take advantage of other Internet- or web-based facilities for operation and administration. Websites provide customer interaction, account configuration, service statistics, and billing. In addition, VoIP communication sessions may be launched directly from webpages or software that issue requests to web-based facilities. Web-based VoIP uses this integration to conduct telephone sessions without the need for a telephone set, be it conventional POTS- or IP-based. An example is the click-to-call service, in which a software agent running in the web browser permits users to click on a telephone number embedded in any web page to initiate a telephone call. The service requires only a microphone and an audio headset connected to the user's computer.

Operational cost
VoIP can be a benefit for reducing communication and infrastructure costs. Examples include:

Routing phone calls over existing data networks to avoid the need for separate voice and data networks.[53]

The ability to transmit more than one telephone call over a single broadband connection.

Secure calls using standardized protocols (such as Secure Real-time Transport Protocol). Most of the difficulties of creating a secure telephone connection over traditional phone lines, such as digitizing and digital transmission, are already in place with VoIP. It is only necessary to encrypt and authenticate the existing data stream.

Regulatory and legal issues

As the popularity of VoIP grows, governments are becoming more interested in regulating VoIP in a manner similar to PSTN services.[54] Throughout the developing world, in countries where regulation is weak or captured by the dominant operator, restrictions on the use of VoIP are imposed, including in Panama, where VoIP is taxed; Guyana, where VoIP is prohibited; and India, where its retail commercial sale is allowed only for long-distance service.[55] In Ethiopia, where the government is nationalising telecommunication service, it is a criminal offence to offer services using VoIP. The country has installed firewalls to prevent international calls being made using VoIP. These measures were taken after the popularity of VoIP reduced the income generated by the state-owned telecommunication company.

European Union

In the European Union, the treatment of VoIP service providers is a decision for each national telecommunications regulator, which must use competition law to define relevant national markets and then determine whether any service provider on those national markets has "significant market power" (and so should be subject to certain obligations). A general distinction is usually made between VoIP services that function over managed networks (via broadband connections) and VoIP services that function over unmanaged networks (essentially, the Internet). The relevant EU Directive is not clearly drafted concerning obligations which can exist independently of market power (e.g., the obligation to offer access to emergency calls), and it is impossible to say definitively whether VoIP service providers of either type are bound by them. A review of the EU Directive is under way and should be complete by 2007.

India

In India, it is legal to use VoIP, but it is illegal to have VoIP gateways inside India.[56] This effectively means that people who have PCs can use them to make a VoIP call to any number, but if the remote side is a normal phone, the gateway that converts the VoIP call to a POTS call is not permitted by law to be inside India.[56] In the interests of access service providers and international long-distance operators, Internet telephony was permitted to ISPs with restrictions. Internet telephony is considered to be a different service in its scope, nature and kind from real-time voice as offered by other Access Service
Providers and Long Distance Carriers. Hence the following types of Internet telephony are permitted in India:[57]
(a) PC to PC, within or outside India.
(b) PC or a device/adapter conforming to the standards of international agencies such as the ITU or IETF, in India, to PSTN/PLMN abroad.
(c) Any device/adapter conforming to the standards of international agencies such as the ITU or IETF, connected to an ISP node with a static IP address, to a similar device/adapter, within or outside India.
(d) Except whatever is described in condition (ii) above, no other form of Internet telephony is permitted.
(e) In India no separate numbering scheme is provided for Internet telephony. Presently the 10-digit numbering allocation based on E.164 is permitted for fixed telephony and GSM/CDMA wireless services. For Internet telephony the numbering scheme shall conform only to the IP addressing scheme of the Internet Assigned Numbers Authority (IANA). Translation of an E.164 number or private number to an IP address allotted to any device, and vice versa, by the ISP to show compliance with the IANA numbering scheme is not permitted.
(f) The Internet service licensee is not permitted to have PSTN/PLMN connectivity. Voice communication to and from a telephone connected to the PSTN/PLMN and following E.164 numbering is prohibited in India.

Middle East
In the UAE and Oman it is illegal to use any form of VoIP, to the extent that websites of Gizmo5 are blocked. Providing or using VoIP services is illegal in Oman; violators stand to be fined 50,000 Omani rials (about 130,317 US dollars), spend two years in jail, or both. In 2009, police in Oman raided 121 Internet cafes throughout the country and arrested 212 people for using or providing VoIP services.

South Korea
In South Korea, only providers registered with the government are authorized to offer VoIP services. Unlike many VoIP providers, most of whom offer flat rates, Korean VoIP services are generally metered and charged at rates similar to terrestrial calling. Foreign VoIP providers encounter high barriers to government registration. This issue came to a head in 2006 when Internet service providers providing personal Internet services by contract to United States Forces Korea members residing on USFK bases threatened to block off access to VoIP services used by USFK members as an economical way to keep in contact with their families in the United States, on the grounds that the service members' VoIP providers were not registered. A
compromise was reached between USFK and Korean telecommunications officials in January 2007, wherein USFK service members arriving in Korea before June 1, 2007, and subscribing to the ISP services provided on base may continue to use their US-based VoIP subscription, but later arrivals must use a Korean-based VoIP provider, which by contract will offer pricing similar to the flat rates offered by US VoIP providers.[58]

United States
In the United States, the Federal Communications Commission requires all interconnected VoIP service providers to comply with requirements comparable to those for traditional telecommunications service providers. VoIP operators in the US are required to support local number portability; make service accessible to people with disabilities; pay regulatory fees, universal service contributions, and other mandated payments; and enable law enforcement authorities to conduct surveillance pursuant to the Communications Assistance for Law Enforcement Act (CALEA). "Interconnected" VoIP operators also must provide Enhanced 911 service, disclose any limitations on their E-911 functionality to their consumers, and obtain affirmative acknowledgements of these disclosures from all consumers.[59] VoIP operators also receive the benefit of certain US telecommunications regulations, including an entitlement to interconnection and exchange of traffic with incumbent local exchange carriers via wholesale carriers. Providers of "nomadic" VoIP service (those who are unable to determine the location of their users) are exempt from state telecommunications regulation.[60] Another legal issue that the US Congress is debating concerns changes to the Foreign Intelligence Surveillance Act. The issue in question is calls between Americans and foreigners. The National Security Agency (NSA) is not authorized to tap Americans' conversations without a warrant, but the Internet, and specifically VoIP, does not draw as clear a line to the location of a caller or a call's recipient as the traditional phone system does. As VoIP's low cost and flexibility convinces more and more organizations to adopt the technology, surveillance for law enforcement agencies becomes more difficult.
VoIP technology has also increased security concerns because VoIP and similar technologies have made it more difficult for the government to determine where a target is physically located when communications are being intercepted, and that creates a whole set of new legal challenges.[61]

Historical milestones

1973: Network Voice Protocol (NVP) developed by Danny Cohen and others to carry real-time voice over Arpanet

1974: The Institute of Electrical and Electronics Engineers (IEEE) published a paper titled "A Protocol for Packet Network Interconnection".[62]

1974: Network Voice Protocol (NVP) first tested over Arpanet in August 1974, carrying 16 kbit/s CVSD-encoded voice; the first implementation of Voice over IP

1977: Danny Cohen, Vint Cerf and Jon Postel agree to separate IP from TCP, and create UDP for carrying real-time traffic

1981: IPv4 is described in RFC 791.

1985: The National Science Foundation commissions the creation of NSFNET.[63]

1986: Proposals from various standards organizations for Voice over ATM, in addition to commercial packet voice products from companies such as StrataCom

1991: First Voice Over IP application, Speak Freely, released as public domain. Originally written by John Walker and further developed by Brian C. Wiles.[64]

1992: Voice over Frame Relay standards development within the Frame Relay Forum

1994: MTALK, a freeware VoIP application for Linux[65]

1995: VocalTec releases the first commercial Internet phone software.[66][67]

Beginning in 1995, Intel, Microsoft and Radvision initiated standardization activities for VoIP communications systems.[68]


ITU-T begins development of standards for the transmission and signaling of voice communications over Internet Protocol networks with the H.323 standard.[69]

US telecommunication companies petition the US Congress to ban Internet phone technology.[70]

1997: Level 3 began development of its first softswitch, a term they coined in 1998.[71]

1999: The Session Initiation Protocol (SIP) specification RFC 2543 is released.[72] Mark Spencer of Digium develops the first open source private branch exchange (PBX) software (Asterisk).[73]

2004: Commercial VoIP service providers proliferate.

From Wikipedia, the free encyclopedia




A blog (a truncation of the expression web log)[1] is a discussion or informational site published on the World Wide Web and consisting of discrete entries ("posts") typically displayed in reverse chronological order (the most recent post appears first). Until 2009 blogs were usually the work of a single individual, occasionally of a small group, and often covered a single subject. More recently "multi-author blogs" (MABs) have developed, with posts written by large numbers of authors and professionally edited. MABs from newspapers, other media outlets, universities, think tanks, advocacy groups and similar institutions account for an increasing quantity of blog traffic. The rise of Twitter and other "microblogging" systems helps integrate MABs and single-author blogs into societal news streams. Blog can also be used as a verb, meaning to maintain or add content to a blog. The emergence and growth of blogs in the late 1990s coincided with the advent of web publishing tools that facilitated the posting of content by non-technical users. (Previously, a knowledge of such technologies as HTML and FTP had been required to publish content on the Web.)

A majority are interactive, allowing visitors to leave comments and even message each other via GUI widgets on the blogs, and it is this interactivity that distinguishes them from other static websites.[2] In that sense, blogging can be seen as a form of social networking service. Indeed, bloggers do not only produce content to post on their blogs, but also build social relations with their readers and other bloggers.[3] There are high-readership blogs which do not allow comments, such as Daring Fireball. Many blogs provide commentary on a particular subject; others function as more personal online diaries; others function more as online brand advertising of a particular individual or company. A typical blog combines text, images, and links to other blogs, Web pages, and other media related to its topic. The ability of readers to leave comments in an interactive format is an important contribution to the popularity of many blogs. Most blogs are primarily textual, although some focus on art (art blogs), photographs (photoblogs), videos (video blogs or "vlogs"), music (MP3 blogs), and audio (podcasts). Microblogging is another type of blogging, featuring very short posts. In education, blogs can be used as instructional resources; these blogs are referred to as edublogs. On 16 February 2011, there were over 156 million public blogs in existence.[4] On 20 February 2014, there were around 172 million Tumblr[5] and 75.8 million WordPress[6] blogs in existence worldwide. According to critics and other bloggers, Blogger is the most popular blogging service used today; however, Blogger does not offer public statistics.[7][8] Technorati has 1.3 million blogs as of February 22, 2014.[9]


Early example of a "diary" style blog consisting of text and images transmitted wirelessly in real time from a wearable computer with head-up display, 22 February 1995

Origins

Main articles: History of blogging and online diary

The term "weblog" was coined by Jorn Barger[10] on 17 December 1997. The short form, "blog", was coined by Peter Merholz, who jokingly broke the word weblog into the phrase we blog in the sidebar of his blog in April or May 1999.[11][12][13] Shortly thereafter, Evan Williams at Pyra Labs used "blog" as both a noun and a verb ("to blog", meaning "to edit one's weblog or to post to one's weblog") and devised the term "blogger" in connection with Pyra Labs' Blogger product, leading to the popularization of the terms.[14]

Before blogging became popular, digital communities took many forms, including Usenet, commercial online services such as GEnie, BiX and the early CompuServe, e-mail lists[15] and Bulletin Board Systems (BBS). In the 1990s, Internet forum software created running conversations with "threads", topical connections between messages on a virtual "corkboard". From 14 June 1993, Mosaic Communications Corporation maintained their "What's New"[16] list of new websites, updated daily and archived monthly. The page was accessible by a special "What's New" button in the Mosaic web browser.

The modern blog evolved from the online diary, where people would keep a running account of their personal lives. Most such writers called themselves diarists, journalists, or journalers. Justin Hall, who began personal blogging in 1994 while a student at Swarthmore College, is generally recognized as one of the earlier bloggers,[17] as is Jerry Pournelle.[18] Dave Winer's Scripting News is also credited with being one of the older and longer-running weblogs.[19][20] The Australian Netguide magazine maintained the Daily Net News[21] on their web site from 1996. Daily Net News ran links and daily reviews of new websites, mostly in Australia.

Another early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site in 1994. This practice of semi-automated blogging with live video together with text was referred to as sousveillance, and such journals were also used as evidence in legal matters.

Early blogs were simply manually updated components of common Web sites. However, the evolution of tools to facilitate the production and maintenance of Web articles posted in reverse chronological order made the publishing process feasible to a much larger, less technical population. Ultimately, this resulted in the distinct class of online publishing that produces blogs we recognize today. For instance, the use of some sort of browser-based software is now a typical aspect of "blogging".
Blogs can be hosted by dedicated blog hosting services, or they can be run using blog software, or on regular web hosting services. Some early bloggers, such as The Misanthropic Bitch, who began in 1997, actually referred to their online presence as a zine, before the term blog entered common usage.

Rise in popularity
After a slow start, blogging rapidly gained in popularity. Blog usage spread during 1999 and the years following, being further popularized by the near-simultaneous arrival of the first hosted blog tools:

Bruce Ableson launched Open Diary in October 1998, which soon grew to thousands of online diaries. Open Diary innovated the reader comment, becoming the first blog community where readers could add comments to other writers' blog entries.

Brad Fitzpatrick started LiveJournal in March 1999.

Andrew Smales created Pitas.com in July 1999 as an easier alternative to maintaining a "news page" on a Web site, followed by Diaryland in September 1999, focusing more on a personal diary community.[22]

Evan Williams and Meg Hourihan (Pyra Labs) launched Blogger.com in August 1999 (purchased by Google in February 2003).

Political impact
See also: Political blog

On 6 December 2002, Josh Marshall's blog called attention to U.S. Senator Trent Lott's comments regarding Senator Strom Thurmond. Senator Lott eventually resigned his Senate leadership position over the matter.

An early milestone in the rise in importance of blogs came in 2002, when many bloggers focused on comments by U.S. Senate Majority Leader Trent Lott.[23] Senator Lott, at a party honoring U.S. Senator Strom Thurmond, praised Senator Thurmond by suggesting that the United States would have been better off had Thurmond been elected president. Lott's critics saw these comments as a tacit approval of racial segregation, a policy advocated by Thurmond's 1948 presidential campaign. This view was reinforced by documents and recorded interviews dug up by bloggers. (See Josh Marshall's Talking Points Memo.) Though Lott's comments were made at a public event attended by the media, no major media organizations reported on his controversial comments until after blogs broke the story. Blogging helped to create a political crisis that forced Lott to step down as majority leader.

Similarly, blogs were among the driving forces behind the "Rathergate" scandal: television journalist Dan Rather presented documents on the CBS show 60 Minutes that conflicted with accepted accounts of President Bush's military service record. Bloggers declared the documents to be forgeries and presented evidence and arguments in support of that view. Consequently, CBS apologized for what it said were inadequate reporting techniques (see Little Green Footballs). Many bloggers view this scandal as the advent of blogs' acceptance by the mass media, both as a news source and opinion and as means of applying political pressure.[original research?]

The impact of these stories gave greater credibility to blogs as a medium of news dissemination. Though often seen as partisan gossips,[citation needed] bloggers sometimes lead the way in bringing key information to public light, with mainstream media having to follow their lead. More often, however, news blogs tend to react to material already published by the mainstream media. Meanwhile, an increasing number of experts blogged, making blogs a source of in-depth analysis.[original research?]

In Russia, some political bloggers have started to challenge the dominance of official, overwhelmingly pro-government media. Bloggers such as Rustem Adagamov and Alexei Navalny have many followers, and the latter's nickname for the ruling United Russia party, the "party of crooks and thieves", has been adopted by anti-regime protesters.[24] This led to the Wall Street Journal calling Navalny "the man Vladimir Putin fears most" in March 2012.[25]

Mainstream popularity
By 2004, the role of blogs became increasingly mainstream, as political consultants, news services, and candidates began using them as tools for outreach and opinion forming. Blogging was established by politicians and political candidates to express opinions on war and other issues and cemented blogs' role as a news source. (See Howard Dean and Wesley Clark.) Even politicians not actively campaigning, such as the UK Labour Party MP Tom Watson, began to blog to bond with constituents.

In January 2005, Fortune magazine listed eight bloggers whom business people "could not ignore": Peter Rojas, Xeni Jardin, Ben Trott, Mena Trott, Jonathan Schwartz, Jason Goldman, Robert Scoble, and Jason Calacanis.[26]

Israel was among the first national governments to set up an official blog.[27] Under David Saranga, the Israeli Ministry of Foreign Affairs became active in adopting Web 2.0 initiatives, including an official video blog[27] and a political blog.[28] The Foreign Ministry also held a microblogging press conference via Twitter about its war with Hamas, with Saranga answering questions from the public in common text-messaging abbreviations during a live worldwide press conference.[29] The questions and answers were later posted on IsraelPolitik, the country's official political blog.[30]

The impact of blogging upon the mainstream media has also been acknowledged by governments. In 2009, the presence of the American journalism industry had declined to the point that several newspaper corporations were filing for bankruptcy, resulting in less direct competition between newspapers within the same circulation area. Discussion emerged as to whether the newspaper industry would benefit from a stimulus package by the federal government. U.S. President Barack Obama acknowledged the emerging influence of blogging upon society by saying "if the direction of the news is all blogosphere, all opinions, with no serious fact-checking, no serious attempts to put stories in context, then what you will end up getting is people shouting at each other across the void but not a lot of mutual understanding."[31]

Types

There are many different types of blogs, differing not only in the type of content, but also in the way that content is delivered or written.

Personal blogs
The personal blog is an ongoing diary or commentary written by an individual.

Microblogging
Microblogging is the practice of posting small pieces of digital content (text, pictures, links, short videos, or other media) on the Internet. Microblogging offers a portable communication mode that feels organic and spontaneous to many and has captured the public imagination. Friends use it to keep in touch, business associates use it to coordinate meetings or share useful resources, and celebrities and politicians (or their publicists) microblog about concert dates, lectures, book releases, or tour schedules. A wide and growing range of add-on tools enables sophisticated updates and interaction with other applications, and the resulting profusion of functionality is helping to define new possibilities for this type of communication.[32] Examples include Twitter, Facebook, Tumblr, and, by far the largest, Weibo.

Corporate and organizational blogs
A blog can be private, as in most cases, or it can be for business purposes. Blogs used internally to enhance the communication and culture in a corporation, or externally for marketing, branding or public relations purposes, are called corporate blogs. Similar blogs for clubs and societies are called club blogs, group blogs, or by similar names; typical use is to inform members and other interested parties of club and member activities.

By genre
Some blogs focus on a particular subject, such as political blogs, health blogs, travel blogs (also known as travelogs), gardening blogs, house blogs,[33][34] fashion blogs, project blogs, education blogs, niche blogs, classical music blogs, quizzing blogs and legal blogs (often referred to as blawgs) or dreamlogs. How-to/tutorial blogs are becoming increasingly popular.[35] Two common types of genre blogs are art blogs and music blogs. A blog featuring discussions especially about home and family is not uncommonly called a mom blog; one made popular by Erica Diamond is syndicated to over two million readers monthly.[36][37][38][39][40][41] While not a legitimate type of blog, one used for the sole purpose of spamming is known as a splog.

By media type
A blog comprising videos is called a vlog, one comprising links is called a linklog, a site containing a portfolio of sketches is called a sketchblog, and one comprising photos is called a photoblog. Blogs with shorter posts and mixed media types are called tumblelogs. Blogs that are written on typewriters and then scanned are called typecast or typecast blogs; see typecasting (blogging). A rare type of blog hosted on the Gopher protocol is known as a phlog.

By device
Blogs can also be defined by the type of device used to compose them. A blog written by a mobile device like a mobile phone or PDA could be called a moblog.[42] One early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site. This practice of semi-automated blogging with live video together with text was referred to as sousveillance. Such journals have been used as evidence in legal matters.[citation needed]

Reverse blog
A reverse blog is composed by its users rather than a single blogger. This system has the characteristics of a blog, and the writing of several authors. These can be written by several contributing authors on a topic, or opened up for anyone to write. There is typically some limit to the number of entries to keep it from operating like a web forum.

Community and cataloging

The Blogosphere

The collective community of all blogs is known as the blogosphere. Since all blogs are on the internet by definition, they may be seen as interconnected and socially networked, through blogrolls, comments, linkbacks (refbacks, trackbacks or pingbacks) and backlinks. Discussions "in the blogosphere" are occasionally used by the media as a gauge of public opinion on various issues. Because new, untapped communities of bloggers and their readers can emerge in the space of a few years, Internet marketers pay close attention to "trends in the blogosphere".[43]

Blog search engines
Several blog search engines are used to search blog contents, such as Bloglines, BlogScope, and Technorati. Technorati, which is among the more popular blog search engines, provides current information on both popular searches and tags used to categorize blog postings.[44] The research community is working on going beyond simple keyword search, by inventing new ways to navigate through huge amounts of information present in the blogosphere, as demonstrated by projects like BlogScope, which was shut down in 2012.[citation needed]

Blogging communities and directories
Several online communities exist that connect people to blogs and bloggers to other bloggers, including BlogCatalog and MyBlogLog.[45] Interest-specific blogging platforms are also available. For instance, Blogster has a sizable community of political bloggers among its members. Global Voices aggregates international bloggers, "with emphasis on voices that are not ordinarily heard in international mainstream media."[46]

Blogging and advertising
It is common for blogs to feature advertisements either to financially benefit the blogger or to promote the blogger's favorite causes. The popularity of blogs has also given rise to "fake blogs", in which a company will create a fictional blog as a marketing tool to promote a product.[47]
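The tag-based categorization that search engines such as Technorati expose can be sketched as a simple inverted index mapping each tag to the posts that carry it. The posts and tags below are made-up examples; production search engines add crawling, ranking, and full-text search on top of this idea.

```python
from collections import defaultdict

# Hypothetical posts, each carrying the tags its author assigned.
posts = [
    {"title": "Strata recipes", "tags": ["food", "eggs"]},
    {"title": "Campaign notes", "tags": ["politics"]},
    {"title": "Brunch roundup", "tags": ["food"]},
]

def build_tag_index(posts):
    # Inverted index: tag -> titles of posts categorized under that tag.
    index = defaultdict(list)
    for post in posts:
        for tag in post["tags"]:
            index[tag].append(post["title"])
    return index

index = build_tag_index(posts)
print(index["food"])  # all posts categorized under the "food" tag
```

Looking up a tag is then a single dictionary access, which is why tag search scales well even over very large collections of posts.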

Popularity

Researchers have actively analyzed the dynamics of how blogs become popular. There are essentially two measures of this: popularity through citations, and popularity through affiliation (i.e., blogroll). The basic conclusion from studies of the structure of blogs is that while it takes time for a blog to become popular through blogrolls, permalinks can boost popularity more quickly, and are perhaps more indicative of popularity and authority than blogrolls, since they denote that people are actually reading the blog's content and deem it valuable or noteworthy in specific cases.[48]

The blogdex project was launched by researchers in the MIT Media Lab to crawl the Web and gather data from thousands of blogs in order to investigate their social properties. Information was gathered by the tool for over four years, during which it autonomously tracked the most contagious information spreading in the blog community, ranking it by recency and popularity. It can therefore[original research?] be considered the first instantiation of a memetracker. The project has since been discontinued and succeeded by other trackers.

Blogs are given rankings by the blog search engine Technorati based on the number of incoming links, and by Alexa Internet (Web hits of Alexa Toolbar users). In August 2006, Technorati found that the most linked-to blog on the internet was that of Chinese actress Xu Jinglei.[49] Chinese media Xinhua reported that this blog received more than 50 million page views, claiming it to be the most popular blog in the world.[50] Technorati rated Boing Boing to be the most-read group-written blog.[49]
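The incoming-link counting behind such rankings can be sketched as follows. The blog names and the link graph are invented for illustration; a real system like Technorati crawls at scale and also weights factors such as recency, but the core signal is the same inbound-link count.

```python
from collections import Counter

# Hypothetical link graph: each (source, target) pair records one blog
# linking to another via a permalink.
links = [
    ("alice.example", "carol.example"),
    ("bob.example", "carol.example"),
    ("carol.example", "alice.example"),
    ("dave.example", "carol.example"),
    ("dave.example", "alice.example"),
]

def rank_by_inbound(links):
    # Count incoming links per blog and sort, most-linked first.
    inbound = Counter(target for _, target in links)
    return inbound.most_common()

print(rank_by_inbound(links))
```

In this toy graph carol.example ranks first with three inbound links, mirroring how a heavily permalinked blog rises in link-based rankings.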

Blurring with the mass media

Many bloggers, particularly those engaged in participatory journalism, differentiate themselves from the mainstream media, while others are members of that media working through a different channel. Some institutions see blogging as a means of "getting around the filter" and pushing messages directly to the public. Some critics[who?] worry that bloggers respect neither copyright nor the role of the mass media in presenting society with credible news. Bloggers and other contributors to user-generated content are behind Time magazine naming their 2006 person of the year as "You". Many mainstream journalists, meanwhile, write their own blogs, well over 300 according to one J-blog list.[citation needed]

The first known use of a blog on a news site was in August 1998, when Jonathan Dube of The Charlotte Observer published one chronicling Hurricane Bonnie.[51]

Some bloggers have moved over to other media. The following bloggers (and others) have appeared on radio and television: Duncan Black (known widely by his pseudonym, Atrios), Glenn Reynolds (Instapundit), Markos Moulitsas Zúniga (Daily Kos), Alex Steffen (Worldchanging), Ana Marie Cox (Wonkette), Nate Silver (FiveThirtyEight), and Ezra Klein (Ezra Klein blog in The American Prospect, now in the Washington Post). In counterpoint, Hugh Hewitt exemplifies a mass media personality who has moved in the other direction, adding to his reach in "old media" by being an influential blogger. Similarly, it was Emergency Preparedness and Safety Tips On Air and Online blog articles that captured Surgeon General of the United States Richard Carmona's attention and earned his kudos for the associated broadcasts by talk show host Lisa Tolliver and

Westchester Emergency Volunteer Reserves-Medical Reserve Corps Director Marianne Partridge.[52][53][54][55]

Blogs have also had an influence on minority languages, bringing together scattered speakers and learners; this is particularly so with blogs in Gaelic languages. Minority-language publishing (which may lack economic feasibility) can find its audience through inexpensive blogging.

There are many examples of bloggers who have published books based on their blogs, e.g., Salam Pax, Ellen Simonetti, Jessica Cutler, ScrappleFace. Blog-based books have been given the name blook. A prize for the best blog-based book was initiated in 2005,[56] the Lulu Blooker Prize.[57] However, success has been elusive offline, with many of these books not selling as well as their blogs. Only blogger Tucker Max made The New York Times Best Seller list.[58] The book based on Julie Powell's blog "The Julie/Julia Project" was made into the film Julie & Julia, apparently the first blog to be adapted this way.

Consumer-generated advertising in blogs

Consumer-generated advertising is a relatively new and controversial development, and it has created a new model of marketing communication from businesses to consumers. Among the various forms of advertising on blogs, the most controversial are sponsored posts.[59] These are blog entries or posts that may be in the form of feedback, reviews, opinion, videos, etc., and usually contain a link back to the desired site using keywords.

Blogs have led to some disintermediation and a breakdown of the traditional advertising model, where companies can skip over the advertising agencies (previously the only interface with the customer) and contact the customers directly themselves. On the other hand, new companies specialised in blog advertising have been established to take advantage of this new development as well. However, there are many people who look negatively on this new development. Some believe that any form of commercial activity on blogs will destroy the blogosphere's credibility.[60]

Legal and social consequences

Blogging can result in a range of legal liabilities and other unforeseen consequences.[61]

Defamation or liability
Several cases have been brought before the national courts against bloggers concerning issues of defamation or liability. U.S. payouts related to blogging totaled $17.4 million by 2009; in some cases these have been covered by umbrella insurance.[62] The courts have returned with mixed verdicts. Internet Service Providers (ISPs), in general, are immune from liability for information that originates with third parties (U.S. Communications Decency Act and the EU Directive 2000/31/EC).

In Doe v. Cahill, the Delaware Supreme Court held that stringent standards had to be met to unmask anonymous bloggers, and also took the unusual step of dismissing the libel case itself (as unfounded under American libel law) rather than referring it back to the trial court for reconsideration.[63] In a bizarre twist, the Cahills were able to obtain the identity of John Doe, who turned out to be the person they suspected: the town's mayor, Councilman Cahill's political rival. The Cahills amended their original complaint, and the mayor settled the case rather than going to trial.

In January 2007, two prominent Malaysian political bloggers, Jeff Ooi and Ahirudin Attan, were sued by a pro-government newspaper, The New Straits Times Press (Malaysia) Berhad, and by Kalimullah bin Masheerul Hassan, Hishamuddin bin Aun and Brenden John a/l John Pereira over an alleged defamation. The plaintiff was supported by the Malaysian government.[64] Following the suit, the Malaysian government proposed to "register" all bloggers in Malaysia in order to better control parties against their interest.[65] This was the first such legal case against bloggers in the country.

In the United States, blogger Aaron Wall was sued by Traffic Power for defamation and publication of trade secrets in 2005.[66] According to Wired Magazine, Traffic Power had been "banned from Google for allegedly rigging search engine results."[67] Wall and other "white hat" search engine optimization consultants had exposed Traffic Power in what they claim was an effort to protect the public. The case addressed the murky legal question of who is liable for comments posted on blogs.[68] The case was dismissed for lack of personal jurisdiction, and Traffic Power failed to appeal within the allowed time.[69]

In 2009, a controversial and landmark decision by The Hon. Mr Justice Eady refused to grant an

order to protect the anonymity of Richard Horton. Horton was a police officer in the United Kingdom who blogged about his job under the name "NightJack".[70]

In 2009, NDTV issued a legal notice to Indian blogger Kunte for a blog post criticizing their coverage of the Mumbai attacks.[71] The blogger unconditionally withdrew his post, which resulted in several Indian bloggers criticizing NDTV for trying to silence critics.[72]

Employment

Employees who blog about elements of their place of employment can begin to affect the brand recognition of their employer. In general, attempts by employee bloggers to protect themselves by maintaining anonymity have proved ineffective.[73]

Delta Air Lines fired flight attendant Ellen Simonetti because she posted photographs of herself in uniform on an airplane and because of comments posted on her blog "Queen of Sky: Diary of a Flight Attendant" which the employer deemed inappropriate.[74][75] This case highlighted the issue of personal blogging and freedom of expression versus employer rights and responsibilities, and so it received wide media attention. Simonetti took legal action against the airline for "wrongful termination, defamation of character and lost future wages".[76] The suit was postponed while Delta was in bankruptcy proceedings (court docket).[77]

In early 2006, Erik Ringmar, a tenured senior lecturer at the London School of Economics, was ordered by the convenor of his department to "take down and destroy" his blog, in which he discussed the quality of education at the school.[78]

Mark Cuban, owner of the Dallas Mavericks, was fined during the 2006 NBA playoffs for criticizing NBA officials on the court and in his blog.[79]

Mark Jen was terminated in 2005 after 10 days of employment as an Assistant Product Manager at Google for discussing corporate secrets on his personal blog, then called 99zeros and hosted on the Google-owned Blogger service.[80] He blogged about unreleased products and company finances a week before the company's earnings announcement. He was fired two days after he complied with his employer's request to remove the sensitive material from his blog.[81]

In India, blogger Gaurav Sabnis resigned from IBM after his posts questioned the claims of the management school IIPM.[82]

Jessica Cutler, aka "The Washingtonienne",[83] blogged about her sex life while employed as a congressional assistant. After the blog was discovered and she was fired,[84] she wrote a novel based on her experiences and blog: The Washingtonienne: A Novel. Cutler is presently being sued by one of her former lovers in a case that could establish the extent to which bloggers are obligated to protect the privacy of their real-life associates.[85]

Catherine Sanderson, a.k.a. Petite Anglaise, lost her job in Paris at a British accountancy firm because of blogging.[86] Although given in the blog in a fairly anonymous manner, some of the descriptions of the firm and some of its people were less than flattering. Sanderson later won a

compensation claim case against the British firm, however.[87]

On the other hand, Penelope Trunk wrote an upbeat article in the Boston Globe in 2006, entitled "Blogs 'essential' to a good career".[88] She was one of the first journalists to point out that a large portion of bloggers are professionals and that a well-written blog can help attract employers.

Political dangers
Blogging can sometimes have unforeseen consequences in politically sensitive areas. Blogs are much harder to control than broadcast or even print media. As a result, totalitarian and authoritarian regimes often seek to suppress blogs and/or to punish those who maintain them.

In Singapore, two ethnic Chinese were imprisoned under the country's anti-sedition law for posting anti-Muslim remarks in their blogs.[89]

Egyptian blogger Kareem Amer was charged with insulting the Egyptian president Hosni Mubarak and an Islamic institution through his blog. It was the first time in the history of Egypt that a blogger was prosecuted. After a brief trial session that took place in Alexandria, the blogger was found guilty and sentenced to prison terms of three years for insulting Islam and inciting sedition, and one year for insulting Mubarak.[90]

Egyptian blogger Abdel Monem Mahmoud was arrested in April 2007 for anti-government writings in his blog.[91] Monem is a member of the then-banned Muslim Brotherhood.

After the 2011 Egyptian revolution, the Egyptian blogger Maikel Nabil Sanad was charged with insulting the military for an article he wrote on his personal blog and sentenced to three years in prison.[92]

After expressing opinions in his personal blog about the state of the Sudanese armed forces, Jan Pronk, United Nations Special Representative for the Sudan, was given three days' notice to leave Sudan. The Sudanese army had demanded his deportation.[93][94]

In Myanmar, Nay Phone Latt, a blogger, was sentenced to 20 years in jail for posting a cartoon critical of head of state Than Shwe.[95]

Personal safety
See also: Cyberstalking and Internet homicide

One consequence of blogging is the possibility of attacks or threats against the blogger, sometimes without apparent reason. Kathy Sierra, author of the innocuous blog "Creating Passionate Users",[96] was the target of such vicious threats and misogynistic insults that she canceled her keynote speech at a technology conference in San Diego, fearing for her safety.[97] While a blogger's anonymity is often tenuous, Internet trolls who would attack a blogger with threats or insults can be emboldened by anonymity. Sierra and supporters initiated an online discussion aimed at countering abusive online behavior[98] and developed a blogger's code of conduct.

The Blogger's Code of Conduct is a proposal by Tim O'Reilly for bloggers to enforce civility on their blogs by being civil themselves and moderating comments on their blogs. The code was proposed in 2007 due to threats made to blogger Kathy Sierra.[99] The idea of the code was first reported by BBC News, who quoted O'Reilly saying, "I do think we need some code of conduct around what is acceptable behaviour, I would hope that it doesn't come through any kind of regulation it would come through self-regulation."[100]

O'Reilly and others came up with a list of seven proposed ideas:[101][102][103][104]

1. Take responsibility not just for your own words, but for the comments you allow on your blog.
2. Label your tolerance level for abusive comments.
3. Consider eliminating anonymous comments.
4. Ignore the trolls.
5. Take the conversation offline, and talk directly, or find an intermediary who can do so.
6. If you know someone who is behaving badly, tell them so.
7. Don't say anything online that you wouldn't say in person.

These ideas were predictably intensely discussed on the Web and in the media. While the internet has continued to grow, with online activity and discourse only picking up both in positive and negative ways in terms of blog interaction, the proposed code has drawn more widespread attention to the necessity of monitoring blogging activity and to social norms being as important online as offline.

Personal area network


A personal area network (PAN) is a computer network used for data transmission among devices such as computers, telephones, and personal digital assistants. PANs can be used for communication among the personal devices themselves (intrapersonal communication), or for connecting to a higher-level network and the Internet (an uplink). A wireless personal area network (WPAN) is a PAN carried over wireless network technologies such as:

IrDA
Wireless USB
Bluetooth
Z-Wave
ZigBee
Body Area Network

The reach of a WPAN varies from a few centimeters to a few meters. A PAN may also be carried over wired computer buses such as USB and FireWire.

Wireless Personal Area Network

A wireless personal area network (WPAN) is a personal area network (a network for interconnecting devices centered on an individual person's workspace) in which the connections are wireless. Wireless PAN is based on the standard IEEE 802.15. The two kinds of wireless technologies used for WPAN are Bluetooth and Infrared Data Association.

A WPAN could serve to interconnect all the ordinary computing and communicating devices that many people have on their desk or carry with them today, or it could serve a more specialized purpose such as allowing the surgeon and other team members to communicate during an operation.

A key concept in WPAN technology is known as "plugging in". In the ideal scenario, when any two WPAN-equipped devices come into close proximity (within several meters of each other) or within a few kilometers of a central server, they can communicate as if connected by a cable. Another important feature is the ability of each device to lock out other devices selectively, preventing needless interference or unauthorized access to information.

The technology for WPANs is in its infancy and is undergoing rapid development. Proposed operating frequencies are around 2.4 GHz in digital modes. The objective is to facilitate seamless operation among home or business devices and systems. Every device in a WPAN will be able to plug into any other device in the same WPAN, provided they are within physical range of one another. In addition, WPANs worldwide will be interconnected. Thus, for example, an archeologist on site in Greece might use a PDA to directly access databases at the University of Minnesota in Minneapolis, and to transmit findings to that database.

Bluetooth

Bluetooth uses short-range radio waves over distances up to approximately 10 metres. For example, Bluetooth devices such as keyboards, pointing devices, audio headsets, and printers may connect wirelessly to personal digital assistants (PDAs), cell phones, or computers.

A Bluetooth PAN is also called a piconet (combination of the prefix "pico," meaning very small or one trillionth, and network), and is composed of up to 8 active devices in a master-slave relationship (a very large number of devices can be connected in "parked" mode). The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 metres (33 ft), although ranges of up to 100 metres (330 ft) can be reached under ideal circumstances.
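The active-device limit described above (a master plus up to 7 active slaves, with further devices parked) can be sketched as a toy model. This is an illustrative sketch, not a Bluetooth API; the class and device names are hypothetical:

```python
# Toy model of a Bluetooth piconet: one master plus at most
# 7 active slaves (8 active devices total). Devices beyond the
# active limit are "parked" rather than rejected outright.
class Piconet:
    MAX_ACTIVE_SLAVES = 7  # active devices = master + 7 slaves

    def __init__(self, master):
        self.master = master
        self.active_slaves = []
        self.parked = []

    def join(self, device):
        """Add a device as an active slave if a slot is free, else park it."""
        if len(self.active_slaves) < self.MAX_ACTIVE_SLAVES:
            self.active_slaves.append(device)
            return "active"
        self.parked.append(device)
        return "parked"

net = Piconet("phone")
states = [net.join(f"device-{i}") for i in range(9)]
print(states)  # the first 7 joiners are active, the rest are parked
```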

Infrared Data Association

Infrared Data Association (IrDA) uses infrared light, which has a frequency below the human eye's sensitivity. Infrared in general is used, for instance, in TV remotes. Typical WPAN devices that use IrDA include printers, keyboards, and other serial data interfaces.[1]

Wi-Fi

Wi-Fi uses radio waves for connection over distances up to around 91 metres, usually in a local area network (LAN) environment. Wi-Fi can be used to connect local area networks, to connect cellphones to the Internet to download music and other multimedia, to allow PC multimedia content to be streamed to the TV (Wireless Multimedia Adapter), and to connect video game consoles to their networks (Nintendo Wi-Fi Connection).

Body area network


A body area network is based on the IEEE 802.15.6 standard for transmission via the capacitive near field of human skin, allowing near field communication between devices worn by and near the wearer.[2] The Skinplex implementation can detect and communicate up to 1 metre (3 ft 3 in) from a human body.[3] It is used for access control to door locks and jamming protection in convertible car roofs. Projects that implement body area networks include the work of RedTacton RT/Aswini.

Virtual private network

From Wikipedia, the free encyclopedia

VPN connectivity overview

A virtual private network (VPN) extends a private network across a public network, such as the Internet. It enables a computer to send and receive data across shared or public networks as if it were directly connected to the private network, while benefiting from the functionality, security and management policies of the private network.[1] A VPN is created by establishing a virtual point-to-point connection through the use of dedicated connections, virtual tunneling protocols, or traffic encryption.

A virtual private network connection across the Internet is similar to a wide area network (WAN) link between sites. From a user perspective, the extended network resources are accessed in the same way as resources available within the private network.[2]

VPNs allow employees to securely access their company's intranet while traveling outside the office. Similarly, VPNs securely connect geographically disparate offices of an organization, creating one cohesive network. VPN technology is also used by Internet users to connect to proxy servers for the purpose of protecting personal identity and location.

Contents

1 Types
2 Security mechanisms
  2.1 Authentication
3 Routing
  3.1 Provider-provisioned VPN building-blocks
4 User-visible PPVPN services
  4.1 OSI Layer 2 services
  4.2 OSI Layer 3 PPVPN architectures
  4.3 Unencrypted tunnels
5 Trusted delivery networks
6 VPNs in mobile environments
7 See also
8 References
9 Further reading
10 External links

Types

Early data networks allowed VPN-style remote connectivity through dial-up modems or through leased line connections utilizing Frame Relay and Asynchronous Transfer Mode (ATM) virtual circuits, provisioned through a network owned and operated by telecommunication carriers. These networks are not considered true VPNs because they passively secure the data being transmitted by the creation of logical data streams.[3] They have given way to VPNs based on IP and IP/Multiprotocol Label Switching (MPLS) networks, due to significant cost reductions and increased bandwidth[4] provided by new technologies such as Digital Subscriber Line (DSL)[5] and fiber-optic networks.

VPNs can be either remote-access (connecting an individual computer to a network) or site-to-site (connecting two networks together). In a corporate setting, remote-access VPNs allow employees to access their company's intranet from home or while traveling outside the office, and site-to-site VPNs allow employees in geographically disparate offices to share one cohesive virtual network. A VPN can also be used to interconnect two similar networks over a dissimilar middle network; for example, two IPv6 networks over an IPv4 network.[6]

VPN systems may be classified by:

- the protocols used to tunnel the traffic
- the tunnel's termination point location, e.g., on the customer edge or network-provider edge
- whether they offer site-to-site or remote-access connectivity
- the levels of security provided
- the OSI layer they present to the connecting network, such as Layer 2 circuits or Layer 3 network connectivity
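These classification axes can be collected into a small record type. The following is an illustrative sketch only; the type name, field names and example values are hypothetical, not taken from any standard:

```python
# Hypothetical record capturing the VPN classification axes listed
# above. Field names are illustrative, not standardized terminology.
from dataclasses import dataclass

@dataclass
class VPNProfile:
    tunnel_protocol: str  # e.g. "IPsec", "SSL/TLS", "L2TP"
    termination: str      # "customer-edge" or "provider-edge"
    topology: str         # "site-to-site" or "remote-access"
    encrypted: bool       # level of security provided
    osi_layer: int        # 2 (circuits) or 3 (network connectivity)

# A site-to-site office link, classified along all five axes:
office_link = VPNProfile("IPsec", "customer-edge", "site-to-site", True, 3)
print(office_link.topology)  # site-to-site
```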

Security mechanisms

To prevent disclosure of private information, VPNs typically allow only authenticated remote access and make use of encryption techniques. VPNs provide security by the use of tunneling protocols and through security procedures such as encryption. The VPN security model provides:

- confidentiality, such that even if the network traffic is sniffed at the packet level (see network sniffer and deep packet inspection), an attacker would see only encrypted data
- sender authentication, to prevent unauthorized users from accessing the VPN
- message integrity, to detect any instances of tampering with transmitted messages
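The sender-authentication goal can be illustrated with a generic challenge-response over a shared secret using an HMAC. This is a simplified sketch of the idea, not the key-exchange protocol (such as IKE) that a real VPN would use, and the key value is obviously a placeholder:

```python
# Sketch of challenge-response authentication over a pre-shared key:
# the responder proves possession of the key without ever sending it.
import hashlib
import hmac
import os

PSK = b"stored-tunnel-key"  # placeholder pre-shared key on both endpoints

def respond(challenge: bytes, key: bytes) -> bytes:
    """Compute the HMAC-SHA256 response to a challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Initiator sends a fresh random challenge...
challenge = os.urandom(16)
# ...the responder answers using its stored key...
response = respond(challenge, PSK)
# ...and the initiator verifies with its own copy of the key,
# using a constant-time comparison to avoid timing leaks.
ok = hmac.compare_digest(response, respond(challenge, PSK))
print(ok)  # True
```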

Secure VPN protocols include the following:

- Internet Protocol Security (IPsec) was initially developed by the Internet Engineering Task Force (IETF) for IPv6, and was required in all standards-compliant implementations of IPv6 before RFC 6434 made it only a recommendation.[7] This standards-based security protocol is also widely used with IPv4 and the Layer 2 Tunneling Protocol. Its design meets most security goals: authentication, integrity, and confidentiality. IPsec uses encryption, encapsulating an IP packet inside an IPsec packet. De-encapsulation happens at the end of the tunnel, where the original IP packet is decrypted and forwarded to its intended destination.
- Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic (as it does in the OpenVPN project and SoftEther VPN project[8]) or secure an individual connection. A number of vendors provide remote-access VPN capabilities through SSL. An SSL VPN can connect from locations where IPsec runs into trouble with Network Address Translation and firewall rules.
- Datagram Transport Layer Security (DTLS) is used in Cisco AnyConnect VPN and in OpenConnect VPN[9] to solve the issues SSL/TLS has with tunneling over UDP.
- Microsoft Point-to-Point Encryption (MPPE) works with the Point-to-Point Tunneling Protocol and in several compatible implementations on other platforms.
- Microsoft Secure Socket Tunneling Protocol (SSTP) tunnels Point-to-Point Protocol (PPP) or Layer 2 Tunneling Protocol traffic through an SSL 3.0 channel. (SSTP was introduced in Windows Server 2008 and in Windows Vista Service Pack 1.)
- Multi Path Virtual Private Network (MPVPN). Ragula Systems Development Company owns the registered trademark "MPVPN".[10]
- Secure Shell (SSH) VPN: OpenSSH offers VPN tunneling (distinct from port forwarding) to secure remote connections to a network or to inter-network links. The OpenSSH server provides a limited number of concurrent tunnels. The VPN feature itself does not support personal authentication.[11][12][13]
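The encapsulate/de-encapsulate round trip described for IPsec can be shown with a toy sketch. The 8-byte outer header here is a stand-in for the real outer IP/ESP headers, and no actual encryption is performed; the point is only that the inner packet travels as opaque payload and is restored unchanged at the far end of the tunnel:

```python
# Toy illustration of tunnel-mode encapsulation: the original packet
# becomes the payload of an outer packet and is recovered intact at
# the tunnel endpoint. Real IPsec adds encryption and integrity
# protection on top of this basic wrapping.
import struct

def encapsulate(inner_packet: bytes, tunnel_id: int) -> bytes:
    """Prefix a 4-byte tunnel id and 4-byte length (a fake outer header)."""
    return struct.pack("!II", tunnel_id, len(inner_packet)) + inner_packet

def decapsulate(outer_packet: bytes) -> bytes:
    """Strip the fake outer header and return the original packet."""
    tunnel_id, length = struct.unpack("!II", outer_packet[:8])
    return outer_packet[8:8 + length]

original = b"\x45\x00\x00\x14"  # a few bytes standing in for an IP packet
assert decapsulate(encapsulate(original, tunnel_id=7)) == original
```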

Authentication

Tunnel endpoints must be authenticated before secure VPN tunnels can be established. User-created remote-access VPNs may use passwords, biometrics, two-factor authentication or other cryptographic methods. Network-to-network tunnels often use passwords or digital certificates. They permanently store the key to allow the tunnel to be established automatically, without intervention from the user.

Routing

Tunneling protocols can operate in a point-to-point network topology that would theoretically not be considered a VPN, because a VPN by definition is expected to support arbitrary and changing sets of network nodes. But since most router implementations support a software-defined tunnel interface, customer-provisioned VPNs often are simply defined tunnels running conventional routing protocols.

Provider-provisioned VPN building-blocks

Depending on whether a provider-provisioned VPN (PPVPN) operates in layer 2 or layer 3, the building blocks described below may be L2 only, L3 only, or a combination of both. Multiprotocol Label Switching (MPLS) functionality blurs the L2-L3 identity.

RFC 4026 generalized the following terms to cover L2 and L3 VPNs, but they were introduced in RFC 2547.[14] More information on the devices below can also be found in Lewis, Cisco Press.[15]

Customer (C) devices
A device that is within a customer's network and not directly connected to the service provider's network. C devices are not aware of the VPN.

Customer Edge device (CE)
A device at the edge of the customer's network which provides access to the PPVPN. Sometimes it is just a demarcation point between provider and customer responsibility. Other providers allow customers to configure it.

Provider edge device (PE)
A PE is a device, or set of devices, at the edge of the provider network which connects to customer networks through CE devices and presents the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and maintain VPN state.

Provider device (P)
A P device operates inside the provider's core network and does not directly interface to any customer endpoint. It might, for example, provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example, by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, often are high-capacity optical links between major locations of providers.

User-visible PPVPN services

This section deals with the types of VPN considered in the IETF.

OSI Layer 2 services

Virtual LAN

A Layer 2 technique that allows for the coexistence of multiple LAN broadcast domains, interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols have been used but have become obsolete, including Inter-Switch Link (ISL), IEEE 802.10 (originally a security protocol but a subset was introduced for trunking), and ATM LAN Emulation (LANE).

Virtual private LAN service (VPLS)
Developed by the IEEE, VLANs allow multiple tagged LANs to share common trunking. VLANs frequently comprise only customer-owned facilities. Whereas VPLS as described in the above section (OSI Layer 2 services) supports emulation of both point-to-point and point-to-multipoint topologies, the method discussed here extends Layer 2 technologies such as 802.1d and 802.1q LAN trunking to run over transports such as Metro Ethernet. As used in this context, a VPLS is a Layer 2 PPVPN, rather than a private line, emulating the full functionality of a traditional local area network (LAN). From a user standpoint, a VPLS makes it possible to interconnect several LAN segments over a packet-switched or optical provider core, a core transparent to the user, making the remote LAN segments behave as one single LAN.[16] In a VPLS, the provider network emulates a learning bridge, which optionally may include VLAN service.

Pseudo wire (PW)
PW is similar to VPWS, but it can provide different L2 protocols at both ends. Typically, its interface is a WAN protocol such as Asynchronous Transfer Mode or Frame Relay. In contrast, when aiming to provide the appearance of a LAN contiguous between two or more locations, the Virtual Private LAN service or IPLS would be appropriate.

Ethernet over IP tunneling
EtherIP (RFC 3378) is an Ethernet over IP tunneling protocol specification. EtherIP has only a packet encapsulation mechanism; it has no confidentiality or message integrity protection. EtherIP was introduced in the FreeBSD network stack[17] and the SoftEther VPN[18] server program.
IP-only LAN-like service (IPLS)
A subset of VPLS, in which the CE devices must have L3 capabilities; an IPLS presents packets rather than frames. It may support IPv4 or IPv6.

OSI Layer 3 PPVPN architectures

This section discusses the main architectures for PPVPNs: one where the PE disambiguates duplicate addresses in a single routing instance, and the other, virtual router, in which the PE contains a virtual router instance per VPN. The former approach, and its variants, have gained the most attention.

One of the challenges of PPVPNs involves different customers using the same address space, especially the IPv4 private address space.[19] The provider must be able to disambiguate overlapping addresses in the multiple customers' PPVPNs.
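The Route Distinguisher mechanism described next (under BGP/MPLS PPVPN, RFC 2547) resolves this overlap by prefixing each customer address. A sketch of the resulting 12-byte VPN-IPv4 value, built with standard-library packing; the RD values are illustrative:

```python
# Sketch of the 12-byte VPN-IPv4 address used by BGP/MPLS VPNs
# (RFC 2547): an 8-byte Route Distinguisher (RD) followed by the
# 4-byte IPv4 address. Distinct RDs keep identical private
# addresses from colliding on the same PE.
import ipaddress
import struct

def vpn_ipv4(rd: int, addr: str) -> bytes:
    """Build a 12-byte VPN-IPv4 address: 8-byte RD + 4-byte IPv4."""
    return struct.pack("!Q", rd) + ipaddress.IPv4Address(addr).packed

# Two customers both using the private address 10.0.0.1
# remain distinguishable once their RDs are prepended:
a = vpn_ipv4(rd=100, addr="10.0.0.1")
b = vpn_ipv4(rd=200, addr="10.0.0.1")
assert a != b and len(a) == 12
```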

BGP/MPLS PPVPN
In the method defined by RFC 2547, BGP extensions advertise routes in the IPv4 VPN address family, which take the form of 12-byte strings beginning with an 8-byte Route Distinguisher (RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in the same PE. PEs understand the topology of each VPN, which are interconnected with MPLS tunnels, either directly or via P routers. In MPLS terminology, the P routers are Label Switch Routers without awareness of VPNs.

Virtual router PPVPN
The virtual router architecture,[20][21] as opposed to BGP/MPLS techniques, requires no modification to existing routing protocols such as BGP. By the provisioning of logically independent routing domains, the customer operating a VPN is completely responsible for the address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their label, but do not need route distinguishers.

Unencrypted tunnels

Main article: Tunneling protocol

Some virtual networks may not use encryption to protect the privacy of data. While VPNs often provide security, an unencrypted overlay network does not neatly fit within the secure or trusted categorization. For example, a tunnel set up between two hosts using Generic Routing Encapsulation (GRE) would in fact be a virtual private network, but neither secure nor trusted. Native plaintext tunneling protocols include Layer 2 Tunneling Protocol (L2TP) when it is set up without IPsec, and Point-to-Point Tunneling Protocol (PPTP) or Microsoft Point-to-Point Encryption (MPPE).

Trusted delivery networks

Trusted VPNs do not use cryptographic tunneling, and instead rely on the security of a single provider's network to protect the traffic.[22]

- Multi-Protocol Label Switching (MPLS) often overlays VPNs, often with quality-of-service control over a trusted delivery network.
- Layer 2 Tunneling Protocol (L2TP)[23] is a standards-based replacement for, and a compromise taking the good features from, two proprietary VPN protocols: Cisco's Layer 2 Forwarding (L2F)[24] (obsolete as of 2009) and Microsoft's Point-to-Point Tunneling Protocol (PPTP).[25]

From the security standpoint, VPNs either trust the underlying delivery network, or must enforce security with mechanisms in the VPN itself. Unless the trusted delivery network runs among physically secure sites only, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.

VPNs in mobile environments

Main article: Mobile virtual private network

Mobile VPNs are used in settings where an endpoint of the VPN is not fixed to a single IP address, but instead roams across various networks such as data networks from cellular carriers or between multiple Wi-Fi access points.[26] Mobile VPNs have been widely used in public safety, where they give law enforcement officers access to mission-critical applications, such as computer-assisted dispatch and criminal databases, while they travel between different subnets of a mobile network.[27] They are also used in field service management and by healthcare organizations,[28] among other industries.

Increasingly, mobile VPNs are being adopted by mobile professionals who need reliable connections.[28] They are used for roaming seamlessly across networks and in and out of wireless coverage areas without losing application sessions or dropping the secure VPN session. A conventional VPN cannot survive such events because the network tunnel is disrupted, causing applications to disconnect, time out,[26] fail, or even cause the computing device itself to crash.[28]

Instead of logically tying the endpoint of the network tunnel to the physical IP address, each tunnel is bound to a permanently associated IP address at the device. The mobile VPN software handles the necessary network authentication and maintains the network sessions in a manner transparent to the application and the user.[26] The Host Identity Protocol (HIP), under study by the Internet Engineering Task Force, is designed to support mobility of hosts by separating the role of IP addresses for host identification from their locator functionality in an IP network. With HIP a mobile host maintains its logical connections established via the host identity identifier while associating with different IP addresses when roaming between access networks.

See also

- Anonymizer
- Opportunistic encryption
- Split tunneling
- Mediated VPN
- VPNBook
- OpenVPN
- BartVPN
- UT-VPN
- Tinc (protocol)
- DMVPN (Dynamic Multipoint VPN)
- Virtual Private LAN Service over MPLS
- Ethernet Virtual Private LAN (EVP-LAN or E-LAN) defined by MEF
- MPLS
- SoftEther VPN, another open-source VPN program which supports the SSL-VPN, IPsec, L2TP, OpenVPN, EtherIP and SSTP protocols listed in the Security mechanisms section.[29]

References

1. Mason, Andrew G. Cisco Secure Virtual Private Network. Cisco Press, 2002, p. 7.
2. Microsoft Technet. "Virtual Private Networking: An Overview".
3. Cisco Systems, et al. Internetworking Technologies Handbook, Third Edition. Cisco Press, 2000, p. 232.
4. Lewis, Mark. Comparing, Designing, and Deploying VPNs. Cisco Press, 2006, p. 5.
5. International Engineering Consortium. Digital Subscriber Line 2001. Intl. Engineering Consortium, 2001, p. 40.
6. Technet Lab. "IPv6 traffic over VPN connections".
7. RFC 6434, "IPv6 Node Requirements", E. Jankiewicz, J. Loughney, T. Narten (December 2011).
8. SoftEther VPN: Using HTTPS Protocol to Establish VPN Tunnels.
9. "OpenConnect". Retrieved 2013-04-08. "OpenConnect is a client for Cisco's AnyConnect SSL VPN [...] OpenConnect is not officially supported by, or associated in any way with, Cisco Systems. It just happens to interoperate with their equipment."
10. Trademark Applications and Registrations Retrieval (TARR).
11. OpenBSD ssh manual page, VPN section.
12. Unix Toolbox section on SSH VPN.
13. Ubuntu SSH VPN how-to.
14. E. Rosen & Y. Rekhter (March 1999). "RFC 2547 BGP/MPLS VPNs". Internet Engineering Task Force (IETF).
15. Lewis, Mark (2006). Comparing, Designing, and Deploying VPNs (1st ed.). Indianapolis, Ind.: Cisco Press. p. 56. ISBN 1587051796.
16. Ethernet Bridging (OpenVPN).
17. Glyn M Burton: RFC 3378 EtherIP with FreeBSD, 03 February 2011.
18. News: Multi-protocol SoftEther VPN becomes open source, January 2014.
19. Address Allocation for Private Internets, RFC 1918, Y. Rekhter et al., February 1996.
20. RFC 2917, A Core MPLS IP VPN Architecture.
21. RFC 2918, E. Chen (September 2000).
22. Cisco Systems, Inc. (2004). Internetworking Technologies Handbook. Networking Technology Series (4th ed.). Cisco Press. p. 233. ISBN 9781587051197. Retrieved 2013-02-15. "[...] VPNs using dedicated circuits, such as Frame Relay [...] are sometimes called trusted VPNs, because customers trust that the network facilities operated by the service providers will not be compromised."
23. Layer Two Tunneling Protocol "L2TP", RFC 2661, W. Townsley et al., August 1999.
24. IP Based Virtual Private Networks, RFC 2341, A. Valencia et al., May 1998.
25. Point-to-Point Tunneling Protocol (PPTP), RFC 2637, K. Hamzeh et al., July 1999.
26. Phifer, Lisa. "Mobile VPN: Closing the Gap", July 16, 2006.
27. Willett, Andy. "Solving the Computing Challenges of Mobile Officers", May 2006.
28. Cheng, Roger. "Lost Connections", The Wall Street Journal, December 11, 2007.
29. News: Multi-protocol SoftEther VPN becomes open source, January 2014.

Further reading

- Kelly, Sean (August 2001). "Necessity is the mother of VPN invention". Communication News: 26-28. ISSN 0010-3632. Archived from the original on 2001-12-17.
- "VPN Buyers Guide". Communication News: 34-38. August 2001. ISSN 0010-3632.


Wireless LAN


This notebook computer is connected to a wireless access point using a PC Card wireless card.

An example of a Wi-Fi network

A wireless local area network (WLAN) links two or more devices using some wireless distribution method (typically spread-spectrum or OFDM radio), usually providing a connection through an access point to the wider Internet. This gives users the ability to move around within a local coverage area and still be connected to the network. Most modern WLANs are based on IEEE 802.11 standards, marketed under the Wi-Fi brand name.

Wireless LANs have become popular in the home due to ease of installation, and in commercial complexes offering wireless access to their customers, often for free. New York City, for instance, has begun a pilot program to provide city workers in all five boroughs of the city with wireless Internet access.[1]

An embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 mini PCI Wi-Fi card widely used by wireless Internet service providers (WISPs)

Contents

1 History
2 Architecture
  2.1 Stations
  2.2 Basic service set
  2.3 Extended service set
  2.4 Distribution system
3 Types of wireless LANs
  3.1 Peer-to-peer
  3.2 Bridge
  3.3 Wireless distribution system
4 Roaming
5 Applications
6 References

History

Norman Abramson, a professor at the University of Hawaii, developed the world's first wireless computer communication network, ALOHAnet (operational in 1971), using low-cost ham-like radios. The system included seven computers deployed over four islands to communicate with the central computer on Oahu without using phone lines.[2]

"In 1979, F.R. Gfeller and U. Bapst published a paper in the IEEE Proceedings reporting an experimental wireless local area network using diffused infrared communications. Shortly thereafter, in 1980, P. Ferrert reported on an experimental application of a single code spread spectrum radio for wireless terminal communications in the IEEE National Telecommunications Conference. In May 1985, the efforts of Marcus led the FCC to announce experimental ISM bands for commercial application of spread spectrum technology. Later on, M. Kavehrad reported on an experimental wireless PBX system using code division multiple access. These efforts prompted significant industrial activities in the development of a new generation of wireless local area networks and it updated several old discussions in the portable and mobile radio industry.

The first generation of wireless data modems was developed in the early 1980s by amateur communication groups. They added a voice band data communication modem, with data rates below 9600 bps, to an existing short distance radio system such as a walkie-talkie. The second generation of wireless modems was developed immediately after the FCC announcement in the experimental bands for non-military use of the spread spectrum technology. These modems provided data rates on the order of hundreds of kbps. The third generation of wireless modems now aims at compatibility with existing LANs, with data rates on the order of Mbps. Currently, several companies are developing third generation products with data rates above 1 Mbps and a couple of products have already been announced."[3]

54 Mbit/s WLAN PCI Card (802.11g)

"The first of the IEEE Workshops on Wireless LAN was held in 1991. At that time early wireless LAN products had just appeared in the market and the IEEE 802.11 committee had just started its activities to develop a standard for wireless LANs. The focus of that first workshop was evaluation of the alternative technologies. By 1996, the technology was relatively mature, a variety of applications had been identified and addressed and technologies that enable these applications were well understood. Chip sets aimed at wireless LAN implementations and applications, a key enabling technology for rapid market growth, were emerging in the market. Wireless LANs were being used in hospitals, stock exchanges, and in building and campus settings for nomadic access, point-to-point LAN bridges, ad hoc networking, and even larger applications through internetworking. The IEEE 802.11 standard and variants and alternatives, such as the wireless LAN interoperability forum and the European HiperLAN specification, had made rapid progress, and the Unlicensed Personal Communications Services bands and the proposed SUPERNet, later renamed U-NII, bands also presented new opportunities."[4]

WLAN hardware initially cost so much that it was only used as an alternative to cabled LAN in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by standards, primarily the various versions of IEEE 802.11 (in products using the Wi-Fi brand name). An alternative ATM-like 5 GHz standardized technology, HiperLAN/2, has so far not succeeded in the market, and with the release of the faster 54 Mbit/s 802.11a (5 GHz) and 802.11g (2.4 GHz) standards, it is even more unlikely that it will ever succeed.

In 2009, 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are able to utilise both wireless bands, known as dual-band. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band is also wider than the 2.4 GHz band, with more channels, which permits a greater number of devices to share the space. Not all channels are available in all regions.

A HomeRF group formed in 1997 to promote a technology aimed at residential use, but it disbanded at the end of 2002.[5]

Architecture

Stations

All components that can connect into a wireless medium in a network are referred to as stations. All stations are equipped with wireless network interface controllers (WNICs). Wireless stations fall into one of two categories: access points and clients. Access points (APs), normally routers, are base stations for the wireless network. They transmit and receive radio frequencies for wireless-enabled devices to communicate with. Wireless clients can be mobile devices such as laptops, personal digital assistants, IP phones and other smartphones, or fixed devices such as desktops and workstations that are equipped with a wireless network interface.

Basic service set

The basic service set (BSS) is the set of all stations that can communicate with each other. Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS. There are two types of BSS: independent BSS (also referred to as IBSS) and infrastructure BSS. An independent BSS (IBSS) is an ad hoc network that contains no access points, which means it cannot connect to any other basic service set.
Extended service set

An extended service set (ESS) is a set of connected BSSs. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID, which is a 32-byte (maximum) character string.

Distribution system

A distribution system (DS) connects access points in an extended service set. The concept of a DS can be used to increase network coverage through roaming between cells. A DS can be wired or wireless. Current wireless distribution systems are mostly based on WDS or MESH protocols, though other systems are in use.
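The identifier rules above (a BSSID is the access point's 6-byte MAC address; an SSID is at most 32 bytes) can be captured in two small helpers. This is an illustrative sketch, not an 802.11 API:

```python
# Helpers reflecting the 802.11 identifiers described above.

def valid_ssid(ssid: str) -> bool:
    """An SSID may be at most 32 bytes once encoded."""
    return 0 < len(ssid.encode("utf-8")) <= 32

def format_bssid(mac: bytes) -> str:
    """Render a 6-byte MAC address as a conventional BSSID string."""
    assert len(mac) == 6
    return ":".join(f"{b:02x}" for b in mac)

print(valid_ssid("HomeNetwork"))                   # True
print(valid_ssid("x" * 33))                        # False: 33 bytes > 32
print(format_bssid(b"\x00\x1a\x2b\x3c\x4d\x5e"))   # 00:1a:2b:3c:4d:5e
```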

Types of wireless LANs

The IEEE 802.11 standard has two basic modes of operation: ad hoc mode and infrastructure mode. In ad hoc mode, mobile units transmit directly peer-to-peer. In infrastructure mode, mobile units communicate through an access point that serves as a bridge to other networks (such as the Internet or a LAN). Since wireless communication uses a more open medium for communication in comparison to wired LANs, the 802.11 designers also included encryption mechanisms, Wired Equivalent Privacy (WEP, now insecure) and Wi-Fi Protected Access (WPA, WPA2), to secure wireless computer networks. Many access points will also offer Wi-Fi Protected Setup, a quick (but now insecure) method of joining a new device to an encrypted network.

Peer-to-peer

Peer-to-Peer or ad hoc wireless LAN

An ad hoc network (not the same as a Wi-Fi Direct network[6]) is a network where stations communicate only peer to peer (P2P). There is no base and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS).

A Wi-Fi Direct network is another type of network where stations communicate peer to peer. In a Wi-Fi P2P group, the group owner operates as an access point and all other devices are clients. There are two main methods to establish a group owner in a Wi-Fi Direct group. In one approach, the user sets up a P2P group owner manually. This method is also known as autonomous group owner (autonomous GO). In the second method, also called negotiation-based group creation, two devices compete based on the group owner intent value. The device with the higher intent value becomes the group owner and the second device becomes a client. The group owner intent value can depend on whether the wireless device performs a cross-connection between an infrastructure WLAN service and a P2P group, remaining power in the wireless device, whether the wireless device is already a group owner in another group, and/or the received signal strength of the first wireless device.

A peer-to-peer (P2P) network allows wireless devices to directly communicate with each other. Wireless devices within range of each other can discover and communicate directly without involving central access points. This method is typically used by two computers so that they can connect to each other to form a network. If a signal strength meter is used in this situation, it may not read the strength accurately and can be misleading, because it registers the strength of the strongest signal, which may be the closest computer.
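The negotiation-based group creation described above can be sketched as follows. The tie handling is simplified to a boolean argument, whereas real devices exchange a tie-breaker bit during the GO negotiation frames; the function and its values are illustrative, not a Wi-Fi Direct API:

```python
# Sketch of Wi-Fi Direct group owner (GO) negotiation: the device
# advertising the higher GO intent value becomes the group owner,
# and the other device becomes a client. Equal intents are resolved
# by a tie-breaker (modeled here as a boolean favoring device A).
def negotiate_go(intent_a: int, intent_b: int, a_tie_breaker: bool) -> str:
    """Return which device ('A' or 'B') becomes the group owner."""
    if intent_a > intent_b:
        return "A"
    if intent_b > intent_a:
        return "B"
    return "A" if a_tie_breaker else "B"

# Device A advertises a higher intent (e.g. it is mains-powered and
# cross-connected to an infrastructure WLAN), so it becomes GO:
print(negotiate_go(10, 4, a_tie_breaker=False))  # A
print(negotiate_go(6, 6, a_tie_breaker=True))    # A (tie broken)
```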

Hidden node problem: devices A and C are both communicating with B, but are unaware of each other
IEEE 802.11 defines the physical layer (PHY) and MAC (Media Access Control) layers based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The 802.11 specification includes provisions designed to minimize collisions, because two mobile units may both be in range of a common access point but out of range of each other.
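The hidden-node scenario in the caption can be made concrete with a toy carrier-sensing model. The topology and helper function below are illustrative assumptions, not part of 802.11:

```python
# Toy hidden-node illustration: A and C are each in range of B but not
# of each other, so carrier sensing at A never detects C's transmission
# (and vice versa), and their frames collide at B.
in_range = {("A", "B"), ("B", "A"), ("C", "B"), ("B", "C")}

def senses_busy(station, transmitters):
    """True if `station` can hear any currently transmitting station."""
    return any((tx, station) in in_range for tx in transmitters if tx != station)

# C is already transmitting to B; A performs carrier sense:
print(senses_busy("A", {"C"}))  # False -> A transmits too, colliding at B
print(senses_busy("B", {"C"}))  # True  -> B hears C
```

This is exactly the failure mode CSMA/CA's RTS/CTS provisions are designed to mitigate.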

Bridge
A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows devices on a wired Ethernet network to connect to a wireless network; the bridge acts as the connection point to the wireless LAN.
Wireless distribution system
Main article: Wireless Distribution System
A wireless distribution system (WDS) enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the wired backbone traditionally required to link them. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client packets across links between access points.[7] An access point can be a main, relay or remote base station. A main base station is typically connected to the wired Ethernet. A relay base station relays data between remote base stations, wireless clients or other relay stations and either a main or another relay base station. A remote base station accepts connections from wireless clients and passes them to relay or main stations. Connections between "clients" are made using MAC addresses rather than by specifying IP assignments. All base stations in a WDS must be configured to use the same radio channel and to share WEP or WPA keys if those are used; they can, however, be configured with different service set identifiers. WDS also requires every base station to be configured to forward to the others in the system, as mentioned above. WDS may also be referred to as repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging); note, however, that throughput in this method is halved for all wirelessly connected clients. When it is difficult to connect all of the access points in a network by wire, it is also possible to put up access points as repeaters.
Roaming

Roaming among wireless local area networks
There are two definitions for wireless LAN roaming:

Internal roaming (1): The mobile station (MS) moves from one access point (AP) to another AP within a home network because the signal strength is too weak. An authentication server (RADIUS) performs the re-authentication of the MS via 802.1X (e.g. with PEAP). Billing of QoS is handled in the home network. A mobile station roaming from one access point to another often interrupts the flow of data between the mobile station and an application connected to the network. The mobile station, for instance, periodically monitors the presence of alternative access points (ones that will provide a better connection). At some point, based on proprietary mechanisms, the mobile station decides to re-associate with an access point having a stronger wireless signal. The mobile station, however, may lose a connection with an access point before associating with another one. In order to provide reliable connections with applications, the mobile station must generally include software that provides session persistence.[8]
External roaming (2): The MS (client) moves into a WLAN of another wireless Internet service provider (WISP) and takes its services (hotspot). Independently of the home network, the user can use a foreign network if it is open to visitors. There must be special authentication and billing systems for mobile services in a foreign network.

Applications
Wireless LANs have many applications. Modern implementations of WLANs range from small in-home networks to large, campus-sized ones to completely mobile networks on airplanes and trains. Users can access the Internet from WLAN hotspots in restaurants and hotels, and now also with portable devices that connect to 3G or 4G networks. Often these types of public access points require no registration or password to join the network; others can be accessed once registration has occurred and/or a fee is paid.

WiMAX
Worldwide Interoperability for Microwave Access
WiMAX base station equipment with a sector antenna and wireless modem on top
WiMAX (Worldwide Interoperability for Microwave Access) is a wireless communications standard designed to provide 30 to 40 megabit-per-second data rates,[1] with the 2011 update providing up to 1 Gbit/s[1] for fixed stations. The name "WiMAX" was created by the WiMAX Forum, which was formed in June 2001 to promote conformity and interoperability of the standard. The forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL".[2]

Terminology
WiMAX refers to interoperable implementations of the IEEE 802.16 family of wireless-network standards ratified by the WiMAX Forum. (Similarly, Wi-Fi refers to interoperable implementations of the IEEE 802.11 wireless LAN standards certified by the Wi-Fi Alliance.) WiMAX Forum certification allows vendors to sell fixed or mobile products as WiMAX certified, thus ensuring a level of interoperability with other certified products, as long as they fit the same profile. The original IEEE 802.16 standard (now called "Fixed WiMAX") was published in 2001. WiMAX adopted some of its technology from WiBro, a service marketed in Korea.[3]

Mobile WiMAX (originally based on 802.16e-2005) is the revision that was deployed in many countries and the basis of future revisions such as 802.16m-2011. WiMAX is sometimes referred to as "Wi-Fi on steroids"[4] and can be used for a number of applications including broadband connections, cellular backhaul and hotspots. It is similar to Wi-Fi, but it enables usage at much greater distances.[5]
Uses
The bandwidth and range of WiMAX make it suitable for the following potential applications:

Providing portable mobile broadband connectivity across cities and countries through a variety of devices.
Providing a wireless alternative to cable and digital subscriber line (DSL) for "last mile" broadband access.
Providing data, telecommunications (VoIP) and IPTV services (triple play).
Providing a source of Internet connectivity as part of a business continuity plan.
Smart grids and metering.

Internet access
WiMAX can provide at-home or mobile Internet access across whole cities or countries. In many cases this has resulted in competition in markets which typically had access only through an existing incumbent DSL (or similar) operator. Additionally, given the relatively low costs associated with the deployment of a WiMAX network (in comparison with 3G, HSDPA, xDSL, HFC or FTTx), it is now economically viable to provide last-mile broadband Internet access in remote locations.
Middle-mile backhaul to fibre networks
Mobile WiMAX was a replacement candidate for cellular phone technologies such as GSM and CDMA, or can be used as an overlay to increase capacity. Fixed WiMAX is also considered as a wireless backhaul technology for 2G, 3G, and 4G networks in both developed and developing nations.[6][7] In North America, backhaul for urban operations is typically provided via one or more copper wire line connections, whereas remote cellular operations are sometimes backhauled via satellite. In other regions, urban and rural backhaul is usually provided by microwave links. (The exception to this is where the network is operated by an incumbent with ready access to the copper network.) WiMAX has more substantial backhaul bandwidth requirements than legacy cellular applications. Consequently, the use of wireless microwave backhaul is on the rise in North America, and existing microwave backhaul links in all regions are being upgraded.[8] Capacities of between 34 Mbit/s and 1 Gbit/s[9] are routinely being deployed with latencies on the order of 1 ms. In many cases, operators are aggregating sites using wireless technology and then presenting traffic onto fiber networks where convenient. In this application WiMAX competes with microwave, E-Line and simple extension of the fiber network itself.

Triple-play
WiMAX directly supports the technologies that make triple-play service offerings possible (such as Quality of Service and multicasting). These are inherent to the WiMAX standard rather than being added on, as Carrier Ethernet is to Ethernet. On May 7, 2008 in the United States, Sprint Nextel, Google, Intel, Comcast, Bright House, and Time Warner announced a pooling of an average of 120 MHz of spectrum and merged with Clearwire to market the service. The new company hopes to benefit from combined service offerings and network resources as a springboard past its competitors. The cable companies will provide media services to other partners while gaining access to the wireless network as a mobile virtual network operator to provide triple-play services. Some analysts[who?] questioned how the deal will work out: although fixed-mobile convergence has been a recognized factor in the industry, prior attempts to form partnerships among wireless and cable companies have generally failed to lead to significant benefits to the participants. Other analysts point out that as wireless progresses to higher bandwidth, it inevitably competes more directly with cable and DSL, inspiring competitors into collaboration. Also, as wireless broadband networks grow denser and usage habits shift, the need for increased backhaul and media service will accelerate; therefore the opportunity to leverage cable assets is expected to increase.
Deployment

WiMAX access was used to assist with communications[10] in Aceh, Indonesia, after the tsunami in December 2004. All communication infrastructure in the area, other than amateur radio, was destroyed[citation needed], making the survivors unable to communicate with people outside the disaster area and vice versa. WiMAX provided broadband access that helped regenerate communication to and from Aceh.[citation needed] WiMAX hardware was donated by Intel Corporation to assist the Federal Communications Commission (FCC) and FEMA in their communications efforts in the areas affected by Hurricane Katrina.[10][11] In practice, volunteers used mainly self-healing mesh, Voice over Internet Protocol (VoIP), and a satellite uplink combined with Wi-Fi on the local link.[12]


Connecting
A WiMAX USB modem for mobile Internet
Devices that provide connectivity to a WiMAX network are known as subscriber stations (SS).

Portable units include handsets (similar to cellular smartphones); PC peripherals (PC Cards or USB dongles); and embedded devices in laptops, which are now available for Wi-Fi services. In addition, there is much emphasis by operators on consumer electronics devices such as gaming consoles, MP3 players and similar devices. WiMAX is more similar to Wi-Fi than to other 3G cellular technologies. The WiMAX Forum website provides a list of certified devices. However, this is not a complete list of available devices, as certified modules are embedded into laptops, MIDs (Mobile Internet Devices), and other private-labeled devices.
Gateways
WiMAX gateway devices are available in both indoor and outdoor versions from several manufacturers including Vecima Networks, Alvarion, Airspan, ZyXEL, Huawei, and Motorola. The list of deployed WiMAX networks and the WiMAX Forum membership list[13] provide more links to specific vendors, products and installations. The list of vendors and networks is not comprehensive and is not intended as an endorsement of these companies above others. Many of the WiMAX gateways offered by manufacturers such as these are stand-alone self-install indoor units. Such devices typically sit near the customer's window with the best signal, and provide:

An integrated Wi-Fi access point, providing WiMAX Internet connectivity to multiple devices throughout the home or business.
Ethernet ports to connect directly to a computer, router, printer or DVR on a local wired network.
One or two analog telephone jacks to connect a land-line phone and take advantage of VoIP.

Indoor gateways are convenient, but radio losses mean that the subscriber may need to be significantly closer to the WiMAX base station than with professionally installed external units. Outdoor units are roughly the size of a laptop PC, and their installation is comparable to the installation of a residential satellite dish. A higher-gain directional outdoor unit will generally result in greatly increased range and throughput, but with the obvious loss of practical mobility of the unit.
External modems
USB can provide connectivity to a WiMAX network through what is called a dongle.[14] Generally these devices are connected to a notebook or netbook computer. Dongles typically have omnidirectional antennas of lower gain compared to other devices; as such, they are best used in areas of good coverage.
Mobile phones
HTC announced the first WiMAX-enabled mobile phone, the Max 4G, on November 12, 2008.[15] The device was only available in certain markets in Russia on the Yota network. HTC and Sprint Nextel released the second WiMAX-enabled mobile phone, the EVO 4G, on March 23, 2010 at the CTIA conference in Las Vegas. The device, made available on June 4, 2010,[16] is capable of both EV-DO (3G) and WiMAX (pre-4G), as well as simultaneous data and voice sessions. Sprint Nextel announced at CES 2012 that it would no longer offer devices using WiMAX technology due to financial circumstances; instead, along with its network partner Clearwire, Sprint Nextel would roll out a 4G network using LTE technology.
Technical information


The IEEE 802.16 standard
WiMAX is based upon IEEE Std 802.16e-2005,[17] approved in December 2005. It is a supplement to IEEE Std 802.16-2004,[18] and so the actual standard is 802.16-2004 as amended by 802.16e-2005; the two specifications thus need to be considered together. IEEE 802.16e-2005 improves upon IEEE 802.16-2004 by:

Adding support for mobility (soft and hard handover between base stations). This is seen as one of the most important aspects of 802.16e-2005, and is the very basis of Mobile WiMAX.
Scaling the fast Fourier transform (FFT) to the channel bandwidth in order to keep the carrier spacing constant across different channel bandwidths (typically 1.25 MHz, 5 MHz, 10 MHz or 20 MHz). Constant carrier spacing results in higher spectrum efficiency in wide channels and a cost reduction in narrow channels; this is also known as scalable OFDMA (SOFDMA). Other bands not multiples of 1.25 MHz are defined in the standard, but because the allowed FFT subcarrier numbers are only 128, 512, 1024 and 2048, other frequency bands will not have exactly the same carrier spacing, which might not be optimal for implementations. Carrier spacing is 10.94 kHz.
Advanced antenna diversity schemes and hybrid automatic repeat-request (HARQ).
Adaptive antenna systems (AAS) and MIMO technology.
Denser sub-channelization, thereby improving indoor penetration.
Introducing low-density parity check (LDPC) coding.
Introducing downlink sub-channelization, allowing administrators to trade coverage for capacity or vice versa.
Adding an extra quality of service (QoS) class for VoIP applications.
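The constant-carrier-spacing claim in the list above can be checked numerically. The sketch below assumes the 28/25 sampling factor that 802.16e applies to channel bandwidths that are multiples of 1.25 MHz; scaling the FFT size with bandwidth then yields the same 10.94 kHz subcarrier spacing in every case:

```python
# Numeric check of scalable OFDMA (SOFDMA): subcarrier spacing =
# bandwidth * sampling_factor / FFT_size, constant when the FFT size
# scales with the channel bandwidth.
SAMPLING_FACTOR = 28 / 25  # for bandwidths that are multiples of 1.25 MHz

def subcarrier_spacing_khz(bandwidth_mhz, fft_size):
    """Subcarrier spacing in kHz for a given channel and FFT size."""
    return bandwidth_mhz * 1e3 * SAMPLING_FACTOR / fft_size

for bw, fft in [(1.25, 128), (5, 512), (10, 1024), (20, 2048)]:
    print(bw, fft, subcarrier_spacing_khz(bw, fft))  # 10.9375 kHz each time
```

This is why bands that are not multiples of 1.25 MHz cannot land on exactly the same spacing: the allowed FFT sizes are restricted to 128, 512, 1024 and 2048.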

SOFDMA (used in 802.16e-2005) and OFDM256 (802.16d) are not compatible; thus equipment will have to be replaced if an operator is to move to the later standard (e.g., from Fixed WiMAX to Mobile WiMAX).
Physical layer

The original version of the standard on which WiMAX is based (IEEE 802.16) specified a physical layer operating in the 10 to 66 GHz range. 802.16a, updated in 2004 to 802.16-2004, added specifications for the 2 to 11 GHz range. 802.16-2004 was updated by 802.16e-2005 in 2005 and uses scalable orthogonal frequency-division multiple access (SOFDMA), as opposed to the fixed orthogonal frequency-division multiplexing (OFDM) version with 256 subcarriers (of which 200 are used) in 802.16d. (OFDM is a method of encoding digital data on multiple carrier frequencies; it has developed into a popular scheme for wideband digital communication, whether wireless or over copper wires, used in applications such as digital television and audio broadcasting.) More advanced versions, including 802.16e, also bring multiple-antenna support through MIMO (see WiMAX MIMO). This brings potential benefits in terms of coverage, self-installation, power consumption, frequency re-use and bandwidth efficiency. WiMAX is the most energy-efficient pre-4G technique among LTE and HSPA+.[19]
Media access control layer
The WiMAX MAC uses a scheduling algorithm for which the subscriber station needs to compete only once, for initial entry into the network. After network entry is allowed, the subscriber station is allocated an access slot by the base station. The time slot can enlarge and contract, but it remains assigned to the subscriber station, which means that other subscribers cannot use it. In addition to being stable under overload and over-subscription, the scheduling algorithm can also be more bandwidth efficient. It also allows the base station to control quality of service (QoS) parameters by balancing the time-slot assignments among the application needs of the subscriber stations.
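The contend-once, then-scheduled behaviour of the WiMAX MAC can be illustrated with a minimal toy scheduler. The class, method names and slot units below are our own invention for illustration, not an API from any WiMAX stack:

```python
# Toy model of WiMAX connection-oriented scheduling: each subscriber
# station (SS) contends only once, at network entry; afterwards the base
# station assigns it a dedicated slot that can grow or shrink but is
# never handed to another SS.
class BaseStationScheduler:
    def __init__(self):
        self.slots = {}  # SS id -> allocated slot size (arbitrary units)

    def network_entry(self, ss_id, initial_slot=1):
        # The only contention happens here, once per subscriber station.
        self.slots.setdefault(ss_id, initial_slot)

    def resize(self, ss_id, new_size):
        # The slot can enlarge or contract but stays with the same SS,
        # which is how the base station enforces per-connection QoS.
        if ss_id in self.slots:
            self.slots[ss_id] = new_size

bs = BaseStationScheduler()
bs.network_entry("SS-1")
bs.network_entry("SS-2", initial_slot=4)
bs.resize("SS-1", 3)
print(bs.slots)  # {'SS-1': 3, 'SS-2': 4}
```

Contrast this with Wi-Fi's CSMA/CA, where every frame exchange is contended; the dedicated-slot model is what makes WiMAX stable under overload and over-subscription.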
Specifications
As a standard intended to satisfy the needs of next-generation data networks (4G), WiMAX is distinguished by its dynamic burst algorithm modulation, adaptive to the physical environment the RF signal travels through. Modulation is chosen to be more spectrally efficient (more bits per OFDM/SOFDMA symbol): when bursts have a high signal strength and a high carrier-to-noise-plus-interference ratio (CINR), they can be more easily decoded using digital signal processing (DSP). In contrast, when operating in less favorable environments for RF communication, the system automatically steps down to a more robust mode (burst profile), which means fewer bits per OFDM/SOFDMA symbol, with the advantage that power per bit is higher and simpler, accurate signal processing can be performed. Burst profiles are applied inversely (algorithmically and dynamically) to signal attenuation, meaning throughput between clients and the base station is determined largely by distance. Maximum distance is achieved with the most robust burst setting, that is, the profile with the largest MAC-frame allocation trade-off, requiring more symbols (a larger portion of the MAC frame) to transmit a given amount of data than if the client were closer to the base station. The clients' MAC frames and their individual burst profiles are defined, as well as the specific time allocation. However, even though this is done automatically, practical deployments should avoid high-interference and multipath environments, because too much interference causes the network to function poorly and can also misrepresent the capability of the network. The system is complex to deploy, as it is necessary to track not only the signal strength and CINR (as in systems like GSM) but also how the available frequencies will be dynamically assigned (resulting in dynamic changes to the available bandwidth). This could lead to cluttered frequencies with slow response times or lost frames. As a result, the system has to be initially designed in consensus with the base station product team to accurately project frequency use, interference, and general product functionality. The Asia-Pacific region has surpassed the North American region in terms of 4G broadband wireless subscribers: there were around 1.7 million pre-WiMAX and WiMAX customers in Asia (29% of the overall market) compared to 1.4 million in the USA and Canada.[20]
Integration with an IP-based network

The WiMAX Forum architecture
The WiMAX Forum has proposed an architecture that defines how a WiMAX network can be connected with an IP-based core network, which is typically chosen by operators that serve as Internet service providers (ISPs). Nevertheless, the WiMAX BS provides seamless integration capabilities with other types of architectures, such as packet-switched mobile networks. The WiMAX Forum proposal defines a number of components, plus some of the interconnections (or reference points) between them, labeled R1 to R5 and R8:

SS/MS: the Subscriber Station/Mobile Station
ASN: the Access Service Network[21]
BS: Base Station, part of the ASN
ASN-GW: the ASN Gateway, part of the ASN
CSN: the Connectivity Service Network
HA: Home Agent, part of the CSN
AAA: Authentication, Authorization and Accounting server, part of the CSN
NAP: a Network Access Provider
NSP: a Network Service Provider

It is important to note that the functional architecture can be designed into various hardware configurations rather than fixed configurations. For example, the architecture is flexible enough to allow remote/mobile stations of varying scale and functionality and base stations of varying size, e.g. femto, pico and mini BS as well as macro.
Spectrum allocation
There is no uniform global licensed spectrum for WiMAX; however, the WiMAX Forum has published three licensed spectrum profiles, 2.3 GHz, 2.5 GHz and 3.5 GHz, in an effort to drive standardisation and decrease cost. In the USA, the biggest segment available is around 2.5 GHz[22] and is already assigned, primarily to Sprint Nextel and Clearwire. Elsewhere in the world, the most likely bands used will be the Forum-approved ones, with 2.3 GHz probably being most important in Asia. Some countries in Asia, like India and Indonesia, will use a mix of 2.5 GHz, 3.3 GHz and other frequencies. Pakistan's Wateen Telecom uses 3.5 GHz. Analog TV bands (700 MHz) may become available for WiMAX usage, but await the complete roll-out of digital TV, and there will be other uses suggested for that spectrum. In the USA the FCC auction for this spectrum began in January 2008 and, as a result, the biggest share of the spectrum went to Verizon Wireless and the next biggest to AT&T.[23] Both of these companies have stated their intention of supporting LTE, a technology which competes directly with WiMAX. EU commissioner Viviane Reding has suggested re-allocation of 500–800 MHz spectrum for wireless communication, including WiMAX.[24] WiMAX profiles define channel size, TDD/FDD and other necessary attributes in order to have interoperating products. The current fixed profiles are defined for both TDD and FDD. At this point, all of the mobile profiles are TDD only. The fixed profiles have channel sizes of 3.5 MHz, 5 MHz, 7 MHz and 10 MHz. The mobile profiles are 5 MHz, 8.75 MHz and 10 MHz.
(Note: the 802.16 standard allows a far wider variety of channels, but only the above subsets are supported as WiMAX profiles.) Since October 2007, the Radiocommunication Sector of the International Telecommunication Union (ITU-R) has decided to include WiMAX technology in the IMT-2000 set of standards.[25] This enables spectrum owners (specifically in the 2.5–2.69 GHz band at this stage) to use WiMAX equipment in any country that recognizes the IMT-2000.
Spectral efficiency
One of the significant advantages of advanced wireless systems such as WiMAX is spectral efficiency. For example, 802.16-2004 (fixed) has a spectral efficiency of 3.7 (bit/s)/Hz, and other 3.5–4G wireless systems offer spectral efficiencies that are similar to within a few tenths of a percent. The notable advantage of WiMAX comes from combining SOFDMA with smart antenna technologies. This multiplies the effective spectral efficiency through multiple re-use and smart network deployment topologies. The direct use of frequency-domain organization simplifies designs using MIMO-AAS compared to CDMA/WCDMA methods, resulting in more effective systems.
Inherent limitations
WiMAX cannot deliver 70 Mbit/s over 50 km (31 mi). Like all wireless technologies, WiMAX can operate at higher bitrates or over longer distances, but not both. Operating at the maximum range of 50 km (31 mi) increases the bit error rate and thus results in a much lower bitrate. Conversely, reducing the range (to under 1 km) allows a device to operate at higher bitrates. A city-wide deployment of WiMAX in Perth, Australia demonstrated that customers at the cell edge with indoor customer-premises equipment (CPE) typically obtain speeds of around 1–4 Mbit/s, with users closer to the cell site obtaining speeds of up to 30 Mbit/s.[citation needed] Like all wireless systems, available bandwidth is shared between users in a given radio sector, so performance could deteriorate in the case of many active users in a single sector. However, with adequate capacity planning and the use of WiMAX's quality of service features, a minimum guaranteed throughput for each subscriber can be put in place. In practice, most users will have a range of 4–8 Mbit/s services, and additional radio cards will be added to the base station to increase the number of users that may be served as required.
Silicon implementations

Picture of a WiMAX MIMO board
A number of specialized companies produced baseband ICs and integrated RFICs for WiMAX subscriber stations in the 2.3, 2.5 and 3.5 GHz bands (refer to 'Spectrum allocation' above). These companies include, but are not limited to, Beceem, Sequans, and PicoChip.
Comparison
Comparisons and confusion between WiMAX and Wi-Fi are frequent, because both are related to wireless connectivity and Internet access.[26]

WiMAX is a long-range system, covering many kilometres, that uses licensed or unlicensed spectrum to deliver connection to a network, in most cases the Internet.
Wi-Fi uses unlicensed spectrum to provide access to a local network.
Wi-Fi is more popular in end-user devices.
Wi-Fi runs on the Media Access Control's CSMA/CA protocol, which is connectionless and contention-based, whereas WiMAX runs a connection-oriented MAC.
WiMAX and Wi-Fi have quite different quality of service (QoS) mechanisms:

WiMAX uses a QoS mechanism based on connections between the base station and the user device. Each connection is based on specific scheduling algorithms.
Wi-Fi uses contention access: all subscriber stations that wish to pass data through a wireless access point (AP) compete for the AP's attention on a random-interrupt basis. This can cause subscriber stations distant from the AP to be repeatedly interrupted by closer stations, greatly reducing their throughput.

Both 802.11 (which includes Wi-Fi) and 802.16 (which includes WiMAX) define peer-to-peer (P2P) and ad hoc networks, where an end user communicates with users or servers on another local area network (LAN) using its access point or base station. However, 802.11 also supports direct ad hoc or peer-to-peer networking between end-user devices without an access point, while 802.16 end-user devices must be in range of the base station.

Although Wi-Fi and WiMAX are designed for different situations, they are complementary. WiMAX network operators typically provide a WiMAX subscriber unit which connects to the metropolitan WiMAX network and provides Wi-Fi within the home or business for local devices (e.g., laptops, Wi-Fi handsets, smartphones). This enables the user to place the WiMAX subscriber unit in the best reception area (such as a window) and still be able to use the WiMAX network from any place within the residence. The local area network inside the home or business operates as with any other wired or wireless network. Connecting the WiMAX subscriber unit directly to a WiMAX-enabled computer or laptop would limit access to a single device; as an alternative, a WiMAX modem with a built-in Wi-Fi router allows multiple devices to be connected into a LAN. Using WiMAX can be an advantage, since it is typically faster than most cable modems, with download speeds between 3–6 Mbit/s, and generally costs less than cable.
Conformance testing
The TTCN-3 test specification language is used for specifying conformance tests for WiMAX implementations. The WiMAX test suite is being developed by a Specialist Task Force at ETSI (STF 252).[27]
Associations
WiMAX Forum
The WiMAX Forum is a non-profit organization formed to promote the adoption of WiMAX-compatible products and services.[28] A major role for the organization is to certify the interoperability of WiMAX products.[29] Those that pass conformance and interoperability testing achieve the "WiMAX Forum Certified" designation, and can display this mark on their products and marketing materials. Some vendors claim that their equipment is "WiMAX-ready", "WiMAX-compliant", or "pre-WiMAX" if it is not officially WiMAX Forum Certified. Another role of the WiMAX Forum is to promote the spread of knowledge about WiMAX. To that end, it has a certified training program that is currently offered in English and French. It also offers a series of member events and endorses some industry events.

WiSOA logo
WiMAX Spectrum Owners Alliance
WiSOA was the first global organization composed exclusively of owners of WiMAX spectrum with plans to deploy WiMAX technology in those bands. WiSOA focused on the regulation, commercialisation, and deployment of WiMAX spectrum in the 2.3–2.5 GHz and the 3.4–3.5 GHz ranges. WiSOA merged with the Wireless Broadband Alliance in April 2008.[30]
Telecommunications Industry Association
In 2011, the Telecommunications Industry Association released three technical standards (TIA-1164, TIA-1143, and TIA-1140) that cover the air interface and core networking aspects of Wi-Max High-Rate Packet Data (HRPD) systems using a Mobile Station/Access Terminal (MS/AT) with a single transmitter.[31]
Competing technologies
Within the marketplace, WiMAX's main competition came from existing, widely deployed wireless systems such as the Universal Mobile Telecommunications System (UMTS), CDMA2000, existing Wi-Fi, and mesh networking.

Speed vs. mobility of wireless systems: Wi-Fi, High Speed Packet Access (HSPA), Universal Mobile Telecommunications System (UMTS), GSM
In the future, competition will come from the evolution of the major cellular standards to 4G: high-bandwidth, low-latency, all-IP networks with voice services built on top. The worldwide move to 4G for GSM/UMTS and AMPS/TIA (including CDMA2000) is the 3GPP Long Term Evolution (LTE) effort. The LTE standard was finalized in December 2008, with the first commercial deployment of LTE carried out by TeliaSonera in Oslo and Stockholm in December 2009. Since then, LTE has seen increasing adoption by mobile carriers around the world. In some areas of the world, the wide availability of UMTS and a general desire for standardization have meant spectrum has not been allocated for WiMAX: in July 2005, the EU-wide frequency allocation for WiMAX was blocked.[citation needed]
Harmonization
Early WirelessMAN standards, the European standard HiperMAN and the Korean standard WiBro, were harmonized as part of WiMAX and are no longer seen as competition but as complementary. All networks now being deployed in South Korea, the home of the WiBro standard, are now WiMAX.
Comparison with other mobile Internet standards
Main article: Comparison of wireless data standards
The following table shows only peak rates, which are potentially very misleading. In addition, the comparisons listed are not normalized by physical channel size (i.e., the spectrum used to achieve the listed peak rates); this obscures the spectral efficiency and net throughput capabilities of the different wireless technologies listed below.
Comparison of mobile Internet access methods

Common Name | Family | Primary Use | Radio Tech | Downstream (Mbit/s) | Upstream (Mbit/s) | Notes
HSPA+ | 3GPP | 3G Data | CDMA/FDD (MIMO) | 21 / 42 / 84 / 672 | 5.8 / 11.5 / 22 / 168 | HSPA+ is widely deployed. Revision 11 of the 3GPP states that HSPA+ is expected to have a throughput capacity of 672 Mbit/s.
LTE | 3GPP | General 4G | OFDMA/MIMO/SC-FDMA | 100 (Cat3) / 150 (Cat4) / 300 (Cat5) in 20 MHz FDD | 50 (Cat3/4) / 75 (Cat5) in 20 MHz FDD[32] | The LTE-Advanced update is expected to offer peak rates of up to 1 Gbit/s at fixed speeds and 100 Mbit/s to mobile users.
WiMAX rel 1 | 802.16 | WirelessMAN | MIMO-SOFDMA | 37 (10 MHz TDD) | 17 (10 MHz TDD) | With 2x2 MIMO.[33]
WiMAX rel 1.5 | 802.16-2009 | WirelessMAN | MIMO-SOFDMA | 83 (20 MHz TDD) / 141 (2x20 MHz FDD) | 46 (20 MHz TDD) / 138 (2x20 MHz FDD) | With 2x2 MIMO; enhanced with 20 MHz channels in 802.16-2009.[33]
WiMAX rel 2 | 802.16m | WirelessMAN | MIMO-SOFDMA | 2x2 MIMO: 110 (20 MHz TDD) / 183 (2x20 MHz FDD); 4x4 MIMO: 219 (20 MHz TDD) / 365 (2x20 MHz FDD) | 2x2 MIMO: 70 (20 MHz TDD) / 188 (2x20 MHz FDD); 4x4 MIMO: 140 (20 MHz TDD) / 376 (2x20 MHz FDD) | Low-mobility users can also aggregate multiple channels to get a download throughput of up to 1 Gbit/s.[33]
Flash-OFDM | Flash-OFDM | Mobile Internet (mobility up to 200 mph / 350 km/h) | Flash-OFDM | 5.3 / 10.6 / 15.9 | 1.8 / 3.6 / 5.4 | Mobile range 30 km (18 miles); extended range 55 km (34 miles).
HIPERMAN | HIPERMAN | Mobile Internet | OFDM | 56.9 | 56.9 |
Wi-Fi | 802.11 (11n) | Mobile Internet | OFDM/MIMO | 288.8 (4x4 configuration in 20 MHz bandwidth) or 600 (4x4 configuration in 40 MHz bandwidth) | Same as downstream | Antenna, RF front-end enhancements and minor protocol timer tweaks have helped deploy long-range P2P networks compromising on radial coverage, throughput and/or spectral efficiency (310 km and 382 km).
iBurst | 802.20 | Mobile Internet | HC-SDMA/TDD/MIMO | 95 | 36 | Cell radius: 3–12 km; speed: 250 km/h; spectral efficiency: 13 bit/s/Hz/cell; spectrum reuse factor: "1".
EDGE Evolution | GSM | Mobile Internet | TDMA/FDD | 1.6 | 0.5 | 3GPP Release 7.
UMTS W-CDMA HSPA (HSDPA+HSUPA) | UMTS/3GSM | General 3G | CDMA/FDD; CDMA/FDD/MIMO | 0.384; 14.4 | 0.384; 5.76 | HSDPA is widely deployed. Typical downlink rates today are about 2 Mbit/s, with ~200 kbit/s uplink; HSPA+ downlink up to 56 Mbit/s.
UMTS-TDD | UMTS/3GSM | Mobile Internet | CDMA/TDD | 16 | 16 | Reported speeds according to IPWireless using 16QAM modulation, similar to HSDPA+HSUPA.
EV-DO Rel. 0 / Rev. A / Rev. B | CDMA2000 | Mobile Internet | CDMA/FDD | 2.45 / 3.1 / 4.9xN | 0.15 / 1.8 / 1.8xN | Rev. B note: N is the number of 1.25 MHz carriers used. EV-DO is not designed for voice, and requires a fallback to 1xRTT when a voice call is placed or received.

Notes: All speeds are theoretical maximums and will vary by a number of factors, including the use of external antennas, distance from the tower and the ground speed (e.g., communications on a train may be poorer than when standing still). Usually the bandwidth is shared between several terminals.

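Since the peak rates above are not normalized by channel size, a fairer comparison divides each peak rate by the channel width used to achieve it, giving spectral efficiency in bit/s/Hz. The sketch below does exactly that; the rate/channel-width pairings are illustrative readings of the table, not authoritative figures.

```python
# Spectral efficiency sketch: peak rate (Mbit/s) divided by channel width (MHz)
# yields bit/s/Hz, which normalizes away differences in spectrum used.
# Pairings below are illustrative readings of the comparison table above.

peak_rates = {
    # name: (peak downstream in Mbit/s, channel width in MHz)
    "WiMAX rel 1 (10 MHz TDD, 2x2 MIMO)": (37, 10),
    "WiMAX rel 1.5 (20 MHz TDD, 2x2 MIMO)": (83, 20),
    "WiMAX rel 2 (20 MHz TDD, 4x4 MIMO)": (219, 20),
    "LTE Cat5 (20 MHz FDD)": (300, 20),
}

def spectral_efficiency(rate_mbps: float, width_mhz: float) -> float:
    """Peak spectral efficiency in bit/s/Hz (Mbit/s per MHz == bit/s per Hz)."""
    return rate_mbps / width_mhz

for name, (rate, width) in peak_rates.items():
    print(f"{name}: {spectral_efficiency(rate, width):.2f} bit/s/Hz")
```

By this measure, LTE Cat5's 300 Mbit/s in 20 MHz works out to 15 bit/s/Hz, versus 3.7 bit/s/Hz for WiMAX rel 1's 37 Mbit/s in 10 MHz. Note that TDD figures share one channel between both directions while FDD figures use a separate paired channel per direction, so even this normalization is only approximate.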
The performance of each technology is determined by a number of constraints, including the spectral efficiency of the technology, the cell sizes used, and the amount of spectrum available. For more information, see Comparison of wireless data standards. For more comparison tables, see bit rate progress trends, comparison of mobile phone standards, spectral efficiency comparison table, and OFDM system comparison table.

Development

The IEEE 802.16m-2011 standard[34] was the core technology for WiMAX 2. The IEEE 802.16m standard was submitted to the ITU for IMT-Advanced standardization[35] and was one of the major candidate IMT-Advanced technologies at the ITU. Among many enhancements, IEEE 802.16m systems can provide data speeds four times faster than WiMAX Release 1. WiMAX Release 2 provided backward compatibility with Release 1, so WiMAX operators could migrate from Release 1 to Release 2 by upgrading channel cards or software. The WiMAX 2 Collaboration Initiative was formed to help this transition.[36] It was anticipated that, using 4x2 MIMO in the urban microcell scenario with only a single 20 MHz TDD channel available system-wide, an 802.16m system could support 120 Mbit/s downlink and 60 Mbit/s uplink per site simultaneously. WiMAX Release 2 was expected to be available commercially in the 2011–2012 timeframe.[37]

Interference

A field test conducted in 2007 by SUIRG (Satellite Users Interference Reduction Group), with support from the U.S. Navy, the Global VSAT Forum, and several member organizations, yielded results showing interference at 12 km when the same channels were used for both WiMAX systems and satellites in C-band.[38]

Deployments

Main article: List of deployed WiMAX networks

As of October 2010, the WiMAX Forum claimed over 592 WiMAX (fixed and mobile) networks deployed in over 148 countries, covering over 621 million subscribers.[39] By February 2011, the WiMAX Forum cited coverage of over 823 million people, and estimated over 1 billion subscribers by the end of that year.[40] South Korea launched a WiMAX network in the second quarter of 2006; by the end of 2008 there were 350,000 WiMAX subscribers in Korea.[41] Worldwide, by early 2010 WiMAX seemed to be ramping up quickly relative to other available technologies, though access in North America lagged.[42] Yota, the largest WiMAX network operator in the world in 4Q 2009,[43] announced in May 2010 that it would move new network deployments to LTE and, subsequently, convert its existing networks as well.[44]

A study published in September 2010 by Blycroft Publishing estimated 800 management contracts from 364 WiMAX operations worldwide offering active services (launched or still trading, as opposed to merely licensed and yet to launch).