
Module 8: Managing telecommunications and networks

Overview
The text raises two critical issues surrounding telecommunications: managing local area networks (LANs)
and managing bandwidth. Managing bandwidth is related to managing wide area networks (WANs). A
complicating factor in the management of telecommunications and networks is that management lacks
direct control of some parts of the network (especially for WANs) because some of the resources are
purchased from outside vendors. Therefore, LANs are the focus of this module.
As you have learned in previous modules, communication networks are an essential part of information
systems applications. Telecommunications are critical to organizations, but challenging to manage. As a
non-technology manager, it won't be your responsibility to make all of the decisions about network
management. You will work with specialists on most of the issues. However, you will need to have an
understanding of the basic technology, terminology, and management issues to be able to fulfill your role
in the process.
The module begins with a broad overview of telecommunications, which may be a review from MS1 or
other previous IS courses. The idea is to become familiar with the terminology of telecommunications and
the components of the telecommunications architecture in preparation for Topic 8.2 on strategic uses of
telecommunications.
Many (if not most) of the strategic applications you have learned about involve telecommunications.
Telecommunications has been one of the key drivers of the power of information systems in the last
decade. The previous two modules looked at applications of telecommunications (Internet, intranet,
extranet, e-commerce, and EDI) that may be considered to be of strategic significance.
Telecommunications, more than any other factor, has been responsible for the proliferation of mobile
devices, and the network effects in general of the Internet, including the rise of small, single-tasked,
purchasable apps.
Topic 8.3 moves to the specific level of understanding computer networks in organizations, providing you
with the foundation to consider management issues. The remaining four topics describe the issues in the
management of networks.
At the end of this module you should be able to advise on the design, development, and implementation
of IT projects including specific applications software. You will also develop your ability to identify,
analyze, and evaluate enterprise risk factors and evaluate the social costs and benefits of securing
resources to meet the organization's objectives.
8.1 Overview of telecommunications
8.2 Strategic uses of telecommunications
8.3 Network basics
8.4 Trends in network management
8.5 Network security issues
8.6 Planning and managing wireless networks
8.7 Remote computing management issues
Module summary
8.1 Overview of telecommunications
Learning objectives
Compare the different channels used in telecommunications networks. (Level 1)
Justify the purpose of communication protocols, and list the most common protocols in use today.
(Level 1)
Required reading
Chapter 7, Sections 7.1, Telecommunications and Networking in Today's Business World, and 7.2,
Communications Networks and Transmission Media
Reading 8-1: The 7 Layers of the OSI Model
Module Scenario: Know your past but don't relive it.
"I don't understand your reluctance," says Joe Reed, Manager of Logistics. He is in charge of a
warehouse the size of three football fields, where physical pick tickets are still used to process shipments.
"I use WiFi at home. It's at my kids' schools. Even Starbucks offers it in its shops. But our own IT
Manager is reluctant to install it for security reasons everyone else seems to have solved."
You tell him he's wrong. The examples he's given to support WiFi are places that are the most exposed
and prone to security breaches today. You tell him you are drawing up a site map to show what kind of
coverage is needed in the warehouse, and once that is complete and piloted correctly, you will move
through the rest of the organization. You could begin to quote speeds, protocols, and both existing and
developing security measures for WiFi, but you know Joe will only take it as just more stalling.
"Well," says Joe, "what I know is you are costing this company several thousands of dollars a month in
inefficiencies because of your lack of movement on this issue. It seems pretty obvious to me." With that,
he leaves your office.
You get up and pace the room. Is he right? Am I being too cautious? You want to believe you make
decisions based not only on fact, but on years of experience too. You only have to think back to before
your current enterprise system, when all departments had their own information systems that could not
communicate with each other, but were great solutions individually. It was a time-consuming, money-
wasting era in the company's information-system history, and one you never want to repeat. But
that was over ten years ago, almost a lifetime in IT. And you know that security with WiFi can be
appropriately handled. You can use wireless intrusion prevention and detection systems, as well as MAC
address filtering and encryption such as WPA2 (WEP, an earlier standard, is no longer considered
secure). So it's not about security.
You call up Enterprise Communications, local WiFi experts, and tell them to come by tomorrow for a
meeting. Then you get Joe on the phone in his office, and tell him the project starts full force tomorrow.
"What made you change your mind?" he asks. Sometimes in IT, we fear the age of the dinosaur.
Occasionally, we need someone to remind us that they are, in fact, dead.
LEVEL 1
A telecommunications network is a collection of nodes and links, connected in such a way that we can
send digital or analog data across it. A link is the channel used to connect the communicating devices.
Types of links include point-to-point, broadcast, and multipoint, where a variety of physical links (fibre
optics, satellite, and so on) are used in the connection. Each of these has advantages and disadvantages, and
must be weighed in the context of the application requirements. A node is a connection point, like a
router, modem, hub, bridge or switch, or a computer or phone. All messages (audio, video, or data)
transmitted across a network are first encoded, or placed in a sequence in a special format to allow for
efficient communications. Decoding is the reverse: it is what happens to the message at the receiving
end. The processes of encoding and decoding are determined by the nature of the network and the
protocols that govern it. Examples of telecommunications networks are the Internet, the telephone
network, cable TV network, and satellite network. These are the most basic technical elements of
telecommunications.
As a non-IT manager, it is still important that you are familiar with these terms and concepts.
Telecommunications is an area with significant complexity and relevance within business today, and
unless you understand the basic concepts, you will not be able to engage in a meaningful discussion of
different options. You will be ineffective when it comes to making hardware and software purchase
decisions.
Communications model
Exhibit 8.1-1 shows the most basic model of communications, which includes seven elements: sender,
message, encoder, channel, decoder, receiver, and noise.
Exhibit 8.1-1

The sender, who has a message to transmit, must first encode the message for processing on the
network (the reasons for this will be explained shortly). Once encoded, the message is passed along a
channel to a decoder on the other end of the channel; from there it is passed along to the receiver. Noise
can affect various elements, especially the channel, distorting the message that is sent.
You can think about this model in terms of human communication. When a person wants to explain
something to another person, he or she must first decide on the best medium or channel to explain the
results. Will the message be sent by e-mail, memo, or telephone, or will it be delivered in a face-to-face
conversation? The message must be formatted differently for each medium. E-mails and memos are both
written forms of communication, but they often differ in their length or formality. The encoding of the
message is slightly different for each. The same is true of telephone and face-to-face speech. We can
explain things differently in face-to-face communication because we can use gestures and body language
to enhance our message. For telephone communication, there is only vocal inflection.
Once the message is encoded, it travels across the appropriate channel (computer network, postal
service, telephone lines, or air). The person to whom the results are being communicated must decode
the message through reading or hearing, and understanding the words. Then the message can be
understood by the receiver. Noise interferes with communications. This is especially evident for spoken
communication, where the level of background noise may interfere with the ability of the receiver to hear
the message.
Applied to telecommunications, the sender might be a device (a terminal or PC operating a
communications program such as e-mail) with a message to send. The e-mail program must determine
the type of network to which it is connected (this would be done as part of the setup of the program at
installation, so the process of sending an e-mail just involves looking up the correct information),
formatting the message for that network (encoding), and sending it out. The message travels on various
channels (such as twisted pair wire, fibre-optic cable, or satellite) to the destination, where it is decoded
by the receiver's e-mail package and delivered to the inbox. Various other telecommunications devices
assist along the way but fulfill functions similar to those already listed.
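To make the encode/channel/decode flow concrete, here is a minimal Python sketch. The function names and the one-bit noise model are invented for illustration; real networks encode at the bit and signal level.

```python
import random

def encode(message: str) -> bytes:
    """Sender side: format the message for the network (here, simply UTF-8 bytes)."""
    return message.encode("utf-8")

def channel(data: bytes, noise_rate: float = 0.0) -> bytes:
    """Carry the data; noise may flip a bit in each byte with the given probability."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < noise_rate:
            out[i] ^= 0b00000001  # noise distorts the signal
    return bytes(out)

def decode(data: bytes) -> str:
    """Receiver side: turn the network format back into a readable message."""
    return data.decode("utf-8", errors="replace")

# A noise-free channel delivers the message intact.
received = decode(channel(encode("pick ticket #42")))
```

With a non-zero noise rate, the received text may be corrupted, which is exactly why the error checking discussed next is needed.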
Figure 7-1 on page 204 of the textbook shows the elements of a typical telecommunications system. The
personal computers and the server represent the senders and receivers; the network interface cards
(NICs), switches, and routers represent various forms of encoders/decoders; and the lines represent the
communications channels. The text does not discuss noise. However, noise is present on all channels,
and its main effect is the distortion of communications, leading to the need for error checking and a
reduction in usable bandwidth (due to dealing with noise and other errors).
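A minimal illustration of the error checking that noise makes necessary is a parity bit: the sender appends one extra bit so every word carries an even number of 1s, and the receiver flags any word where that no longer holds. This sketch is illustrative only and detects single-bit errors.

```python
def add_parity(bits: list) -> list:
    """Append a bit so the total number of 1s is even (even parity)."""
    return bits + [sum(bits) % 2]

def check_parity(bits: list) -> bool:
    """True if the word arrived with even parity (no single-bit error detected)."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1, 0, 1, 0])   # four 1s, so the parity bit is 0
clean = check_parity(word)                  # clean channel: passes
word[2] ^= 1                                # noise flips one bit
damaged = check_parity(word)                # error detected: fails
```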
Signals (analog and digital)
The message to be transmitted across the network is composed of a set of signals. The two basic ways of
encoding signals are analog (wave) and digital (pulse). Digital signals are made up of streams of 1s
and 0s, which are implemented as on-off electrical pulses. Analog signals are made up of continuous
waves and describe, among other things, speech communications. The Sony line of notebook and
desktop computers branded as VAIO uses a logo where the VA is an analog wave, and the IO is a digital
pulse, reflecting its ability to combine both signals into a single machine.
Exhibit 8.1-2

The traditional landline telephone network, which was designed to carry voice signals, was implemented
with analog signals. To convey computer information (which is digital) requires the use of a modem to
convert the digital computer signal into an analog signal (modulation) and then to convert the analog
signal coming from the network into a digital signal for the computer (demodulation), hence the word
modem.
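A hedged sketch of what a modem does: the example below uses amplitude-shift keying, one simple modulation scheme among many, to turn bits into a wave (modulation) and back (demodulation). It is an illustration, not how any particular modem is implemented.

```python
import math

def modulate(bits, samples_per_bit=8):
    """Digital -> analog: a 1 becomes a full-amplitude carrier burst, a 0 becomes silence."""
    signal = []
    for bit in bits:
        for s in range(samples_per_bit):
            carrier = math.sin(2 * math.pi * s / samples_per_bit)
            signal.append(bit * carrier)
    return signal

def demodulate(signal, samples_per_bit=8):
    """Analog -> digital: recover each bit from the energy in its burst."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        energy = sum(x * x for x in signal[i:i + samples_per_bit])
        bits.append(1 if energy > 0.1 else 0)
    return bits

round_trip = demodulate(modulate([1, 0, 1, 1, 0]))  # recovers the original bits
```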
Increasingly, however, the central parts of the telephone network are being replaced with digital
equipment. Private telephone networks, such as those used by large corporations and hotels, are also
often all digital through the use of a private branch exchange (PBX), which is far less expensive than
paying for a line to each telephone in the organization. The encoding and decoding of messages then
may take place at intermediate points in the network, as well as at the ultimate sender and receiver, in
order to correctly traverse different parts of the network.
Telecommunications channels
There are many channels that support telecommunications. They are described in the textbook reading,
and will not be repeated here. Channels can be divided into two principal groups: wired (or conducted)
and wireless (or radiated) media.
Wired media require the establishment of a physical link, in the form of a cable, between the different
access points to the network. Wireless media use the air as the basic channel of communications and do
not require a physical link between the access points.
As a non-IT manager, you do not need to memorize the exact communication speeds of the different
channels or recite their advantages and disadvantages. Telecommunications R&D changes rapidly and
regularly. You will rely on specialists for the details on these issues. What is important is that you know
that there are different channels that differ in speed. In the end, it's all about cost: the greater the
speed, the more expensive the channel. And speed is important because a single slow channel could
bottleneck communications. Also, in the e-commerce world, customers demand speed and use it as a
measurement of trust. In a global community, never discount the value of speed.
Channels have the ability to communicate different volumes of information, and the amount of
information that can be communicated in a unit of time is commonly referred to as bandwidth.
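A rough back-of-the-envelope calculation shows why bandwidth matters. This sketch ignores protocol overhead, noise, and latency, so real transfers take longer.

```python
def transfer_seconds(size_megabytes: float, bandwidth_mbps: float) -> float:
    """Rough transfer time: a file sized in megabytes over a channel rated in
    megabits per second (1 byte = 8 bits). Ignores overhead, noise, and latency."""
    size_megabits = size_megabytes * 8
    return size_megabits / bandwidth_mbps

# A 100 MB file over a 1.544 Mbps T1 line versus a 100 Mbps LAN:
t1_time = transfer_seconds(100, 1.544)    # roughly 518 seconds
lan_time = transfer_seconds(100, 100)     # 8 seconds
```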
Determining which channels are most appropriate when setting up a network involves trade-offs among
several factors, including the following:
speed: Each of the different channels described in the text has a different maximum
communications speed (see Table 7-2 on page 211 for a summary).
cost: The higher the speed, in general, the greater the cost; cost issues also relate to
maintainability and expandability. Also, consider network effects: the cost to connect
the first device for telecommunications may be several thousands of dollars, but each device after
that costs only dollars. This reflects the principle that the network becomes more valuable as more
devices are added to it.
expandability: Wireless media tend to be more expandable at lower cost (adding terminals
doesn't require additional wiring, for example); fibre-optic systems are costly to expand because
encoding/decoding hardware is very expensive.
durability (or susceptibility to natural forces): Wireless media do not fare well here (for
example, satellite links frequently fail during large storms).
security: Wired media are more difficult to tap into than wireless media. Fibre-optic channels are
the most difficult to tap into because breaking the optical fibre results in a loss of signal. Breaking
into a twisted pair or coaxial cable simply splits the signal and sends it to both places (which is
how television splitters work, for example).
distance: Different media have different rates of attenuation (the rate at which the signal is
impaired). Depending on the attenuation, more intermediate devices, such as repeaters, are
necessary.
All of these criteria are assessed in terms of the requirements of the application being designed.
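As an illustrative example of the distance criterion, the number of repeaters a run needs can be estimated from the medium's attenuation rate and the maximum loss the receiver can tolerate. The figures below are hypothetical, not vendor specifications.

```python
import math

def repeaters_needed(distance_km: float, attenuation_db_per_km: float,
                     max_loss_db: float) -> int:
    """Estimate the intermediate repeaters a run needs, assuming the signal
    must be regenerated before cumulative loss exceeds max_loss_db."""
    max_segment_km = max_loss_db / attenuation_db_per_km
    segments = math.ceil(distance_km / max_segment_km)
    return max(0, segments - 1)  # repeaters sit between segments

# Fibre attenuates far more slowly than copper, so it needs fewer repeaters
# over the same 100 km run (illustrative attenuation figures):
copper = repeaters_needed(100, attenuation_db_per_km=10, max_loss_db=30)   # 33
fibre = repeaters_needed(100, attenuation_db_per_km=0.2, max_loss_db=30)   # 0
```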
Telecommunications protocols
A telecommunications protocol refers to a set of rules and procedures that govern the transmission of
information. In human speech, there are rules governing the pace and volume of speech in different
situations, the notion of turn-taking in conversation, and the handling of errors. For example, protocol
defines what to do if you don't hear someone correctly, or how to handle the situation when two people
speak at once.
Humans are generally capable of flexible and adaptive rules, but when designing a computer
communications network, the rules must be absolute and exhaustive. They must cover every eventuality
for communication and provide a clear solution in each case. This is what makes telecommunications
protocols complex and rigid.
The functions of a telecommunications protocol include:
identifying each device in a communication path
gaining the attention of the device with which you want to communicate
verifying receipt of a message
notifying the sender that an error occurred and a message was not properly received
performing error correction when necessary (resending messages, for example)
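Several of these functions (verifying receipt, notifying of errors, and resending) can be sketched together as a toy stop-and-wait exchange. This is an invented illustration, not an implementation of any real protocol.

```python
def send_with_retry(message: str, deliver, max_attempts: int = 5) -> int:
    """Stop-and-wait sketch: send, wait for an acknowledgement, resend on failure.
    Returns the number of attempts it took to get the message through."""
    for attempt in range(1, max_attempts + 1):
        ack = deliver(message)        # True means receipt was verified
        if ack:
            return attempt
    raise RuntimeError("message not delivered: notifying sender of the error")

# An unreliable channel that drops the first two transmissions:
outcomes = [False, False, True]       # the third attempt succeeds
def flaky_deliver(msg):
    return outcomes.pop(0)

attempts = send_with_retry("order #17", flaky_deliver)   # takes 3 attempts
```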
Models for communications protocols
Most communications protocols today are divided into models based on layers, implemented in hardware
and software. A layer is a metaphor for the groups of tasks that move information between networked
devices. Beginning with the application layer (7), a message is processed and transformed for the level
below, and is subsequently passed on. At each layer, a set of defined functions is carried out. This means
that to the layers above and below, the message can be treated as a black box and only the prescribed
functions need to be carried out. This has the effect of making disparate systems more able to
communicate, as long as they adhere to the same layers of communications.
The most comprehensive model, the OSI reference model (Open Systems Interconnection), defines
the functions of telecommunications in terms of seven layers. Reading 8-1 provides a good overview of
the OSI model with details of what functions take place at each of the layers. Review this material before
you proceed. The intention is not to train non-IT managers in the details of communication protocols,
but to introduce some of the subtleties of how networked communications work.
Note that the OSI model is a theoretical model and not generally implemented in commercial products.
Products such as TCP/IP and X.25 implement specific layers of the OSI model in slightly different
combinations. For example, the TCP/IP model (described in the textbook on pages 207-208) implements
them as follows:
The application layer implements the application (7), presentation (6), and session (5) layers of
the OSI model.
The TCP layer implements the OSI transport layer (4).
The IP layer implements the OSI network layer (3).
The network interface layer implements the OSI data link layer (2).
The physical net defines the OSI physical layer (1).
What OSI has provided, however, is a standard reference point against which other protocols can be
defined.
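The layering idea can be illustrated as encapsulation: on the way down, each layer wraps the message with its own header and treats everything above it as a black box; on the way up, each layer strips only its own header. The labels below loosely follow the TCP/IP layers described in the text; they are illustrative, not real packet formats.

```python
# Top of the stack first; the last layer's header ends up outermost on the wire.
LAYERS = ["application", "transport", "internet", "network-interface"]

def encapsulate(message: str) -> str:
    """Going down the stack, each layer adds its own header in turn."""
    packet = message
    for layer in LAYERS:
        packet = "[" + layer + "]" + packet
    return packet

def decapsulate(packet: str) -> str:
    """Going up the stack, each layer strips only its own header."""
    for layer in reversed(LAYERS):
        header = "[" + layer + "]"
        assert packet.startswith(header), "unexpected header for this layer"
        packet = packet[len(header):]
    return packet

wire = encapsulate("hello")          # headers nested, lowest layer outermost
original = decapsulate(wire)         # each layer peels off its own header
```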
Activity 8.1-1: OSI model
What is the value of the standard reference point provided by the OSI reference model?
Solution
Telecommunications hardware and software
So far, you have learned about the channels that connect devices in a network, and the rules that govern
communications. What remains to be explained are the different devices (hardware and software) that
are necessary to implement a telecommunications network. The text describes the role of computers that
process the data, terminals that are the primary senders and receivers of information across networks
(PCs, dumb terminals, or other devices, such as smartphones, tablet PCs, and other wireless devices),
and various intermediary processors that govern how information is combined across channels (routers,
switches, hubs).
Activity 8.1-1 solution
Previous modules have explained the role of standards. Standards provide a common basis for
technology development to ensure interoperability. Networks connect because of standards. E-mail is
sent and received because of standards. The Internet exists because of standards. Even though the OSI
model is not an implemented standard, it provides much of this benefit. Because of its widespread
acceptance, vendors of technology products are compelled to define their products' capabilities in terms
of this model, which makes decision making easier for buyers.
8.2 Strategic uses of telecommunications
Learning objective
Analyze the strategic importance of telecommunications to the organization. (Level 2)
No required reading
LEVEL 2
Strategic benefits of communications
You have learned about the strategic uses of information systems and also that most strategic
applications involve both computing (processing data) and communications (making the data available to
different people in different places). Since the 1990s, it has been the communications aspect of
computing that has driven the greatest strategic benefits. E-commerce depends on reliable
communications; its key benefits derive from the widespread access to a common telecommunications
platform (that is, networks that can communicate with one another because they use the same protocol).
For example, the more people connected to and using the Internet, and specifically Google search, the
greater Google's potential profit, which is a key reason behind the Google Fiber project.
Knowledge management systems (systems that capture, store, and distribute organizational knowledge)
also depend on communications networks to fulfill the capture and distribution aspects of the system. As
more firms operate globally and more workers are located outside the central facilities of firms, there is a
critical need to capture data, information, and knowledge from individuals in dispersed locations, store it
centrally, and then distribute it back to different locations. Wikis and social media applications are current
incarnations of knowledge management systems where tag clouds, and the management of those clouds,
form the categories of knowledge.
SCM (supply chain management) and CRM (customer relationship management) could not exist today
without robust communication. SCM, at a core level, is about the speed of communication between
suppliers and vendors. Without it, you could not create a hub of communication between parties that use
real-time data to determine the availability, scheduling and delivery of goods and services. With CRM,
communication must be fast and unobtrusive. Customers who must wait for the processing of information
between screens quickly tire of the delays, and will often take their business somewhere else.
This is the sentiment that drove Sun Microsystems' chief researcher, John Gage, to proclaim that "the
network is the computer." The phrase is a registered trademark of Sun (now part of Oracle), and it
defined a core part of the company's mission in the 1990s. The idea behind the phrase is that the power
of information systems lies in the linkage of systems and devices (computers, personal digital assistants,
car GPS systems, or even smart refrigerators), rather than in the devices themselves. The devices are
merely access points into the network.
Increasingly, applications are extending their functionality to a variety of wireless devices such as
smartphones and tablet PCs. Online banking, organizational applications, and corporate e-mail are among
the business uses of smartphones that are now feeding a new growth industry of single-purpose, low-
cost apps. Users can track the location of friends, make purchases, play games, read, watch, or listen
using a seemingly never-ending list of business and social apps available for Blackberry, iPhone, iPad, and
Android mobile devices.
Interoperability of networking technologies
One of the keys to achieving strategic benefits, and a critical concept in telecommunications, is the notion
of interoperability. Networking technologies have historically been among the most proprietary
technologies. Their capability of working with other networking technologies has been limited. TCP/IP
and other Internet technology standards have dramatically improved interoperability because they
provide a common platform for development. With increased interoperability, it is easier to connect
different parts of the organization in order to enhance communication and efficiency. Interoperability
goes hand in hand with scalability: Interoperability ensures that the devices will connect to the network;
scalability says there is little effort, impact, or cost associated with adding new devices.
8.3 Network basics
Learning objectives
Distinguish between the different ways of classifying networks and the kinds of networks that
these approaches describe. (Level 1)
Assess the benefits and limitations of a networked system. (Level 1)
Required reading
Chapter 5, Sections 5.1, IT Infrastructure, and 5.4, Contemporary Software Platform Trends
Review Chapter 7, Section 7.2, Communications Networks and Transmission Media
LEVEL 1
Types of networks
This topic focuses on the different configurations of telecommunications elements into networks.
Telecommunications elements can be configured in multiple ways. Networks can be defined by their
ownership, their geography, their topology (or layout), or their protocols.
Networks defined by ownership
In terms of ownership, networks can be either
private: all the elements are owned by a single entity, such as a company or an industry
consortium, and designed for use by that entity
public: owned by a company, but for the express purpose of supplying infrastructure to other
organizations and/or individuals (for example, the telephone company)
In many cases, public network providers (for example, Bell Canada, Rogers, and Telus) operate as
regulated monopolies. In other cases, the public network providers are owned and operated by the
government of the country. Increasingly, networks make use of existing public infrastructure rather than
relying on proprietary, private networks. However, such networks still exist. Most common today is to
build a network using a combination of public and private resources. For example, companies will build
an internal LAN to connect their employees, but take advantage of the Internet, a WAN, for
communications with suppliers and customers.
Networks defined by geography
In terms of geography, networks can be defined as local area networks (LANs) or wide area networks
(WANs). LANs operate within a confined geographical space such as an office building, while WANs
cover long distances. The Internet is, in essence, a wide area network.
LANs are typically private networks, owned and maintained by companies, while WANs typically involve at
least some reliance on the public infrastructure.
Networks defined by topology
Different network topologies are based on different ways of connecting computers to one another. The
most common topologies are the star, bus, and ring. Review the text descriptions and the figures that
depict each topology before proceeding. The existence of different topologies can be traced to different
networking approaches that were developed by different companies. Also, topologies are classified as
logical or physical. Logical topology, or signal topology, refers to how data are sent between devices
regardless of the physical design topology of the network. For example, in most organizations today
Ethernet is the logical topology used. (Ethernet, an open standard, and IBM's proprietary token ring were
once neck-and-neck to become the logical standard, but cost, proliferation of use, availability of adapters,
and ease of installation put Ethernet ahead, where it all but rules today.)
Ethernet is a shared media network, which means that all devices share the transmission medium
equally. This causes an issue when two devices send information at the same time: only one message can
be handled at a time, so a collision occurs. A special protocol (CSMA/CD) detects collisions and helps
resolve them. As more devices are added to Ethernet networks, more collisions occur, so often they are
split into smaller networks connected through the use of switches and hubs. Ethernet and other shared
media networks typically use star, bus, or hybrid physical topologies, described next.
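The collision-and-backoff behaviour of CSMA/CD can be sketched as a toy simulation: after a collision, each station waits a random number of slots drawn from a window that doubles on every retry (truncated binary exponential backoff). The simulation below is a simplified illustration of two stations, not the full IEEE 802.3 algorithm.

```python
import random

def csma_cd_simulation(seed: int = 1, max_tries: int = 10) -> int:
    """Two stations collide, then each waits a random backoff slot;
    a retransmission succeeds once they pick different slots."""
    rng = random.Random(seed)
    for attempt in range(1, max_tries + 1):
        slots = 2 ** attempt                 # backoff window doubles each retry
        a = rng.randrange(slots)             # station A's chosen slot
        b = rng.randrange(slots)             # station B's chosen slot
        if a != b:                           # different slots: no collision
            return attempt
    return max_tries                         # give up (window is truncated)

tries = csma_cd_simulation()
```

Because the window doubles, repeated collisions become increasingly unlikely, which is why Ethernet degrades gracefully rather than failing outright under load.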
Bus, ring, and star topologies
The physical topology of a network describes how the nodes (devices) in a network are physically
connected to one another. This means the location of the nodes, the cabling between them, and the
wiring. The physical layout of networks forms a geographic shape, the physical topology, determined by
the cabling or telecommunications costs, the level of control required, and the devices on the network.
Seven physical network topologies are defined today: point-to-point (line), bus, star, ring, tree, mesh,
and hybrid (fully connected). Exhibit 8.3-1 shows a diagram of the topologies shapes.
Exhibit 8.3-1

The ring, star, line, and bus topologies are basic topologies. The mesh and tree topologies are hybrids.
The fully connected topology exists for redundancy: no switching or broadcasting is required since the
devices are connected directly to each other.
The bus topology is one of the simplest and was the basic approach used for Ethernet networks when
they were first developed. The ring topology was the layout used by IBM's token ring
networks. The star topology has long been the approach used in mainframe computing, where the
processor formed the central node in the star. Today, nearly every network operates logically as a star
(or as a tree, which is basically a hierarchy of stars), irrespective of its physical layout. The mesh
topology (a hybrid) is the topology of the Internet. A fully connected topology uses a lot of cabling to
connect each device to every other device; data can be sent from a single node to all other nodes
simultaneously (it is often used in situations where redundancy between servers is needed, so usually only
two devices are involved).
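The cabling differences between topologies are easy to quantify by counting links. This small sketch (illustrative formulas only) shows why fully connected layouts are rarely used for more than a couple of devices.

```python
def links_star(n: int) -> int:
    """Star: every device cables to the central hub."""
    return n - 1 if n > 1 else 0

def links_ring(n: int) -> int:
    """Ring: each device cables to its neighbour, closing the loop."""
    return n if n > 2 else max(0, n - 1)

def links_full_mesh(n: int) -> int:
    """Fully connected: one link for every pair of devices."""
    return n * (n - 1) // 2

# For 10 devices: 9 links in a star, 10 in a ring, but 45 fully connected.
star, ring, mesh = links_star(10), links_ring(10), links_full_mesh(10)
```

The fully connected count grows with the square of the number of devices, which is why it is reserved for small redundant pairs rather than whole offices.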
The central hub
In a star topology, all nodes (devices) connect to a central hub, a device that provides a single
connection point for different devices. Hubs may be very simple (called passive hubs) and just pass on
information received from one connection point or port to the other ports, or may involve some level of
intelligence in the processing.
A passive hub creates a logical bus network. Each piece of information received is broadcast to all of the
ports in the hub, and each node in the network has to decide whether the message is intended for it or
not. An intelligent hub might be used to create a logical ring network. Whereas the earliest ring networks
involved connecting each computer to its neighbour to create the ring, today, a ring network would have
computers connected through a central hub but behaving as if they were connected in a ring (that is,
each computer only talks to its neighbour).
Networking software allows network administrators to view the topology connected to a central hub,
make drag-and-drop performance improvements, disable problem nodes, and add or create new nodes, all
from a single screen. Network monitoring software can alert network administrators to potential problems
before they occur. Together they provide a view of the physical network, and central control of its nodes
and links.
Networks defined by protocols
Different network topologies have different strengths and weaknesses. In practice, the choice of topology
is likely dictated by the networking protocols used. Circuit and packet switching are protocols for how
connections are established between points on a network (these reflect data link layer protocols in the
OSI model). Other protocols are designed to improve on the basic packet-switching protocol through
different forms of error correction (frame relay) or through providing guaranteed levels of service and
handling different modes of traffic in a more integrated way (asynchronous transfer mode or ATM).
Integrated Services Digital Networking (ISDN) was one of the first protocols for providing high-speed (or
broadband) data communications over traditional telephone networks. Many North American companies
used ISDN for video conferencing and early Internet connections. ISDN is a set of international standards
endorsed by the International Telecommunications Union (ITU). ISDN implementation in North America
has historically been more limited than in the rest of the world, and it is being supplanted by alternative
broadband services such as DSL and cable, which are more cost-effective methods of Internet connectivity.
Digital subscriber line (DSL) technologies represent a protocol for handling data transmission at higher
speeds over telephone wires. DSL lines are less expensive than ISDN lines, and are more readily available
for home Internet connections. Cable modems (and the associated protocols) provide for data
transmission over the existing cable infrastructure, used for Internet connectivity with the cable company
as the ISP.
Businesses usually connect to the Internet via a T1 carrier line with a transmit speed of 1.544 megabits
per second, compared to roughly 30 kilobits per second over a traditional phone line. The higher the
transmit speed, the more information can be sent and received, but the higher the cost. T1 lines are used
to connect approximately 50 devices to the Internet. They are also used in businesses to reduce the cost
of individual telephone lines through a PBX. A T3 line is comparable to having 28 T1 lines to pass
information through, and can support approximately 500 connections. Cost and speed are the factors for
businesses to consider. Approximate pricing for a T1 carrier line starts at $400 per month, and at $2,000
per month for a T3 line; location and capability of the line determine the actual price.
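To make the cost-speed trade-off concrete, here is a rough cost-per-megabit comparison using the approximate monthly prices quoted above (actual figures vary by location and provider):

```python
# Rough cost-per-Mbps comparison using the approximate monthly prices
# quoted in the text; actual prices vary by location and line capability.

def cost_per_mbps(monthly_cost: float, speed_mbps: float) -> float:
    """Monthly cost divided by line speed in Mbps."""
    return monthly_cost / speed_mbps

T1_SPEED_MBPS = 1.544        # one T1 line
T3_SPEED_MBPS = 28 * 1.544   # a T3 is roughly 28 T1s (about 43.2 Mbps)

t1 = cost_per_mbps(400, T1_SPEED_MBPS)    # roughly $259 per Mbps
t3 = cost_per_mbps(2000, T3_SPEED_MBPS)   # roughly $46 per Mbps

# The T3 carries 28 times the traffic for 5 times the price, so its
# per-Mbps cost is much lower -- but only worthwhile if the capacity is used.
print(f"T1: ${t1:.0f}/Mbps, T3: ${t3:.0f}/Mbps")
```

The arithmetic shows why a T3 only makes sense for organizations that will actually use most of its capacity.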
Benefits of networking
Networking is useful in organizations for a number of reasons. With the proliferation of the Internet and
the web since the mid-1990s, companies have adopted these media into their strategies. There are
pure-play businesses that are simply Internet-based businesses, and clicks-and-mortar companies that
have branched out onto the Internet from their existing physical enterprises. Marketing promotes
one-to-one advertising, customized to a particular customer's interests, and social media have provided
new ways to reach communities of potential new customers. Wired, wireless, or mobile: the options for
staying connected are greater than ever. The ability to provide shared communications, such as through
e-mail, blogs, Twitter, and Facebook, is one key benefit. Sharing data is also a critical factor. With a
network, users in different locations can access the same customer database to provide, for example,
better data integration (see Module 2). Sharing data also reduces organizations' cost of storage because
only one copy of the data needs to be kept, and in many cases, this too is now stored online.
Sharing network resources, such as printers and scanners, allows organizations to purchase fewer of
these devices than computers. With printers, this can result in users having access to a faster, higher-quality
printer as part of a workgroup than they would have if the organization purchased one printer to
go with each computer.
Centralized administration of network resources is also a key benefit of networking. On corporate
networks, software programs check each time a user logs on to make sure the user has the most up-to-date
virus-checking programs. If not, the updates can be automatically installed on the user's machine.
When files are stored centrally, backup can become the responsibility of the IS department, increasing
the likelihood that regular backups will be made. Networking has also provided virtualization, or the
ability to connect and run remote devices as if they were part of a LAN. Applications like Citrix on
desktop computers and VMware on servers and end-user client computers extend the life and
functionality of existing machines, reducing the need to constantly update hardware. In addition, cloud
computing has virtualized both the platform and the infrastructure of network resources, providing
connectivity that is Internet-based and accessible outside the operating business.
Limitations of networking
Networking has its limitations, the two biggest being cost and security. With cabling, hubs, and switches,
networks can be expensive to set up and maintain, which is where cloud computing offers a low-cost
alternative. While the simplest networks (peer-to-peer, ad hoc networks) can be set up by users with
little expertise, these networks provide limited benefits. Powerful networks that provide strategic benefits
to organizations must be designed and maintained by specialists to ensure that benefits are achieved at a
reasonable cost, and that the networks continue to perform as designed.
Security is also a concern. A network is only as secure as its weakest node. The greater the
interconnectedness of machines in a network, the greater the risk of security breaches. Dealing with
these security challenges adds further to the cost of designing and maintaining a network. The security
risks also pose a threat to organizations in terms of the consequences should data be lost or stolen.
Cloud computing also offers assistance with security since access security is handled at the server or
cloud side of the equation. Amazon has published a paper, "Amazon Web Services: Overview of
Security Processes" (March 2013), that details the structure of security built into Amazon's cloud services.
Topic 8.5 and Module 9 explain more about network security issues.
Enterprise networking
Most large organizations have fairly complex networks that feature both LAN and WAN functionality,
public and private network facilities, various protocols to support different kinds of connections, and a
multitude of end-user devices for providing connections. Such an enterprise network is depicted in the
textbook (page 137).
Client-server computing
A key aspect of enterprise networking is client-server computing. In totally centralized computing, all
processing takes place on the central server. In totally decentralized computing, all processing takes
place on the end-user devices. Distributed computing allows for dividing up processing among the
central server and the end-user devices to take advantage of the strengths of each approach and
minimize their limitations. Client-server computing refers to the form of distributed computing where
applications divide their activities across the server and the end-user device or client. In many instances
today, the client is a web browser, which can connect to a variety of servers including web servers,
application servers, printer servers, database servers, and so on.
In client-server computing, user interface aspects are controlled by the client machine while core data
processing aspects are controlled by the server. This provides the integration and high processing
performance that can be obtained with the centralized resource.
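As an illustrative sketch (not a production design), this division of labour can be simulated in a few lines of Python: the "client" only formats a request and presents the reply, while the "server" does the data processing. A local socket pair stands in for a real network connection.

```python
import socket

# A minimal sketch of the client-server division of labour, using a
# local socket pair in place of a real network. The "server" does the
# data processing; the "client" formats the request and shows the result.

server_sock, client_sock = socket.socketpair()

# Client side: formulate a small request and send it.
client_sock.sendall(b"SUM 1 2 3 4")

# Server side: receive the request and do the processing centrally.
request = server_sock.recv(1024).decode()
command, *args = request.split()
if command == "SUM":
    result = str(sum(int(a) for a in args))
server_sock.sendall(result.encode())

# Client side: receive and display the result (presentation duty).
reply = client_sock.recv(1024).decode()
print(f"Server replied: {reply}")   # Server replied: 10

client_sock.close()
server_sock.close()
```

In a real deployment the two halves would run on different machines, but the split of responsibilities is the same.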
On the Internet, peer-to-peer (P2P) networking architecture allows each host to act as both client and
server. This is how BitTorrent sites work. A P2P system can be centralized with a server-based model, or
decentralized and connected node to node (or peer to peer). The benefit of P2P networks is that each node
that attaches to the network actually increases the capacity of the system. These systems are excellent at
delivering content among users.
World Wide Web
Consider the World Wide Web as an example of client-server computing. Your web browser is a client in
this network. When you click a link in a web page, your client formulates a request to the appropriate
server to deliver the contents of the web page. The client (your browser) determines the appropriate link
using a graphical interface and the technologies associated with graphical processing. The request that is
sent is quite small. For example, you can send a simple HTTP (hypertext transfer protocol) request that
says, "Please send me the page contained at this address: http://www.dilbert.com."
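For illustration, here is what such a request actually looks like on the wire. This sketch only builds the message text; it does not open a network connection.

```python
# Construct the text of a minimal HTTP GET request. A browser sends
# something very similar when you click a link; the blank line marks
# the end of the headers.
host = "www.dilbert.com"   # the site used in the example above
path = "/"

request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

print(request)
```

The server's reply is just as plain: a status line such as HTTP/1.1 200 OK, some headers, and then the HTML file itself.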
The page that is sent is fairly simple too. If you've never seen what a web page is actually composed of,
go to one of your favourite sites and choose View > Source from the browser menu to see the HTML
(hypertext markup language) code that makes up a web page. Graphics are separate and also have to be
sent, which is why web pages with lots of pictures can take longer to download (although there are ways
to compress images to reduce the load time with little effect on quality). The server responds to the
HTTP request by sending the relevant HTML file, along with any of the graphics that the HTML file
requests.
Your browser, then, decides how to format that information for you to view. It determines your screen
resolution and can set overrides on how different kinds of text are to be displayed. It also controls your
security settings and determines what other applications it will call on to help display information. For
example, a browser can be set up so that Adobe Acrobat is used to display all files with the extension
.pdf, or it may simply save the file to a pre-determined location on your computer.
Advantages and disadvantages to client-server computing
The separation of presentation functions from processing functions in a client-server networking
environment allows for an optimal division of labour between the client machines and the servers.
Servers are good at processing large amounts of data very quickly. Because they are centralized, they
allow access to a single integrated database of information, rather than having to decentralize and/or
replicate data at each users machine.
The downside to client-server computing is its complexity. Building applications that can separate
processing duties reliably among different machines in a way that is transparent to the user is a lot more
complex than building an application that will be entirely centralized or entirely decentralized. After all,
there are multiple kinds of user devices, such as PCs (Windows or Mac OS based), tablets, and
smartphones that must be taken into account. That complexity leads to higher costs (both in purchase
and support). Also, if applications are badly designed, you will get poor performance.
Citrix, a server and desktop virtualization company, offers client-server solutions for companies of all
sizes. Their solution involves using thin clients (scaled-down PCs about the size of a textbook) that
network-connect to a server on which all applications are installed. The thin clients run the applications
directly on the server, with minimal client processing. The advantage is that clients are inexpensive and
easy to replace because they contain no hard drive and all applications run from the server. This also
means operating system and application fixes only need to be installed on the server, not on the clients.
8.4 Trends in network management
Learning objective
Evaluate the implications of the major trends in network management. (Level 2)
No required reading
LEVEL 2
Historical trends
Networks have been in use in organizations since the beginning of the computer era. The earliest
mainframe computers could not function without a network because all of the processing power resided
remotely from the end-user terminal. With the advent of PCs in the late 1970s, the trend was toward
more decentralized computing. The network, and the idea of coordinated resources, took a back seat to
satisfying the needs of local users with stand-alone PC applications. By the 1990s, centralization of
resources returned. PCs were being connected in LANs, and proprietary WANs were being developed
(such as Walmart's satellite network for connecting stores and providing centralized inventory
management). The widespread adoption of the Internet has both enhanced, and subtly changed, the
move toward increased networking in the computing environment. The adoption of cellular phones,
network-enabled smartphones, and tablets has added another layer of complexity in terms of devices
that must be connected.
Network management is a continuously evolving field with new standards, protocols, devices, connection
types, and speeds. Telecommunications technology continues to change, offering more options both in
mobility and network, and presenting new challenges to firms. The trends described here were current at
the time this course was developed. Additional trends will likely be apparent by the time you study this
course.
IS user demands
The ubiquity of the Internet (from a commercial perspective at least) has provided a low-cost platform to
enable high levels of connectivity and communication between users in dispersed locations. It satisfies
two key user demands: low cost and location independence. Other demands from IS users include
- Network access from multiple devices, including laptops, desktops, tablets, smartphones, scanners, printers, production machines, telephone systems, and so on. Whatever device is required in business is also required to connect to the network. This also includes the growing network of things (appliances, cars, buildings, and so on) that use WiFi and other wireless technologies to connect and share content on the Internet.
- Easy, transparent access to network resources without having to worry about their location, format (from a technical perspective), or the network protocols through which they connect
- Access to information (information is the key driver of usage of networked resources) any time, any place, from any device, and in any format. The key for the user is the information, not the system.
- Sufficient bandwidth for the various applications used, including voice and text messaging, video transmission, document exchange, and speedy access to applications, whether working alone or collaborating in groups
What are these historical trends and user demands leading to in the management of corporate networks?
Four trends in network management are evident today:
- increasing availability of wireless solutions
- less private infrastructure in the network
- more centralization of application processes
- greater opportunities for remote administration
Topic 8.6 explains the trend toward wireless solutions, while this topic describes the remaining trends.
Less private infrastructure in the network
In the early 1990s, organizations that wanted to build large corporate networks typically had to control
most of the infrastructure. They may have leased lines from public carriers, but the administration and
control of the lines were the responsibility of the private organization.
Today, the Internet provides a common infrastructure that organizations use without having built the
supporting proprietary networks. This has significantly lowered the cost of connectivity and has made
networking substantially easier through adherence to a standard set of communications protocols called
TCP/IP. This lower cost of connectivity has come at a price.
The disadvantages of reliance on a public infrastructure are twofold. First, security has become a greater
challenge. As the number of users sharing the same infrastructure has increased, the vulnerability of
companies to attack by those users has also increased.
Second, because the Internet was not designed to provide the real-time, high performance computing
that companies typically require, performance levels can be problematic. The Internet began as an
academic and military network, and high performance was not an issue. With a private network, a
guaranteed response time could be ensured for any application as part of the contract with the network
provider. With the Internet, the number of users online affects performance levels, as does the amount
of network traffic they generate. You have likely experienced the degradation of performance of the
Internet at times during the day when usage is likely to be highest. For most personal use, this is not a
concern, but for corporate transaction processing, where some organizations have contracts based on
processing speed, the performance losses can be problematic.
The reliance on a public infrastructure like the Internet also makes the management of the infrastructure
more complex. Companies must deal with multiple providers: one for routers and switches, another for
leased lines, a third for Internet access, and a fourth for cellular/paging services. Inevitably, when there
are problems, getting to the root of the problem can be time consuming. The hardware provider claims
it's an Internet problem, the cellular provider says it's not their network, and so on. Sorting through these
different connections to find the root of connectivity problems can be challenging for network
administrators. Still, for most organizations, the lower cost and decreased management time devoted to
designing and maintaining proprietary networks is worth the added management complexity.
Centralization of application processes
The design of networks seems to follow an alternating pattern of centralization and decentralization. As
noted earlier in this module, there are trade-offs to having centralized resources as opposed to
decentralized resources. For centralized processing, the principal advantages include
- ease of administration because processing resources are centrally controlled
- justification of more costly (and thus higher-performance) processors for the central server
- stronger security made possible through central control
- simpler application design than for most distributed processing systems
- straightforward provision of data and process integration
For decentralized processing, the advantages include
- generally better design from a user interface standpoint (graphical design)
- greater flexibility to accommodate local processing needs
- greater sense of data ownership by distributed users, often leading to more concern over data integrity
Because of the tensions between flexibility and integration, and between control and ownership, there is
a tendency for systems design to move back and forth between relatively centralized and decentralized
processing. The current trend is toward a more centralized approach with respect to processing and
applications, even while networks are increasingly distributed. Applications are increasingly designed to
run over the Internet with processing on Internet servers, using applications written in portable
languages such as Java. The trend toward thinner clients in a client-server network means that the
requirements on the client machine are lowered and the complexity of design is reduced.
Cloud computing offers a combination of centralized and decentralized computing. It is centralized in the
sense that data may be stored in a single, centralized data centre, but in reality the database is probably
replicated across different virtual machines, making it a decentralized design. Moreover, a single web
application, hosted from a cloud-based server, may make use of other decentralized services, yet at the
same time its services are provided to thousands of users at a single, centralized address.
Whether there will be a move back toward more decentralized processing at some future point is
uncertain. For now, centralization of processing provides benefits, such as reducing the total cost of
ownership and providing for application integration and location independence. These benefits suggest
that centralization is currently the best approach and will most likely continue for some time. But as cloud
takes on more of an enterprise role, the distinction between centralized and decentralized may become
less clear.
Cloud computing
Cloud computing represents a new version of a centralized/decentralized environment. It extends SaaS,
IaaS, and PaaS, potentially reduces infrastructure and IT costs, and offers a host of other possibilities, by
serving applications and architecture through private or public clouds. It is not enough to look at these
capabilities and conclude that we are already doing this today. What cloud computing offers is,
potentially, a way to do things (connections, integrations, collaborations) in ways that were never
possible in IT. It has the potential to be a truly disruptive technology, but that will depend on its
adoption. What is interesting is how smaller companies with limited resources are going to have access to
the same computing, storage, and networking opportunities that are traditionally reserved for larger
organizations. Cloud is going to introduce layers to our computing experience, and this will change how
organizations connect, and for what reasons.
Remote network administration
Despite what has been the move toward increasing centralization of processing in applications,
computing resources (such as personal computers and printers) remain distributed throughout the
organization. As more devices are connected to corporate networks from more locations, the challenge of
administering the network (for example, keeping applications up-to-date, supporting users, and
protecting user machines through virus scanning software) is increased.
To address this challenge, network operating system software allows for remote administration of
network resources. For example, it can automatically send software updates to users at the time they log
in to the network or at scheduled intervals. It can automatically run virus scanners when users start up
their machines, and it can automatically make available new virus definitions to users. These functions
reduce the cost and time associated with application upgrades and contribute to an efficient IS
organization.
Remote administration is also useful in a help desk environment, where technicians can take control of a
user's machine via the network for the purpose of troubleshooting. While talking with a user over the
phone, the technician can see what the user sees and can demonstrate to the user what needs to be
done to fix the problem. Such applications permit speedier resolution of technical problems and allow the
user to return more quickly to productive work.
Operating systems today have remote administration capabilities built into them. There are also third-
party providers, like Citrix, who offer remote administration services such as GoToAssist, GoToMyPC, and
GoToManage, as cloud-based methods of maintaining support in a more globalized world.
8.5 Network security issues
Learning objective
Evaluate the key security challenges that relate to organizational networking. (Level 1)
No required reading
LEVEL 1
Securing networks, especially the data maintained on network resources, is critical to any business use of
IT. It has been noted, with tongue in cheek, that the only truly secure computer is locked in a room
many feet below ground, not connected to any other computer, and not turned on. There is no doubt
that the increasing connectedness of computers leads to greater risks of both accidental and intentional
threats to data security. And since the introduction of SOX in the United States, greater focus on physical
and logical system security measures is required for SEC compliance.
Key network security issues can be identified in the basic communications model. In that model, a data
message travels from sender to receiver through some channel with appropriate encoding as it passes
through the network. The sender and receiver need to be satisfied with the security of the data; that is,
- The identity of the sender is confirmed.
- The message has not been modified in transit.
- The message is received by the recipient, and only the recipient.
Security measures must be taken at the endpoints of the model (the sender and receiver) and in the
middle of the model (the channel). Security measures at the endpoints of the model relate to
authentication and access. Measures in the middle of the model relate to network design factors, such as
the channel used and encryption.
Authentication and authorization
Every network must have appropriate procedures to ensure that authorized users, and only authorized
users, have access to the network. Assignment of user names, and of account privileges to those names,
ensures that users have appropriate access to the resources they need, and not to those they do not.
The use of passwords and other identification schemes (such as biometrics) helps ensure that only
valid users are gaining access. Yet an annual survey by the Computer Security Institute, in
combination with the U.S. Federal Bureau of Investigation, shows that only 51% of organizations use at
least some form of password protection. This seems to be a major limitation in the security protections of
surveyed companies, which often view system security as an outside problem. In the latest CSI Computer
Crime and Security Survey 2010/2011 report, the following key findings are highlighted:
- Of the approximately half of respondents who experienced at least one security incident last year, 45.6 percent reported they'd been the subjects of at least one targeted attack.
- When asked what actions were taken following a security incident, 18.1 percent of respondents stated that they notified individuals whose personal information was breached, and 15.9 percent stated that they provided new security services to users or customers.
- When asked what security solutions ranked highest on their wish lists, many respondents named tools that would improve their visibility: better login management, security information and event management, security data visualization, security dashboards, and the like.
- Respondents generally said that regulatory compliance efforts have had a positive effect on their organization's security programs.
Source: CSI Computer Crime and Security Survey 2010/2011, Computer Security Institute, accessed
March 13, 2013: http://gocsi.com/survey
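A minimal sketch of the password side of authentication, using Python's standard library: the server stores only a random salt and a derived hash, never the password itself, and compares hashes at login. The parameter choices here are illustrative, not a policy recommendation.

```python
import hashlib
import hmac
import os

# Store only a salt and a derived hash, never the password itself.
def store_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)   # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# At login, re-derive the hash and compare in constant time.
def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, stored = store_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Even if the stored salt and hash are stolen, the attacker still has to guess the password; the slow key-derivation function makes that guessing expensive.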
Network design
Channels
As noted earlier in this module, some channels are inherently more secure than others. Fibre-optic cables
cannot be tapped into in the same way that twisted-pair wire can be. Twisted-pair wire uses electrical
impulses to send signals. When a second cable is spliced onto an existing cable, the impulses reach the
split and continue on down both paths. But with a fibre-optic cable, signals are sent as pulses of light
through fine strands of glass. When the glass fibres are cut to add on a second cable, the pulse of light
cannot continue and the network is broken. This channel characteristic results in greater inherent
security; however, it also drives part of the cost of expanding a fibre optic network because connecting
additional nodes requires more costly hardware to deal with the signals as they reach the connection
points.
Wireless networks (Topic 8.6) are among the least secure from a channel perspective. The channel used
to transmit a signal in a wireless network is open air. Tapping into a wireless network does not require
attaching additional connections in any physical sense. Many organizations include wireless intrusion
detection systems (WIDS) to help enforce security.
Checkpoints in the network: The firewall
A firewall is a device (hardware or software) that sits between a computer or internal company
network and the broader networks to which they are connected. All traffic going into or out of the
network passes through the firewall. The firewall protects the network by limiting the kinds of traffic that
can come into the network and restricting the amount of information communicated outside of the
network. For example, the firewall may block all traffic that comes from a particular IP address or all
traffic that attempts to access a particular port. A port can be thought of as a connection into the
computer; different applications use different ports. E-mail, for example, typically uses port 25, while web
traffic (HTTP) uses port 80. Blocking a port blocks all traffic of that kind.
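The address- and port-blocking logic described above can be sketched as a toy rule check. Real firewalls apply far richer rule sets, and the addresses and ports here are illustrative only.

```python
# A toy illustration of firewall filtering: decide whether to admit a
# packet based on its source address and destination port. Real
# firewalls apply much richer rule sets, but the principle is the same.

BLOCKED_ADDRESSES = {"203.0.113.9"}   # example (documentation-range) IP
ALLOWED_PORTS = {25, 80, 443}         # e-mail, HTTP, HTTPS

def admit(source_ip: str, dest_port: int) -> bool:
    if source_ip in BLOCKED_ADDRESSES:
        return False                  # traffic from this address is blocked
    return dest_port in ALLOWED_PORTS # only these services accept traffic

print(admit("198.51.100.7", 80))   # True  -- web traffic from an allowed host
print(admit("203.0.113.9", 80))    # False -- blocked source address
print(admit("198.51.100.7", 23))   # False -- telnet port not allowed
```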
Encryption
Data travelling across a network channel may be intercepted by network users other than those intended
to receive it. In the same way that wiretaps can be used to listen in on telephone conversations, tools
such as packet sniffers can be used to track and "read" packets of information travelling on a network.
Many measures can be taken to limit the threat of interception, but ultimately, the risk is still there.
Encryption involves encoding messages so that even if they are intercepted, they cannot be
understood. Encryption is different from the kind of encoding described earlier in this module. Whether or
not encryption is used, some form of encoding to allow messages to travel on the network is necessary
for example, the creation of IP packets in an IP network. Encryption is an additional form of encoding
that takes the message and transforms it according to some predefined scheme, using a secured
algorithm, so that only the sender and receiver can see what the message really says. Some common
encryption standards include AES (advanced encryption standard) and 3DES (triple data encryption
standard). Given the relative insecurity of wireless networks, encryption schemes are particularly
important in these environments.
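The principle can be illustrated with a toy XOR scheme. To be clear, this is not AES or 3DES and offers no real security; it only shows how a shared key transforms a message so that an interceptor without the key sees gibberish, while the intended receiver recovers the original.

```python
from itertools import cycle

# Toy XOR "cipher" -- NOT a real standard like AES or 3DES, and not
# secure. It only illustrates the principle that the same shared key
# both scrambles and unscrambles the message.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
plaintext = b"Transfer $500 to account 1234"

ciphertext = xor_cipher(plaintext, key)   # what a packet sniffer would see
recovered = xor_cipher(ciphertext, key)   # applying the key again decrypts

print(ciphertext != plaintext)   # True: unreadable in transit
print(recovered == plaintext)    # True: receiver gets the original back
```

Real standards like AES differ enormously in strength, but the contract is the same: without the key, the intercepted bytes are useless.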
8.6 Planning and managing wireless networks
Learning objectives
Assess the benefits and limitations of WiFi networking. (Level 1)
Compare three technological approaches to wireless networking in the organization and briefly
explain their application. (Level 2)
Required reading
Chapter 7, Section 7.4, The Wireless Revolution
LEVEL 1
Wireless networks are becoming increasingly common for both personal and business computing.
Hotspots, public or commercial locations that offer wireless connectivity, can be found in stores like
Starbucks and McDonald's, and are a matter of course in hotels. In 2009, The Economist put the number
of worldwide hotspots at over 286,000. In 2012, BusinessWire online reported that Fon, the world's
largest WiFi network, had surpassed 5 million hotspots in 100 countries. A wireless network is generally
referred to as a WiFi (wireless fidelity) network and follows the IEEE 802.11 family of standards
(802.11a/b/g/n), with nominal connection speeds ranging from 11 Mbps (802.11b) to 54 Mbps (802.11g),
and higher for 802.11n. Wireless applications are varied and include the following:
- Text messaging: short messaging services (SMS) that allow text messages to be communicated from cellular phone to cellular phone
- Wireless web applications: applications that allow users to connect to the Internet from multiple devices
- Bluetooth: a communications protocol that wirelessly connects devices such as headsets to cell phones, or game controllers to gaming machines
- Wireless local area networks (WLANs): as mentioned, appearing in a variety of locations that thrive in the community. Many cities have also experimented with location hotspots: areas of WiFi connectivity open for public use.
- App stores: online locations specific to the vendor of a particular device, offering for free and for purchase a variety of single-purpose mobile applications, from entertainment to business productivity
- Social media: applications like Twitter, Facebook, and Foursquare that are cross-platform and device independent, and allow users to keep connected to their circle of friends, even to the point of physical location
This topic focuses primarily on the WLAN application. However, many of the issues are common to all
types of wireless networks. Wireless, like all technologies, offers benefits and has limitations too.
Benefits of wireless networking
There are many advantages to wireless LANs. The four most important are
- scalability: Adding users does not require establishing a physical connection.
- ease of installation: Installing a wired network requires running cable through walls and floors, which can be especially problematic in older buildings. A wireless network requires the positioning and configuration of access points to provide coverage, but once they are set and tested, devices can be easily connected.
- portability and mobility: In many environments, users do not work in a single location at all times. A wireless network allows users to move around the network with their computers (for example, to meetings or demonstrations) without having to establish a new physical connection at each point. Wireless networks offer businesses freedom of movement, unrestricted by cables.
- cost: Because of their ease of installation and scalability, wireless networks are often less expensive to install and maintain. The only issue becomes WiFi enhancements: as connectivity speeds increase, wireless access points may have to be replaced to accommodate the faster speeds. Early business adopters of WiFi found that access points purchased for 802.11b at 11 Mbps could not accommodate the faster 802.11g at 54 Mbps. Although both are now supported through a single access point, that was not the case when WiFi was first being adopted in business.
Limitations of wireless networking
The benefits make a strong business case for wireless networking. However, the limitations must be
considered. The two principal limitations are:
speed: So far, the dominant wireless networking standard, IEEE 802.11g, operates at speeds of
approximately 54 Mbps. Wired networks frequently operate at 2 to 20 times that speed
(100 to 1000 Mbps).
security: As noted in the previous topic, wireless networks are inherently less secure. Any
installation of a wireless network requires careful attention to security issues to ensure that the
network is not compromised. Key to wireless security is deciding what type of security to install.
If no password is required to connect to a wireless network, there is no security. WEP
(Wired Equivalent Privacy) is the oldest wireless security algorithm, and has been deprecated
since approximately 2005. WPA (WiFi Protected Access) became available in 2003 and uses TKIP
(Temporal Key Integrity Protocol), which addresses the encryption part of wireless security.
Together, WPA and TKIP offer acceptable security, but WPA2 is considered superior to WPA
because it replaces TKIP with CCMP (Counter Mode with CBC-MAC Protocol). CCMP offers enhanced
encryption of the data and was designed for data confidentiality, addressing the vulnerability
issues of WEP. CCMP is considered much more secure than both WEP and WPA with TKIP, and is
offered in two versions: WPA2 Personal and WPA2 Enterprise.
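As a rough illustration, the progression of these security options can be captured in a few lines of Python. The rankings below simply restate the ordering given in the text; this is an illustrative sketch, not a security-auditing tool, and the labels are informal names rather than any vendor's API.

```python
# Relative strength of the wireless security options discussed above.
# Higher rank = stronger. This ordering restates the text; it is an
# illustration only, not a security audit tool.
SECURITY_RANK = {
    "Open": 0,        # no password means no security
    "WEP": 1,         # deprecated since roughly 2005
    "WPA/TKIP": 2,    # acceptable; released 2003
    "WPA2/CCMP": 3,   # current recommendation (Personal or Enterprise)
}

def stronger(a: str, b: str) -> str:
    """Return whichever of two security modes the text ranks higher."""
    return a if SECURITY_RANK[a] >= SECURITY_RANK[b] else b

print(stronger("WEP", "WPA2/CCMP"))  # WPA2/CCMP
```

A network administrator comparing two candidate configurations would, in the same spirit, always prefer the higher-ranked mode that all connecting devices support.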
What a manager must consider
As a non-IT manager, you must understand the advantages and limitations of the wireless approach to
decide whether it is worthwhile for your organization to consider. Ask yourself about the importance of
mobility and portability within your organization. Is it valuable enough to consider sacrificing some
speed? Although 54 Mbps is not as fast as wired networks, it is still a respectable speed. The choices you
make will depend on the specifics of your organization and its needs.
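One concrete way to weigh the speed question is to compute transfer times at the nominal rates mentioned above. The sketch below compares a hypothetical 500 MB file transfer over 802.11g wireless and fast wired Ethernet; real-world throughput is well below nominal rates, so treat the numbers as best-case estimates.

```python
def transfer_seconds(size_mb: float, rate_mbps: float) -> float:
    """Seconds to move size_mb megabytes at rate_mbps megabits per second
    (nominal rate; real-world throughput is considerably lower)."""
    return size_mb * 8 / rate_mbps

# A hypothetical 500 MB file over 802.11g vs. gigabit wired Ethernet
print(round(transfer_seconds(500, 54), 1))    # 74.1 seconds at 54 Mbps
print(round(transfer_seconds(500, 1000), 1))  # 4.0 seconds at 1000 Mbps
```

For many office tasks (e-mail, documents, browsing) the wireless figure is entirely acceptable, which is why the mobility benefit often outweighs the raw speed gap.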
LEVEL 2
Wireless options
There are many technologies to support wireless networking:
IEEE 802.11 standard (WiFi)
The IEEE 802.11 standard is the dominant approach for WLANs. The standard was initially proposed by
the Institute of Electrical and Electronics Engineers (IEEE) in 1997. It was designed to use then-available
technologies for providing communications through open air, but there were many problems associated
with this original standard.
Originally, the 802.11 standard was quite slow (only 1 to 2 Mbps, often used in production for limited use
wireless bar code scanners) and had problems dealing with error detection and correction. In 1999, the
802.11b standard was released, which transmitted in the 2.4 GHz spectrum (the unlicensed radio
frequency spectrum where signals in a wireless network travel). Since that time, 802.11 signals have
travelled both within the 2.4 and 5 GHz spectrums.
The four sub-standards for WLAN applications
Continuing developments in the standard have resulted in four common sub-standards: 802.11a (2001),
802.11b (1999), 802.11g (2003), and 802.11n (2009).
802.11b was the first of these to be ratified; consequently, it was the most common for some time. The
speed of 802.11b (up to 11 Mbps) is considered adequate for wireless LANs, but, as previously noted, it
is not as fast as can be achieved with wired networks. 802.11b is still common in home networking and in
business applications, but because most access points today support both 802.11b and 802.11g, most
organizations switch to the faster option.
The 802.11a standard offers higher speed than 802.11b (up to 54 Mbps), but it is more costly and less
widely adopted. Unfortunately, 802.11a is not compatible with 802.11b, so hardware designed to work with
802.11a cannot communicate with 802.11b hardware. This either/or choice meant that businesses with
already established 802.11b wireless networks didn't migrate to 802.11a.
802.11g is an extension of the 802.11b standard introduced in 2003. Like 802.11a, it offers data rates of
up to 54 Mbps, but unlike 802.11a, it is fully compatible with 802.11b hardware. This means that
companies can upgrade to new 802.11g equipment while still using their 802.11b gear on the same
wireless network. Today, new network installations would likely use 802.11g or 802.11n, and most
existing networks operate with a combination of 802.11b and 802.11g.
802.11n is capable of supporting all three previous standards: 802.11a, 802.11b, and 802.11g. An
802.11n network can support speeds from 54 Mbps up to 600 Mbps. However, to achieve the
maximum throughput, a pure 802.11n network in the 5 GHz spectrum must be created. Many products, including routers
and access cards, are WiFi certified for 802.11n.
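For reference, the sub-standards above can be collected into a small table. The figures simply restate the nominal values given in the text; this is an illustrative summary, not a configuration reference.

```python
# Ratification year, nominal maximum data rate, and 802.11b backward
# compatibility for the sub-standards discussed above (figures as
# given in the text).
STANDARDS = {
    "802.11a": {"year": 2001, "max_mbps": 54,  "b_compatible": False},
    "802.11b": {"year": 1999, "max_mbps": 11,  "b_compatible": True},
    "802.11g": {"year": 2003, "max_mbps": 54,  "b_compatible": True},
    "802.11n": {"year": 2009, "max_mbps": 600, "b_compatible": True},
}

def fastest(standards: dict) -> str:
    """Return the name of the sub-standard with the highest nominal rate."""
    return max(standards, key=lambda s: standards[s]["max_mbps"])

print(fastest(STANDARDS))  # 802.11n
```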
WiMAX or 802.16m is a developing protocol for fixed and mobile Internet access. It promises to offer a
fixed speed of up to 1 Gbps. The WiMAX Forum states that WiMAX is "a standards-based technology
enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL."
WiMAX to WiFi comparison
WiMAX can support a wireless connection of up to 15 km. An 802.11 connection supports a
maximum of 100 m.
WiMAX works in both licensed (5 GHz) and unlicensed (2.4 GHz) spectrums. It is a complementary
product to 802.11 connections.
By 2015, ABI Research predicts that there will be nearly 59 million WiMax2 or mobile WiMax global
subscribers.2
As a speed comparison, remember that Ethernet networks running at 100 Mbps are not uncommon, and
gigabit Ethernet (1000 Mbps) is being commonly implemented, particularly in larger enterprises. Ten-
gigabit Ethernet, although more costly, is also being developed and implemented, especially in academic
settings. However, the WiFi 802.11n speed of 600 Mbps is a valid rival, as is the WiMAX speed of 1000
Mbps.
Some observers see WiMax as a competitor to LTE (Long Term Evolution), usually stated as 4G LTE. The
market will determine whether WiMax takes hold and gains significant adoption by carriers.
Bluetooth
Bluetooth is a standard for exchanging data without wires over short distances. Using Bluetooth-
compatible devices, for example, you can set up your smartphone to automatically recognize when it is in
range of your PC and to synchronize immediately. This saves you from having to remember to manually
synchronize, and ensures that you have the most up-to-date information at any time. Bluetooth
technology can also be used to connect headsets to mobile phones, create instant networks between
compatible devices, connect Bluetooth printers to PCs, and perform a variety of other applications.
A network of Bluetooth devices is called a personal area network (PAN). The devices transfer data
between one another at speeds of up to 1 Mbps in the 2.4 GHz spectrum. Bluetooth was not created for network
connectivity, but for device connectivity. Printers, computers, GPS navigation devices, gaming consoles,
telephones, and digital cameras are all devices that can benefit from Bluetooth. It is possible to enter an
office where all devices connect with each other wirelessly, with power being the only necessary cable. A
single Bluetooth device (master) can communicate with up to seven other devices (slaves) in a piconet
(an ad-hoc network of Bluetooth devices). To qualify as a Bluetooth device, the device must adhere to
the standards of the Bluetooth Special Interest Group (SIG).
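The one-master, seven-slave rule described above can be modeled in a few lines. The class below is a toy sketch of that limit for illustration; it is not a real Bluetooth API, and the device names are hypothetical.

```python
class Piconet:
    """Toy model of a Bluetooth piconet: one master device and at most
    seven active slaves, mirroring the rule described in the text.
    This is an illustration, not a Bluetooth stack."""
    MAX_SLAVES = 7

    def __init__(self, master: str):
        self.master = master
        self.slaves: list[str] = []

    def connect(self, device: str) -> bool:
        """Attach a slave if the piconet has room; return success."""
        if len(self.slaves) >= self.MAX_SLAVES:
            return False
        self.slaves.append(device)
        return True

# A hypothetical office piconet anchored by a smartphone
net = Piconet("smartphone")
for d in ["headset", "printer", "gps", "camera", "console", "tablet", "watch"]:
    net.connect(d)
print(net.connect("keyboard"))  # False: the piconet is already full
```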
Bluetooth vs. WiFi
Bluetooth is primarily aimed at the personal-connectivity world rather than the PC network-infrastructure
world. It operates at speeds up to 1 Mbps and does not have the capacity to support networking at the
level of the 802.11 sub-standards. But the ability to create instant networks has interesting commercial
potential and presents interesting privacy challenges. The creation of these networks is what allows a
Bluetooth-enabled smartphone to recognize and synchronize with your PC.
Infrared
An infrared connection allows you to beam information between two line-of-sight devices without any
other form of network connection. It represents, in some ways, the lowest common denominator of
wireless networking technology and is probably the most widely available and cheapest form of wireless
networking. However, that's about all it has to recommend it. Infrared capabilities are much more limited
than either the 802.11 sub-standards or Bluetooth and are not likely to improve significantly. Remote
television controllers use infrared.
The primary problem with infrared technologies is the requirement for direct line-of-sight connections.
You cannot beam information among three people simultaneously, because each transfer requires a direct
line of sight between two devices. Infrared connections require too many access points to provide the
level of connectivity needed for larger scale applications. Its speed is also very slow (normally only
up to about 4 Mbps).
Near field communication
Near field communication (NFC) is a short-range, high frequency wireless communications technology
where data can be passed between devices about 10 cm apart. It is an extension of RFID, where the
smartcard and the reader are in a single device. Current usage of the technology is toward mobile
payment, where the smartphone device can act as a debit/credit card. Future uses include electronic
money, identity documents, and mobile commerce applications. NFC can also configure and initiate other
wireless network connections such as Bluetooth and WiFi.
LEVEL 1
Implementation
Planning for a wireless network requires, in addition to making choices about what networking
approaches to use, planning for the infrastructure change within the organization. It requires a careful
consideration of the costs, the risks, and the uses, and how these relate to the strategic direction of the
business.
Most organizations have a significant investment in wired networking. Moving to a wireless network
requires abandoning much (but not all) of that infrastructure, which adds a dimension to the analysis of
benefits and limitations. The concept of sunk costs still applies, but often the question facing managers is
not so much "is wireless networking worth it?" but "is wireless networking worth it right now?" If the
existing network is functioning reliably and nothing has really changed regarding the demands for
mobility and portability within the organization, it is questionable whether a significant investment in new
infrastructure needs to be made. The analysis always begins as a balance of cost versus benefit.
Certainly, there is a need for IS organizations to experiment with wireless networking in order to become
familiar with its capabilities and challenges within their organizational contexts. The idea is to test
different approaches for different applications and to gain experience with the technology in preparation
for making a major infrastructure change. For managers, the question of wireless networking then
involves if, when, and on what scale elements.
1. Businesswire.com. "Fon Tops Five Million WiFi Hotspots Worldwide." Accessed April 11, 2013.
http://www.businesswire.com/news/home/20120222005850/en/Fon-Tops-Million-WiFi-Hotspots-Worldwide
2. Slashgear.com. "ABI Research predicts 59 million WiMax subscribers by 2015." Accessed April 11,
2013. http://www.slashgear.com/abi-research-predicts-59-million-wimax-subscribers-by-2015-10101651/
8.7 Remote computing management issues
Learning objective
Evaluate the social costs and benefits, management, and maintenance of remote computing and
telecommuting. (Level 1)
No required reading
LEVEL 1
Topic 8.4 described the trend toward remote administration of networks (running software on servers to
control distributed clients). This topic focuses on a different concept of remote computing, where users
may be distributed in different locations throughout a city, country, or the world. Yet these users must be
supported adequately as they work with the company's information systems, in many cases as if they
were locally connected users.
Types of remote workers
There are many pressures and job requirements that encourage remote computing. Some remote users
travel from location to location, others are telecommuters, and still others are mobile connectors who
require constant e-mail updates. Here are definitions of remote computing users:
Road warriors
Some remote workers are the so-called road warriors. Often working in consulting or corporate sales,
these users spend as much time as possible on the road at client sites. Because their primary job requires
interaction with customers, it makes sense for them to work as near to the customers as possible. These
users rely on mobile technology, such as laptop computers, tablet PCs, and cellular/smartphones. They
may connect to the company's systems through their cellular telephone service, or perhaps through a
wired or WiFi network connection in the various locations they visit. Various dial-up services allow users
to use a local telephone call as the basis for establishing their network connection, and many carriers
now offer USB mobile connectors for notebook computers. In addition, WiFi hotspots are located in many
cities, and can be found online. Most hotels offer high-speed Internet connections as well.
Cloud-based applications are beginning to interest road warriors because of their centralized location;
access usually requires only an Internet connection and a browser. This means the device is secondary,
which is important to road warriors. Mobile computers that fail on the road are difficult to support.
Organizations can purchase extended warranties that often include fast turnaround for problem machines,
but for a worker on the road, even a fast turnaround may be too slow. By moving road warriors to
applications like Google Docs, Gmail, and Dropbox, a failed computer does not mean lost information.
Any mobile device with an Internet connection allows retrieval of information, as well as the ability
to create new material with little interruption.
Telecommuters
Other remote workers are telecommuters. They work part-time or full-time from their homes, or from
satellite offices nearer their homes. Many factors encourage some employees to work at least part-time
from home:
the challenges of balancing work and family: For employees with young families, the opportunity
to work from home to be able to supervise lunch or be home after school can reduce the stress of
balancing work and family roles. It results in employees who are more satisfied and less stressed
and who, in theory, will perform better as a result.
the problems of long commutes: Long commutes are a challenge from a work and family
perspective, but they also pose more general problems. As oil and gas prices increase, the costs of
commuting rise dramatically; as more people commute into large urban centres, the risk of
accidents increases.
Managing remote workers
Managing remote workers of either type raises interesting challenges. One of the most difficult challenges
is the change in mindset to allow workers to work remotely. Managers often find it difficult to know how
to evaluate employees who they cannot see regularly. How will they know if employees are working or
not working when they are located in their own home? On a related theme, telecommuters are in some
ways less likely to be promoted because they are less visible within the organization. This is a significant
cultural challenge that must be dealt with in situations where employees are remote workers. As a
manager, it requires greater trust in your employees and a transition to managing outcomes rather than
process (in other words, focusing on the deliverables and results, rather than on the inputs and time
spent). As an employee, it requires an effort to stay in contact with the other employees at the office,
and not fall victim to isolation.
In 2013, Yahoo! CEO Marissa Mayer, to the consternation of many employees, banned work-from-home
telecommuting. The decision has been both praised and derided. Many see Yahoo!'s decision as archaic,
a throwback to earlier times, and not the decision of a progressive, 21st-century Internet company. But
others believe that Mayer, coming from a Google background where employees were encouraged to work
from the Googleplex and not from home, may be trying to establish a culture similar to the one she
helped foster at Google. In addition, many believe that she used data on how often work-at-home
users were actually checking in, and was not impressed with the findings.
Setup costs for a remote worker
Another challenge is allocating the costs of dedicated connections and equipment used by remote
workers who work from home. Suppose a sales analyst for a large consumer goods firm wants to work
three days a week from home. Perhaps the individual lives a long way from the firm's offices and
wants to minimize commuting time. The analyst's job is one that can be done easily from a remote
location. It requires access to corporate computer systems, printing capability (to print reports), phone
and e-mail. Because the individual will work two days a week at the office, meetings can be scheduled
without the need for video conferencing equipment.
The set-up costs for such an arrangement may include a dedicated phone line or high-speed Internet
connection, printing and fax equipment, and perhaps a dedicated PC. A dedicated PC may be preferable
to a home PC because of performance issues, but also for security reasons. Keeping the work PC
separate from home applications reduces the risk of accidental destruction of information.
Who should pay for this equipment? The individual clearly benefits through reduced commuting time and
costs, so it may be tempting to assume the costs should be borne by the individual. But with this
approach, control also goes to the individual. Relying on the individual's resources may result in lower
security if the computer is shared with other family members. If the organization supplies a personal
computer for use by remote workers, rules about what additional software can be installed on the
computer can be more easily enforced. In most cases, some combination of individual and organizational
allocation of costs is typical.
What is usually required is a company policy that clearly specifies the different types of remote workers.
It must clearly state equipment, connections, and the dos and don'ts of working offsite with company
equipment. It must also explain responsibilities, and how incurred costs are to be handled. For example,
for an Internet connection, it's not uncommon for the cost to be shared between the company and the
employee.
Security issues
Security is an issue that must be considered with remote computing. As already noted, separating the
home PC from the work PC can enhance the security of corporate data resources. Encryption of data is
another measure that may be particularly valuable in the case of remote workers who often connect to
corporate networks through the Internet.
Through a combination of encryption and authentication, virtual private network (VPN) technology
provides a means of ensuring security when using the public infrastructure to allow remote users to
connect to corporate servers. VPNs essentially create a two-way, encrypted tunnel through which data
can be sent between the user's PC and the company's servers. They are a cost-effective way to provide
remote access. Often hand-in-hand with a VPN, an RSA SecurID token is issued as another security level
to ensure the authenticity of the remote user. Many organizations rely on VPNs to provide a secure
remote connection when a user is not communicating on the local or a fully trusted network. For many,
they are becoming an indispensable security tool.
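The "encryption plus authentication" combination behind a VPN tunnel can be illustrated with Python's standard library. This is a deliberately simplified sketch: the hash-derived keystream is a toy stand-in for a real cipher, and production VPNs use vetted protocols such as IPsec or TLS, never homemade schemes like this one.

```python
import hashlib
import hmac

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (toy stand-in for a cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt (XOR with keystream), then append an HMAC authentication tag."""
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def unseal(key: bytes, sealed: bytes) -> bytes:
    """Verify the tag, then decrypt; raise ValueError if tampered with."""
    ct, tag = sealed[:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))

key = b"shared-secret"          # hypothetical pre-shared key
msg = b"quarterly sales report"
print(unseal(key, seal(key, msg)) == msg)  # True
```

The point of the sketch is the two-step discipline the text describes: confidentiality (the data is unreadable in transit) plus authentication (any tampering is detected before the data is trusted).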
Maintaining and supporting remote workers
Finally, the challenges of maintaining and supporting remote users must be considered. Remote
administration, addressed in Topic 8.4, may be more difficult to implement when users are connecting
remotely because the bandwidth available for applications (such as automatic updating) may not always
be available.
If users work part-time in the central office, remote administration is easier to handle. The application
can be easily designed to sense the user's connection type and apply updates only if the user is in the
office. But for users who work full-time at home or away from the office, other options are needed (such
as the delivery of updates on CD-ROM), which increases the complexity of the procedures within the IS
department. It is also possible for IT support to connect to a remote computer, take control of the
session, and ensure that all required updates are applied. But again, this can be time consuming and
requires scheduling between both parties.
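The "sense the connection type and update accordingly" idea could be sketched as follows. The connection labels and the policy itself are hypothetical, for illustration only; a real deployment would use whatever network-detection facilities the organization's management software provides.

```python
# Hypothetical policy: apply automatic updates only on fast, trusted
# connections; defer (and flag for support) otherwise. The labels and
# rules here are illustrative assumptions, not a real product's API.
def update_action(connection: str) -> str:
    """Map a detected connection type to an update strategy."""
    if connection == "office-lan":
        return "apply-automatic-updates"
    if connection == "vpn":
        return "apply-critical-updates-only"
    return "defer-and-notify-support"

print(update_action("office-lan"))  # apply-automatic-updates
print(update_action("hotel-wifi"))  # defer-and-notify-support
```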
Despite these issues, remote computing is here to stay. Employees have many reasons to work away
from the office, either full-time or part-time. Supporting these employees is much easier because of the
Internet, which provides an easy mechanism for connectivity. Finding ways to deal with the challenges
and developing workable policies for approving and supporting remote work are in the best interest of
both the individual and the organization.
New directions in remote computing
New directions in remote computing include desktop virtualization and cloud computing. Companies
offering cloud services, such as Amazon Elastic Compute Cloud (Amazon EC2) or Citrix Open Cloud,
among many others, provide a platform for organizations to configure servers for remote access (IaaS),
on any Internet-connected computer, and with a host of possible browsers. Using Citrix's XenDesktop
product, companies can create virtual desktops on practically any computer. These options can reduce
company costs by shifting the upkeep of the server to the cloud provider, and possibly reducing the
remote worker's hardware requirements to a scaled-down computer as opposed to a high-end road warrior machine.
Module 8 self-test
1. Describe the features of a simple network and the network infrastructure for a large company.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson Canada,
2011), page 234. Reproduced with permission from Pearson Canada.
Solution
2. Name and describe the principal technologies and trends that have shaped contemporary
telecommunications systems.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson Canada,
2011), page 234. Reproduced with permission from Pearson Canada
Solution
3. Define IT infrastructure from both a technology and a services perspective.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson Canada,
2011), page 165. Reproduced with permission from Pearson Canada
Solution
4. It has been said that within the next few years, smartphones will become the single most
important digital device we own. Discuss the implications of this statement.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson Canada,
2011), page 234. Reproduced with permission from Pearson Canada
Solution
5. Define a LAN, and describe its components and the functions of each component.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson Canada,
2011), page 234. Reproduced with permission from Pearson Canada
Solution
6. Describe how network economics, declining communication costs, and technology standards affect
IT infrastructure.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson Canada,
2011), page 166. Reproduced with permission from Pearson Canada
Solution
Module 8 self-test solution
Question 1 solution
A simple network consists of two or more connected computers. Basic network components include
computers, network interfaces, a connection medium, network operating system software, and either a
hub or a switch. The networking infrastructure for a large company relies on both public and private
infrastructures to support the movement of information across diverse technological platforms. It includes
the traditional telephone system, mobile cellular communication, wireless local-area networks, video
conferencing systems, a corporate website, intranets, extranets, and an array of local and wide-area
networks, including the Internet. This collection of networks evolved from two fundamentally different
types of networks: telephone networks and computer networks.
Source: Dale Foster, Instructor's manual to accompany Management Information Systems: Managing the
Digital Firm, Fifth Canadian Edition, (Toronto: Pearson Canada, 2011), Chapter 7, page 250. Reproduced
with the permission of Pearson Canada.
Module 8 self-test solution
Question 2 solution
Client/server computing, the use of packet switching, and the development of widely used
communications standards such as TCP/IP are the three technologies that have shaped contemporary
telecommunications systems.
Client/server computing has extended networking to departments, workgroups, factory floors, and other
parts of the business that could not be served by a centralized architecture. The Internet is based on
client/server computing. Packet switching technology allows nearly full use of almost all available
lines and capacity, which was not possible with the traditional dedicated circuit-switching techniques
used in the past. Having a set of protocols for connecting diverse hardware and software components
provides a universally agreed-upon method for data transmission. TCP/IP is the suite of protocols that
has become the dominant standard.
Source: Dale Foster, Instructor's manual to accompany Management Information Systems: Managing the
Digital Firm, Fifth Canadian Edition, (Toronto: Pearson Canada, 2011), Chapter 7, page 251. Reproduced
with the permission of Pearson Canada.
Module 8 self-test solution
Question 3 solution
Technical perspective is defined as the shared technology resources that provide the platform for
the firm's specific information system applications. It consists of a set of physical devices and
software applications that are required to operate the entire enterprise.
Service perspective is defined as providing the foundation for serving customers, working with
vendors, and managing internal firm business processes. In this sense, IT infrastructure focuses
on the services provided by all the hardware and software. IT infrastructure is a set of firm-wide
services budgeted by management and comprising both human and technical capabilities.
Source: Dale Foster, Instructor's manual to accompany Management Information Systems: Managing the
Digital Firm, Fifth Canadian Edition, (Toronto: Pearson Canada, 2011), Chapter 5, page 164. Reproduced
with the permission of Pearson Canada.
Module 8 self-test solution
Question 4 solution
Cell phones and smartphones are morphing into portable computing platforms that allow users to
perform some computing tasks that previously could only be accomplished on a desktop computer.
Smartphones enable digital capabilities like e-mail, messaging, wireless access to the Internet, voice
communication, and digital cameras. They also allow users to view short video clips, play music and
games, surf the web and transmit and receive corporate data. New generations of mobile processors and
faster mobile networks enable these devices to function as digital computing platforms allowing users to
perform many of the tasks of today's PCs on smartphones. Storage and processing power continue to
increase thereby rivalling those of the typical PC. That allows users to run key applications and access
digital content through smartphone technologies.
Managers and employees will be able to break the tether to the desk and desktop computer because of
smartphones. Users can more easily stay in touch with customers, suppliers, employees, and business
partners and provide more flexible arrangements for organizing work.
On the downside, smartphones can potentially increase the amount of time workers spend "on the job"
by making communication and computing possible anytime, anywhere. That may increase the amount of
techno-stress employees and managers experience by not allowing them any free time or claim to their
own personal space.
Source: Dale Foster, Instructor's manual to accompany Management Information Systems: Managing the
Digital Firm, Fifth Canadian Edition, (Toronto: Pearson Canada, 2011), Chapter 7, page 257. Reproduced
with the permission of Pearson Canada.
Module 8 self-test solution
Question 5 solution
A LAN is a telecommunications network that is designed to connect personal computers and other digital
devices within a half-mile or 500-meter radius. LANs typically connect a few computers in a small office,
all the computers in one building, or all the computers in several buildings in close proximity. LANs
require their own dedicated channels.
A typical LAN consists of computers (a dedicated server and clients), a network operating system
(NOS) residing on the dedicated server, cable (wiring) connecting the devices, network interface
cards (NICs), switches or a hub, and a router.
NIC: Each computer on the network contains a network interface device.
Connection medium: Links network components; can be a telephone wire, coaxial cable, or
radio signal in the case of cell phone and wireless local-area networks (WiFi networks).
NOS: Routes and manages communications on the network and coordinates network resources.
Dedicated server: Provides users with access to shared computing resources in the network. The
server determines who gets access to data and in what sequence.
Clients: Computers connected to one another.
Switches or hub: Act as a connection point between the computers. Hubs are very simple devices
that connect network components and send data packets to all other connected devices. A switch has
more intelligence than a hub and can filter and forward data to a specified destination.
Router: A special communications processor used to route data packets through different networks,
ensuring messages are sent to the correct address.
Source: Adapted from Dale Foster, Instructor's manual to accompany Management Information Systems:
Managing the Digital Firm, Fifth Canadian Edition, (Toronto: Pearson Canada, 2011), Chapter 7, pages
251-252. Reproduced with the permission of Pearson Canada.
Module 8 self-test solution
Question 6 solution
Network economics: Metcalfe's Law helps explain the mushrooming use of computers by showing that
a network's value to participants grows exponentially as the network takes on more members. As the
number of members in a network grows linearly, the value of the entire system grows exponentially and
theoretically continues to grow forever as members increase.
Declining communication costs: Rapid decline in costs of communication and the exponential growth
in the size of the Internet is a driving force that affects the IT infrastructure. As communication costs fall
toward a very small number and approach zero, utilization of communication and computing facilities
explodes.
Technology standards: Growing agreement in the technology industry to use computing and
communication standards. Technology standards unleash powerful economies of scale and result in price
declines as manufacturers focus on the products built to a single standard. Without economies of scale,
computing of any sort would be far more expensive than is currently the case.
Source: Adapted from Dale Foster, Instructor's manual to accompany Management Information Systems:
Managing the Digital Firm, Fifth Canadian Edition, (Toronto: Pearson Canada, 2011), Chapter 7, pages
166-167. Reproduced with the permission of Pearson Canada.
Module 8 summary
Managing telecommunications and networks
Compare the different channels used in telecommunications networks.
Telecommunications channels can be divided into two groups: wired (conducted) and wireless
(radiated).
Wired
o generally higher transmission rates
o more secure and durable
o higher cost, requires more resources to install and maintain
o greater distances
Wireless
o lower transmission rates
o less secure, less reliable
o less expensive to set up and maintain
o local in reach
Justify the purpose of communication protocols, and list the most common protocols in use today.
The basic communications model consists of a sender, encoder, channel, decoder, and receiver
(see Exhibit 8.1-1).
Communications can be either analog (a wave) or digital (1s and 0s).
Telecommunications protocols are the rules and procedures that govern the transmission and
communication of information and data.
Telecommunications protocols will
o identify each device in the communication path
o notify (gain the attention of) the intended receiver
o verify correct receipt of the message
o check for errors
o correct errors
In order for two devices to communicate they must use the same protocol.
The most common protocol is transmission control protocol/Internet protocol (TCP/IP), which is
used by the Internet.
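As a concrete (hypothetical) illustration of two devices using the same protocol, the sketch below opens a TCP connection on localhost using Python's standard library: both ends agree on TCP/IP for transport and on a trivial application-level convention (the server prefixes its reply with "ACK: ").

```python
# Minimal sketch of two programs communicating over TCP/IP:
# a server acknowledges whatever message the client sends.
# Runs entirely on localhost; port 0 asks the OS for a free port.
import socket
import threading

def run_server(server_sock):
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)          # receive the client's message
        conn.sendall(b"ACK: " + data)   # acknowledge, per our tiny protocol

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

t = threading.Thread(target=run_server, args=(server_sock,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server_sock.close()
print(reply.decode())  # ACK: hello
```

If the two ends did not share the same conventions (byte order, framing, acknowledgement format), the exchange would fail, which is precisely why agreed protocols matter.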
Analyze the strategic importance of telecommunications to the organization.
widespread access to common telecommunications platforms
increasing the reach of knowledge and information
the ability to interoperate with multiple networks and organizations
Distinguish between the different ways of classifying networks and the kinds of networks that these approaches
describe.
Networks can be defined by ownership:
o private networks where all of the components of the network are owned and controlled
by a single entity
o public networks may be owned by a single organization but are used to provide
infrastructure and services to other organizations or the public at large
Networks can be defined by their geography: local area networks (LANs) or wide area networks
(WANs).
Networks can be defined by their structure or topology: star, bus, ring, or hierarchical.
Networks can be defined by their protocols:
o The Internet is a TCP/IP network.
o Most LANs can be defined as Ethernet networks.
Finally, networks can also be defined by their purpose:
o Enterprise networks, for example, are designed to connect all portions of a large and
complex organization.
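The topologies listed above can be sketched as graphs (a hypothetical, stdlib-only illustration): in a star, every node links to one central device; in a ring, each node links to its two neighbours.

```python
# Sketch of two network topologies as adjacency lists:
# a star (all nodes attach to one central device) and a ring
# (each node attaches to its two neighbours, wrapping around).

def star_topology(nodes, center):
    """Every non-center node links only to the center."""
    adj = {n: [center] for n in nodes if n != center}
    adj[center] = [n for n in nodes if n != center]
    return adj

def ring_topology(nodes):
    """Each node links to the previous and next node in the ring."""
    k = len(nodes)
    return {nodes[i]: [nodes[(i - 1) % k], nodes[(i + 1) % k]]
            for i in range(k)}

nodes = ["A", "B", "C", "D"]
print(star_topology(nodes, "A"))  # A connects to B, C, D; each of them only to A
print(ring_topology(nodes))       # A connects to D and B, B to A and C, ...
```

The trade-off the shapes encode: a star concentrates traffic (and risk of failure) at the centre, while a ring distributes connectivity but makes every node depend on its neighbours.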
Assess the benefits and limitations of a networked system.
Key benefits are sharing data and resources and centralizing administration of resources.
Key limitations relate to cost and security.
Evaluate the implications of the major trends in network management.
increasing demands from IS users:
o network access for multiple devices
o ease of use
o access to information at any time
o home networking
o increased access to bandwidth
o device independence scalable applications
o mobility of access pushed to smart phones
increasing access to wireless networks
o WiFi hotspots in cities
o WiMAX infrastructure for distance wireless
less private infrastructure
more remote network administration
more centralized applications and processes
Evaluate the key security challenges that relate to organizational networking.
Networks must have appropriate procedures to authenticate and authorize users.
Security must fit the network channel and organizational needs (for example, wireless is less
secure than wired).
Encryption will protect data as it travels across different networks and channels, making it easier
to maintain confidentiality and security of data.
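A hypothetical sketch of the authentication and integrity idea using Python's standard library: the sender attaches an HMAC tag computed with a shared secret key, and the receiver recomputes the tag to verify both who sent the message and that it was not altered in transit (encrypting the payload itself would require an additional cipher, omitted here; the key and message are illustrative).

```python
# Sketch of message authentication with a shared secret (stdlib only):
# the sender tags the message with an HMAC; the receiver verifies the
# tag before trusting the data.
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # hypothetical key, agreed in advance

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

msg = b"transfer 100 units"
tag = sign(msg)
print(verify(msg, tag))                    # True: authentic and unmodified
print(verify(b"transfer 900 units", tag))  # False: message was tampered with
```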
Assess the benefits and limitations of WiFi networking.
The key benefits of a wireless network are
o scalability
o ease of installation
o portability and mobility
o cost
o ubiquity
The limitations of wireless networking (which are being challenged) are
o speed
o security
Compare three technological approaches to wireless networking in the organization and briefly explain their
application.
IEEE 802.11 (WiFi standard)
o Multiple sub-versions, including 802.11a, 802.11b, 802.11g, and 802.11n, offer speeds up
to 200 Mbps.
Bluetooth
o standard for connecting peripheral devices without wires
o an addition to data networks, not a replacement
Infrared
o limited capabilities, but many devices include infrared support; very inexpensive
Near field communications
o proposed for wireless payments through cell phones and smartphones, where devices are
held within approximately 10 cm of a reader to initiate a purchase transaction
Evaluate the social costs and benefits, management, and maintenance of remote computing and telecommuting.
There are two types of remote workers:
o road warriors: people who either work at a client's site or spend most of their time
maintaining and overseeing numerous locations
o telecommuters: people who typically work from home or satellite offices
Telecommuting is an attempt to address the issues of
o work-family balance
o stress on the environment because of long commutes
o the push for a more sustainable urban development plan
The major challenges in managing remote workers are
o Traditional methods of evaluating performance, especially those that rely on seeing
production happen, don't work as well. Managing remote workers requires greater trust in
the worker and more reliance on outcomes than on process.
o The setup costs of remote workers are higher up front, and there is debate over who should
be responsible for the infrastructure costs.
o Security can be lower, especially in remotely connecting to the network.
o Maintenance and support costs tend to be higher.
o New directions like cloud and virtualization offer new methods of remote computing not tied
to a specific device.