Introduction: Pervasive Computing and its Environment

The essence of that vision was the creation of environments saturated with computing and communication capability, yet gracefully integrated with human users. The most important characteristics of pervasive environments are:

Heterogeneity: Computing will be carried out on a wide spectrum of client devices, each with different configurations and functionalities.
Prevalence of "Small" Devices: Many devices will be small, not only in size but also in computing power, memory size, etc.
High Mobility: Users can carry devices from one place to another without interrupting their services.
User-Oriented: Services are tied to the user rather than to a specific device or a specific location.
Highly Dynamic Environment: Users and devices keep moving in and out of a volatile network.

Pervasive Computing (Ubiquitous Computing):

Pervasive computing integrates computation into the environment, rather than treating computers as distinct objects. It encompasses a wide range of research topics, including distributed computing, mobile computing, sensor networks, human-computer interaction, and artificial intelligence. The aim of ubiquitous computing is to design computing infrastructures in such a manner that they integrate seamlessly with the environment and become almost invisible.

Local Area Networks:

What is a network? It is simply two or more devices that communicate with one another over some type of electronic connection. The connection itself can be copper wire, fiber-optic cable, or radio waves. There are all sorts of networks in use today, including the broadcast and cable television networks, the public telephone network, several cellular telephone networks, and the Internet. A local area network (LAN) is a network of computers located physically close to one another. (The Internet, by the way, is a WAN, or wide area network, that connects millions of LANs.)
A LAN consists of two or more computers, each equipped with a communications device called a network interface or network adapter. The network interfaces are connected to one another by some type of communications medium, which provides a pathway for the electrical signals that connect all of the computers on a LAN. The most widely used, cost-effective, and highest-performance network medium in use today is twisted-pair Ethernet cable, often called CAT5 or CAT6 cable. (CAT is short for "category"; there are several grades of cable that can be used for Ethernet LANs.) A relatively new technology called wireless Ethernet uses radio signals instead of copper cable as the communications medium.
Network Topologies:
There are five different types of topologies: a) Bus b) Star c) Ring d) Mesh e) Tree. When a network is designed using multiple topologies, it is called a hybrid network; this concept is usually used in complex networks where a larger number of computer clients is required.

Bus Topology: Bus topology is one of the easiest topologies to install, and it does not require much cabling. It uses one common cable (the backbone) to connect all devices in the network in a linear shape. Bus-based networks work with only a limited number of devices: they perform fine as long as the computer count remains within about 12 to 15, but problems occur as the number of computers increases.

Ring Topology: Ring topologies are similar to bus topologies, except they transmit in one direction only, from station to station. Typically, a ring architecture will use separate physical ports and wires for transmit and receive. Token Ring is one example of a network technology that uses a ring topology.
Star Topology: This is the most commonly used network topology design you will come across in LAN computer networks. In a star, all computers are connected to a central device called a hub, router, or switch using Unshielded Twisted Pair (UTP) or Shielded Twisted Pair cables. In a star topology we require more connecting devices, such as routers and cables, unlike in a bus topology, where the entire network is supported by a single backbone.

Tree Topology: Just as the name suggests, the network design looks a little confusing and complex to understand at first, but with a good understanding of the Star and Bus topologies, Tree is very simple. A tree topology is basically a mixture of many star topology designs connected together using a bus. Devices like hubs can be connected directly to the tree bus, and each hub then acts as the root of a tree of network devices. Tree topology is very dynamic in nature, and it holds far better potential for network expandability than topologies like Bus and Star.
Mesh Topology: Mesh topology is designed around the concept of routing. Basically, it uses routers to choose the shortest path to the destination. In topologies like star and bus, a message is broadcast to the entire network and only the intended computer accepts it, but in a mesh the message is sent only toward the destination computer, finding its route with the help of routers. The Internet is based on a mesh topology. Routers play an important role here: they are responsible for routing each message to its destination address or computer. When every device is connected directly to every other device, it is known as a full mesh topology; when some devices are connected to each other only indirectly, it is called a partial mesh topology.
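The route-finding idea behind a mesh can be sketched as a shortest-path search over a graph of routers. This is only an illustrative sketch: the node names and link costs are invented, and real routing protocols (e.g. OSPF) distribute this computation across routers rather than running it centrally.

```python
from heapq import heappush, heappop

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: find the lowest-cost route between two nodes
    in a partial mesh described as {node: {neighbor: link_cost}}."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no route exists

# A small partial mesh: not every router is directly linked to every other.
mesh = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(shortest_path(mesh, "A", "D"))  # → (3, ['A', 'B', 'C', 'D'])
```

Note how the direct A-C link (cost 4) is bypassed in favor of the cheaper two-hop path, which is exactly the "choose the shortest distance" behavior described above.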
Router:
A router is a device that forwards data packets across computer networks. Routers perform the data "traffic directing" functions on the Internet. A router is a microprocessor-controlled device that is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table, it directs the packet to the next network on its journey. A data packet is typically passed from router to router through the networks of the Internet until it gets to its destination computer. Routers also perform other tasks, such as translating the data transmission protocol of the packet to the appropriate protocol of the next network, and preventing unauthorized access to a network through the use of a firewall.
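The routing-table lookup described above can be sketched as a longest-prefix match. The prefixes and next-hop names below are hypothetical, and the linear scan stands in for the optimized lookup structures real routers use.

```python
import ipaddress

def next_hop(routing_table, dst_ip):
    """Pick the most specific (longest-prefix) route that matches the
    packet's destination address, as a router's forwarding lookup does."""
    addr = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, hop in routing_table:
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, hop)
    return best[1] if best else None

table = [
    ("0.0.0.0/0", "isp-gateway"),     # default route: matches everything
    ("10.0.0.0/8", "core-router"),
    ("10.1.2.0/24", "branch-router"),
]
print(next_hop(table, "10.1.2.99"))   # → branch-router (most specific match)
print(next_hop(table, "192.0.2.1"))   # → isp-gateway  (only the default matches)
```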
Bridges:
A bridge device filters data traffic at a network boundary. Bridges reduce the amount of traffic on a LAN by dividing it into two segments. Bridges operate at the data link layer (Layer 2) of the OSI model. A bridge inspects incoming traffic and decides whether to forward or discard it. An Ethernet bridge, for example, inspects each incoming Ethernet frame - including the source and destination MAC addresses, and sometimes the frame size - when making individual forwarding decisions. Bridges serve a similar function to switches, which also operate at Layer 2. Traditional bridges, though, support one network boundary, whereas switches usually offer four or more hardware ports. Switches are sometimes called "multi-port bridges" for this reason.
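The forward-or-discard decision above can be sketched as a learning bridge: it learns which port each source MAC address lives on, then forwards frames only where needed. Port numbers and MAC addresses are invented for the illustration.

```python
class LearningBridge:
    """Minimal Layer-2 learning bridge: learn source MACs, forward frames
    to the learned port, flood when the destination is still unknown, and
    discard frames whose destination is on the same segment they came from."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                 # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn where the sender lives
        if dst_mac in self.mac_table:
            out = self.mac_table[dst_mac]
            return [] if out == in_port else [out]   # filter local traffic
        return sorted(self.ports - {in_port})        # flood unknown destination

bridge = LearningBridge(ports=[1, 2])
print(bridge.handle_frame(1, "aa:aa", "bb:bb"))  # unknown dst → flood: [2]
print(bridge.handle_frame(2, "bb:bb", "aa:aa"))  # learned dst → [1]
print(bridge.handle_frame(1, "cc:cc", "aa:aa"))  # dst on same segment → [] (discarded)
```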
Hub:
A hub is a small rectangular box, often made of plastic, that receives its power from an ordinary wall outlet. A hub joins multiple computers (or other network devices) together to form a single network segment. On this network segment, all computers can communicate directly with each other. Ethernet hubs are by far the most common type, but hubs for other types of networks, such as USB, also exist. A hub includes a series of ports that each accept a network cable. A small hub networks four computers: it contains four or sometimes five ports, the fifth being reserved for an "uplink" connection to another hub or similar device. Larger hubs contain eight, 12, 16, or even 24 ports.
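By contrast with a bridge, a hub makes no forwarding decision at all; every incoming signal is simply repeated out of every other port. A minimal sketch (port numbers are arbitrary):

```python
def hub_repeat(ports, in_port):
    """A hub is essentially a repeater: every incoming frame goes out of
    every other port, so all attached computers share one segment."""
    return sorted(p for p in ports if p != in_port)

print(hub_repeat([1, 2, 3, 4], in_port=2))  # → [1, 3, 4]
```

This is why hub-based segments scale poorly: all stations see (and contend with) all traffic, whereas a bridge or switch filters it.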
Wi-Fi Standards:
802.11 is the original protocol for wireless LANs, with a data rate of up to 2 Mbps in the 2.4 GHz band. The second family is 802.11a, which has a data rate of up to 54 Mbps in the 5 GHz band; it is high speed and supports multimedia voice, video, and large-image applications in a crowded network better than 802.11b. The next suite is 802.11b, an extension of the original 802.11 standard with a data rate of up to 11 Mbps in the 2.4 GHz band; it is capable of covering a wider area than 802.11a with fewer access points. The 802.11g suite, which is compatible with 802.11b and was expected to replace it, was released in 2003; it has a data rate of up to 54 Mbps in the 2.4 GHz band, and it also improved the speed of communications and the security of the wireless LAN. More recently, a new standard merged the previous three standards with another five less-used standards (d, e, h, i, and j) into one suite called the IEEE 802.11-2007 standard. Two advanced standards awaiting release at the time were 802.11n and 802.11s. The main advantage of 802.11n is that it supports multiple-input and multiple-output (MIMO), which allows receivers and transmitters to have multiple antennas to increase the performance of communications; it was predicted to reach a data rate of up to 500 Mbps. Work on the 802.11s standard started in 2003; its main purpose is to let devices work in a mesh network, which uses nodes to find paths to data even if some network devices are missing or broken. [1, 2, 5, 6]
How it works
The Wi-Fi network is a wireless network that uses radio waves. There are some similarities between the radios used for a Wi-Fi network and those used for TVs, mobile phones, and walkie-talkies: they can send and receive radio waves, and they can convert 1's and 0's into radio waves and vice versa. On the other hand, Wi-Fi radios transmit at frequencies of 2.4 GHz or 5 GHz, which are higher than the frequencies used for cell phones, walkie-talkies, and TVs.

Basically, the wireless network is constructed from two units. The first is the wireless transmitter (wireless adapter), which can be either built in or plugged into a PC card slot or USB port. The second unit, which is more important, is the wireless router; it contains five parts: a port to connect to your cable or DSL modem, a router, an Ethernet hub, a firewall, and a wireless access point. The network works in the following way. First, the data from the computer is converted into radio signals by the wireless adapter and transmitted to the wireless router using the antenna. After that, the router takes the incoming signals and decodes them. Finally, the information is sent to the Internet using a physical, wired Ethernet connection. The process also operates in reverse: information received by the router from the Internet is translated into radio signals and sent to the wireless adapter, which in turn converts them into data used by the computer. Because of the wireless adapters, the router can be used by many devices to connect to the Internet.

As mentioned above, Wi-Fi radios can transmit on either of two frequency bands, 2.4 GHz and 5 GHz. In addition, they can alternate (hop) very quickly between different frequencies. This frequency hopping reduces interference and allows multiple devices to use the same connection concurrently. For security, there are different methods to keep a wireless network private, so that nobody can use someone else's signal or network.
One common method is MAC (Media Access Control) address filtering, which doesn't use a password to allow users to access the network. Instead, it utilizes the MAC address. Every computer has a unique MAC address, and only machines whose MAC addresses are on the router's list are permitted to access the network. When installing the router, the allowed addresses need to be specified. (Note that MAC addresses can be observed and spoofed, so filtering is best used alongside encryption rather than as the only protection.)
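The allow-list check can be sketched as follows. The MAC addresses are invented, and this illustrates only the concept, not any particular router's firmware.

```python
# Hypothetical allow-list, as configured on the router during installation.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",
    "00:1a:2b:3c:4d:5f",
}

def admit(client_mac):
    """Admit a client only when its MAC address appears on the allow-list.
    Addresses are normalized to lowercase before comparison."""
    return client_mac.lower() in ALLOWED_MACS

print(admit("00:1A:2B:3C:4D:5E"))   # → True  (listed device)
print(admit("de:ad:be:ef:00:01"))   # → False (unknown device rejected)
```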
Network Scalability
Terabit network applications are characterized by unpredictable client traffic demands combined with stringent Quality-of-Service (QoS) requirements. Traditionally, traffic planners could consider capacity growth in three-, five-, and ten-year increments. Today, the rapid and explosive growth of web video, mobile messaging, Wi-Fi, and WiMAX applications means time frames as short as six months must also be considered. Thus, graceful scalability is a prime terabit network requirement.
Multi-Protocol Support
As new services proliferate, terabit network operators are looking to new "de-layered" and transparent network infrastructures to support all customer services across all customer locations, while providing reduced transmission and operations overhead for a variety of protocols.
Figure 1. Layered Terabit Network Service Architecture Overview

CON packet forwarding overhead is greatly reduced through the use of Multi-Protocol Label Switching (MPLS) technology. Internet Protocol (IP) packets have a field in their header containing the address to which the packet is to be routed. Traditional routing networks process this information at every router in a packet's path through the network. Using MPLS, however, when a data packet enters the first router, the header analysis is done just once and a new label is attached to the packet. Subsequent CON MPLS routers can then forward the packet by inspecting only the new label. In MPLS terminology, the CON routers are classified into two categories: high-performance packet classifiers called Edge Routers or Label Edge Routers (LERs), which apply (and remove) the requisite MPLS labels, and core routers that perform routing based only on label switching, also called Label Switch Routers (LSRs). MPLS technology supports both traffic prioritization and QoS, and it can be used to carry many different kinds of traffic, including IP packets, ATM, SONET, and Ethernet. IP will likely be the near-universal technology used to implement the service layer, and Dense Wavelength Division Multiplexing (DWDM) will be used to increase bandwidth over existing fiber-optic backbones. Finally, the CON will link to the service platform, which will in turn support execution of a variety of distributed applications, network management processes, signaling and control functions, and access to a diversity of information content types.
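The label-swapping idea can be sketched with per-router tables that map an incoming label to a next hop and an outgoing label. Router names and label values are invented for the illustration; real LSRs perform this lookup in hardware.

```python
# Per-router label-forwarding tables: incoming label -> (next hop, outgoing label).
# After the ingress LER attaches the first label, no router re-reads the IP header.
LSR_TABLES = {
    "LSR1": {17: ("LSR2", 22)},
    "LSR2": {22: ("LER-egress", 3)},
}

def forward(router, label, hops=None):
    """Follow a labeled packet through the core by label swapping alone,
    returning the sequence of routers it traverses."""
    hops = hops or []
    table = LSR_TABLES.get(router)
    if table is None or label not in table:
        return hops + [router]            # no entry: we have reached the egress
    next_hop, out_label = table[label]    # swap the label, move to the next hop
    return forward(next_hop, out_label, hops + [router])

print(forward("LSR1", 17))  # → ['LSR1', 'LSR2', 'LER-egress']
```

The point of the sketch: each hop does a single small-table lookup on the label, which is exactly why MPLS forwarding is cheaper than re-analyzing the IP header at every router.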
Traffic Grooming
Multiplexing frames at network ingress points compromises efficiency when the network has many entry points. Accommodating frames inside faster and longer frames requires a trade-off between load flexibility and efficient use of link capacity. Newer optical grooming technologies support traffic flows that minimize the number of add/drop operations. Admission control enables client traffic to be controlled based on a mutually agreed-upon Service Level Agreement (SLA). Traffic management depends on queuing and scheduling procedures for the incoming traffic flows that were authorized by admission control. LCAS/VC offers network providers flexibility inside virtual circuits to accommodate client traffic fluctuations and the add/drop of circuits without changing the network's physical structure.
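Admission control against an agreed SLA is commonly implemented with a token-bucket policer; the sketch below uses invented rate and burst figures and is not tied to any specific grooming product.

```python
class TokenBucket:
    """Token-bucket policer: a client may burst up to `capacity` units,
    with tokens refilled at `rate` units per second per the agreed SLA."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity      # bucket starts full
        self.last = 0.0

    def admit(self, now, size):
        # Refill according to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False                # out-of-profile traffic: drop or mark it

bucket = TokenBucket(rate=100.0, capacity=200.0)  # e.g. 100 units/s, burst of 200
print(bucket.admit(0.0, 150))   # → True  (within the burst allowance)
print(bucket.admit(0.1, 150))   # → False (only 60 tokens remain)
print(bucket.admit(1.0, 150))   # → True  (bucket has refilled)
```

Traffic admitted here would then be handed to the queuing and scheduling procedures the text mentions.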
Carriers have traditionally used network management systems (NMSs) to implement optical connections from one location to another. Turn-around time to provision a new connection can take as long as six weeks, and the configuration process can take several hours, especially if more than one carrier is involved. While this may be acceptable for LHNs, where the end nodes are cities and change infrequently, it is by no means responsive enough for MAN solutions, where end nodes are enterprise branches or connections between enterprises. Optical links to support MANs require a dynamic automated provisioning system that offers short turnaround times, flexible scalability, and fine traffic granularities, and is amenable to frequent changes. Recently, dynamic provisioning protocols have emerged that let carriers establish connections not only within a single carrier's territory but also across multiple carriers on an end-to-end basis.
Ubiquitous computing
Ubiquitous computing is giving architecture many benefits that we will continue to see embedded in our buildings. Ubiquitous computing is the wave of the future, providing us with many new architectural functions as well as challenges. For now, let's focus on the benefits. The following are the top seven benefits brought about by ubiquitous computing as they impact architecture and occupants in everyday life:
1) INVISIBLE: Smart environments will be embedded with computing technologies that will be mostly out of sight. Architecture will gain many more capabilities with less visual clutter.
2) SOCIALIZATION: Interactions with architecture will be more social in nature. Smart buildings will elicit a more social response from occupants as computer user interfaces embed themselves within architecture.
3) DECISION-MAKING: Smart environments will help occupants make better choices as they go about their everyday lives. At key moments within architectural experiences, a good architectural design will make smart environments helpful. Such architecture will be proactive rather than passive.
4) EMERGENT BEHAVIOR: Buildings are becoming more and more kinetic in form and function. Their movements and constructed designs come together dynamically to yield behaviors that make them more adaptive. Buildings will learn how to learn in order to run efficiently and aesthetically.
5) INFORMATION PROCESSING: Since architecture will be gaining a type of nervous system, information processing will gain a whole new meaning. Architecture will go from crunching data to making sense of data, thereby eliminating our need to constantly input adjustments.
6) ENHANCING EXPERIENCE: As computers ubiquitously embed themselves in our environments, sensors and actuators will create smart environments where architectural space is goal oriented. Therefore, more occupant needs will be better met.
7) CONVERGENCE: Much of our environment will be supplemented with interconnected digital technologies. Such interconnectivity will allow for a new type of sharing that will serve to eliminate many mundane tasks. Also, fewer errors will occur as systems pull data from shared digital locations (instead of having numerous copies to keep up to date).
What Is Ubiquitous Computing?
The word "ubiquitous" can be defined as "existing or being everywhere at the same time," "constantly encountered," and "widespread." When applying this concept to technology, the term ubiquitous implies that technology is everywhere and we use it all the time. Because of the pervasiveness of these technologies, we tend to use them without thinking about the tool. Instead, we focus on the task at hand, making the technology effectively invisible to the user. Ubiquitous technology is often wireless, mobile, and networked, making its users more connected to the world around them and the people in it.

Why Is Ubiquitous Computing Important?
Ubiquitous computing is changing our daily activities in a variety of ways. When it comes to using today's digital tools, users tend to:
- communicate in different ways
- be more active
- conceive and use geographical and temporal spaces differently
- have more control

In addition, ubiquitous computing is:
- global and local
- social and personal
- public and private
- invisible and visible
- an aspect of both knowledge creation and information dissemination
Ambient intelligence (computing):
In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. Ambient intelligence is a vision of the future of consumer electronics, telecommunications, and computing that was originally developed in the late 1990s for the time frame 2010 to 2020. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks, and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices (see Internet of Things). As these devices grow smaller, more connected, and more integrated into our environment, the technology disappears into our surroundings until only the user interface remains perceivable by users. The ambient intelligence paradigm builds upon pervasive computing, ubiquitous computing, profiling practices, and human-centric computer interaction design, and it is characterized by systems and technologies that are:
- embedded: many networked devices are integrated into the environment
- context aware: these devices can recognize you and your situational context
- personalized: they can be tailored to your needs
- adaptive: they can change in response to you
- anticipatory: they can anticipate your desires without conscious mediation
Ambient intelligence is closely related to the long term vision of an intelligent service system in which technologies are able to automate a platform embedding the required devices for powering context aware, personalized, adaptive and anticipatory services.
Overview
Figure: An (expected) evolution of computing from 1960 to 2010.

More and more people make decisions based on the effect their actions will have on their own inner, mental world. This experience-driven way of acting is a change from the past, when people were primarily concerned with the use value of products and services, and it is the basis for the experience economy. Ambient intelligence addresses this shift in existential view by emphasizing people and user experience. Interest in user experience also grew in the late 1990s because of the overload of products and services in the information society that were difficult to understand and hard to use. A strong call emerged to design things from the user's point of view. Ambient intelligence is influenced by user-centered design, where the user is placed at the center of the design activity and asked to give feedback through specific user evaluations and tests to improve the design, or even to co-create the design together with the designer (participatory design) or with other users (end-user development).

In order for AmI to become a reality, a number of key technologies are required:
- Unobtrusive hardware (miniaturisation, nanotechnology, smart devices, sensors, etc.)
- Seamless mobile/fixed communication and computing infrastructure (interoperability, wired and wireless networks, service-oriented architecture, semantic web, etc.)
- Dynamic and massively distributed device networks that are easy to control and program (e.g. service discovery, auto-configuration, end-user programmable devices and systems)
- Human-centric computer interfaces (intelligent agents, multimodal interaction, context awareness, etc.)
- Dependable and secure systems and devices (self-testing and self-repairing software, privacy-ensuring technology, etc.)
Example scenario
Ellen returns home after a long day's work. At the front door she is recognized by an intelligent surveillance camera, the door alarm is switched off, and the door unlocks and opens. When she enters the hall, the house map indicates that her husband Peter is at an art fair in Paris and that her daughter Charlotte is in the children's playroom, where she is playing with an interactive screen. The remote child-surveillance service is notified that she is at home, and the on-line connection is subsequently switched off. When she enters the kitchen, the family memo frame lights up to indicate that there are new messages. The shopping list that has been composed needs confirmation before it is sent to the supermarket for delivery. There is also a message notifying her that the home information system has found new information on the semantic Web about economical holiday cottages with a sea view in Spain. She briefly connects to the playroom to say hello to Charlotte, and her video picture automatically appears on the flat screen that Charlotte is currently using. Next, she connects to Peter at the art fair in Paris. He shows her, through his contact lens camera, some of the sculptures he intends to buy, and she confirms his choice. In the meantime she selects one of the displayed menus that indicate what can be prepared with the food currently available in the pantry and the refrigerator. Next, she switches to the video-on-demand channel to watch the latest news program. Through the "follow me" facility she switches over to the flat screen in the bedroom, where she is going to have her personalized workout session. Later that evening, after Peter has returned home, they chat with a friend in the living room with their personalized ambient lighting switched on. They watch the virtual presenter that informs them about the programs and the information that the home storage server has recorded earlier that day.
Users will neither understand nor accept messages like 'server currently down for maintenance': if a service is not available when they need it, they will assume that it does not work, and will stop using the application or switch to another service provider. Both issues can be resolved by system topologies that employ parallelism and redundancy to guarantee scalability and availability. An example of such a topology is shown in Figure 1.
Figure 1. Scalability and availability can be achieved by running multiple instances of every component that might become a bottleneck.

Typically the gateways perform tasks that require significant computing power. WAP gateways, for example, may have to execute the WTLS protocol in the direction of the clients, and the SSL protocol in the direction of the servers, for many parallel sessions, requiring computation-intensive decryption and encryption of data. Voice gateways use voice-recognition engines and thus require even more computing power. A scalable system will use a cluster of gateways for each device type, to which additional machines can be added as required.

From the various gateways, a potentially large number of requests flow to the servers that host pervasive computing Web applications. Typically, a network dispatcher is used to route incoming requests to the appropriate servers, balancing the load between them. To support efficient handling of HTTPS, the dispatchers support a mode in which requests originating from a particular client are always sent to the same server, to avoid repeating SSL handshakes. To assure high availability, pairs of network dispatchers can be used, in which one is active and a back-up monitors the heartbeat of the active dispatcher to take over if a failure occurs.

To allow for central authentication, authorization, and enforcement of access policies, authentication proxies are used, located in the demilitarized zone between two firewalls, so that all incoming requests can flow to the application servers only via the authentication proxies. They check each incoming request to see whether the client from which it originates is already known, and whether it is allowed to access the desired target function of the Web application according to a centrally defined policy. To do so, the proxy needs access to the credentials required for authentication and to the policies for authorization. If a request from a new client arrives, the authentication proxy performs client authentication before letting any request pass through to the application servers. An authentication proxy may consume significant computing power, e.g. when SSL server authentication has to be performed for a large number of sessions; thus a cluster of authentication proxies is required for larger systems. Requests initiated by authenticated clients flow from the authentication proxies to the application servers behind the inner firewall. The application code and the presentation functions that make up the Web application front end run on these servers. Here, the requests coming from the clients are received and processed. To implement a scalable Web application, a cluster of application servers is usually used, to which additional machines can be added when the load increases. Typically, the front end of a Web application interacts with a back end that hosts persistent data and/or legacy systems.
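The dispatcher's session-affinity mode can be sketched as follows. The hashing scheme and server names are assumptions made for the illustration; real dispatchers typically also track server health and load.

```python
import hashlib

class StickyDispatcher:
    """Network-dispatcher sketch: requests from the same client always go
    to the same back-end server, so the SSL handshake happens only once."""

    def __init__(self, servers):
        self.servers = servers
        self.affinity = {}               # client id -> chosen server

    def route(self, client_id):
        if client_id not in self.affinity:
            # First request from this client: pick a server by hashing its id.
            digest = hashlib.sha256(client_id.encode()).digest()
            self.affinity[client_id] = self.servers[digest[0] % len(self.servers)]
        return self.affinity[client_id]

dispatcher = StickyDispatcher(["app1", "app2", "app3"])
first = dispatcher.route("client-42")
# Every later request from client-42 lands on the same server:
assert all(dispatcher.route("client-42") == first for _ in range(10))
```

A back-up dispatcher taking over, as described above, would need to rebuild or share this affinity table to preserve stickiness across a failover.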
Pervasive computing applications, however, add an additional level of complexity. As devices are very different from each other, one cannot assume that one controller will fit all device classes. In the MVC pattern, the controller encapsulates the dialog flow of an application, which will be different for different classes of devices, such as WAP phones, voice-only phones, PCs, or PDAs. Thus, we need a different controller for each class of devices. To support multiple controllers, we reduce the servlet's role to that of a simple dispatcher that invokes the appropriate controller depending on the type of device being used. To avoid duplicating the code that invokes model functions across controllers, we employ the command pattern. In our case, a command is a bean with input and output properties. An invoker of a command sets the input properties for the command and then executes the command. After the command has been executed, the result can be obtained by getting the command's output properties. Instead of invoking model functions directly, the controllers create and execute command beans that encapsulate the code for model invocation. To invoke a view JSP, the controller puts the executed command into the request object or the session object associated with the request, depending on the desired lifetime. As commands are beans, their output can easily be accessed and displayed within a JSP, as shown in Figure 2.
Figure 2
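The command pattern described above can be sketched as follows (in Python rather than Java beans; the command name, its properties, and the fake back-end are invented for the illustration):

```python
class GetAccountBalanceCommand:
    """Command-pattern sketch in the bean style the text describes:
    set input properties, execute, then read the output properties."""

    def __init__(self):
        self.account_id = None       # input property
        self.balance = None          # output property

    def execute(self):
        # A real command would encapsulate the model / back-end call here.
        fake_backend = {"acct-1": 125.50}
        self.balance = fake_backend.get(self.account_id, 0.0)

# Any controller (one per device class) invokes the command the same way,
# so the model-invocation code is written once rather than per controller:
cmd = GetAccountBalanceCommand()
cmd.account_id = "acct-1"            # set inputs
cmd.execute()                        # run
print(cmd.balance)                   # → 125.5 (read outputs; a view would render this)
```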
Incoming requests indicate the device type, the desired language, and the desired reply content type, e.g. HTML, WML, or VoiceXML. Examples of gateways in the device connectivity layer are voice gateways with remote VoiceXML browsers, WAP gateways, and gateways for connecting PDAs. An important function that the device connectivity layer must provide is support for session cookies, to allow the application server to associate a session with the device. The secure access component is the only system component allowed to invoke application functions. It checks all incoming requests and calls application functions according to security policies stored in a database or directory. A particular security state (part of the session state) is reached by authentication of the client using user ID and password, public-key client authentication, or authentication with a smart card, for example. If the requirements for permissions defined in the security policy are met by the current security state of a request's session, then the secure access layer invokes the requested application function, e.g. a function that accesses a database and returns a bean. Otherwise, the secure access component can redirect the user to the appropriate authentication page. Typically, the secure access component will be implemented as an authentication proxy within a demilitarized zone, as shown earlier. Finally, the output generated by the application logic is delivered back to the user in a form appropriate for the device he or she is using. In the figure, the information to be displayed is prepared by the application logic and passed to the content-delivery module encapsulated in beans. The content-delivery module then extracts the relevant part of the information from the bean and renders it into content that depends on the device type and desired reply content type, for example by calling appropriate JSPs.
The content-delivery module delivers the content generated in the previous step via the device connectivity infrastructure, which converts canonical responses (HTTP responses) to device-specific responses using appropriate gateways. For example, if a user accesses the system via a telephone, the voice gateway receives the HTTP response with VoiceXML content and leads an appropriate 'conversation' with the user, finally resulting in a new request being sent to the server.
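The content-delivery step, rendering one payload into a device-appropriate content type, can be sketched as follows. The templates and the device-to-markup mapping are invented stand-ins for the JSPs the text describes.

```python
def render_reply(device_type, message):
    """Content-delivery sketch: render the same payload into markup suited
    to the requesting device, returning (content type, body)."""
    if device_type == "phone":        # WAP phone → WML deck
        return ("text/vnd.wap.wml",
                f"<wml><card><p>{message}</p></card></wml>")
    if device_type == "voice":        # telephone via voice gateway → VoiceXML
        return ("application/voicexml+xml",
                f"<vxml><form><block>{message}</block></form></vxml>")
    return ("text/html",              # default: PC browser → HTML
            f"<html><body><p>{message}</p></body></html>")

content_type, body = render_reply("voice", "You have new mail")
print(content_type)   # → application/voicexml+xml
```

The gateway then converts this canonical HTTP response into the device-specific interaction, e.g. the spoken 'conversation' mentioned above.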