
Virtual private LAN service

Virtual private LAN service (VPLS) is a technology that makes it possible to connect local area networks (LANs) over the Internet, so that they appear to subscribers like a single Ethernet LAN. A VPLS uses multiprotocol label switching (MPLS) to create the appearance of a virtual private network (VPN) at each subscriber location. A VPLS moves each subscriber's Ethernet packets seamlessly to other locations by tunneling them through the provider network, independent of traffic from other Internet users. Fault-tolerance ensures that each packet arrives intact at its intended destination. A VPLS is easy to use because subscribers do not have to connect directly to the Internet. Instead, they connect as if to an Ethernet network. A VPLS can provide point-to-point and multipoint services, as well as any-to-any capability. It is possible to build a VPLS over a wide geographic area, and the technology allows for subscribers to change locations easily. The service is also scalable. A VPLS can serve anywhere from a few subscribers up to hundreds of thousands.
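The MPLS labels that carry subscriber frames through the provider network are encoded in a 4-byte header: a 20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, and 8-bit TTL. As a rough sketch of how such a header is laid out (illustrative only; real VPLS equipment does this in hardware), it can be packed and unpacked like this:

```python
import struct

def pack_mpls(label: int, tc: int, s: int, ttl: int) -> bytes:
    """Pack an MPLS header: 20-bit label, 3-bit TC, 1-bit S flag, 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)  # 4 bytes, network byte order

def unpack_mpls(data: bytes):
    """Recover (label, tc, s, ttl) from a 4-byte MPLS header."""
    (word,) = struct.unpack("!I", data)
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

hdr = pack_mpls(label=100, tc=0, s=1, ttl=64)
print(unpack_mpls(hdr))  # (100, 0, 1, 64)
```

A provider edge router pushes one such label per tunnel, so traffic from different subscribers stays separated even though it crosses the same backbone links.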

Nanotechnology

A nanometer is one billionth of a meter. If you blew up a baseball to the size of the Earth, its atoms would become visible, each about the size of a grape. Some three to four atoms fit lined up inside a nanometer. Nanotechnology is about building things atom by atom, molecule by molecule. The trick is to be able to manipulate atoms individually and place them exactly where you wish on a structure. Nanotechnology uses well-known physical properties of atoms and molecules to make novel devices with extraordinary properties. The anticipated payoff for mastering this technology is beyond any human accomplishment thus far. Nature itself uses molecular machines to create life. Scientists from several fields, including chemistry, biology, physics, and electronics, are driving towards the precise manipulation of matter on the atomic scale. How do we get to nanotechnology? Several approaches seem feasible; ultimately, a combination of them may be the key. The goal of early nanotechnology is to produce the first nano-sized robot arm capable of assembling atoms and molecules into a useful product, or into copies of itself. Nanotechnology finds applications in nanotubes, in nanomedicine, and so on. Soon there could be trillions of assemblers, controlled by nano supercomputers and working in parallel, assembling objects quickly. Ultimately, with atomic precision, everything could be made. It is all a matter of software.

JIRO

Jiro (pronounced "gyro," as in gyroscope) is a Sun product that provides the infrastructure and plumbing for building distributed applications. Distributed services such as event handling, logging, transaction management, and security, as well as basic communication services for distributed objects, are included. It is written in Java and uses the Jini naming and lookup services. It is a product implementation of the Federated Management Architecture (FMA), a three-tier architecture consisting of clients, management services, and managed resources. This technology provides a middle layer of components and services that facilitates connectivity between managed resources and management applications. These resources can be anywhere on the corporate network, connected with any combination of routers and hubs, and can use any standard management protocol, such as the Simple Network Management Protocol (SNMP). The code is platform-independent, requiring only a Java Virtual Machine. The Jiro management layer supplies basic management functions such as fault notification, scheduling, distributed logging, and transaction rollback. A product based on Jiro technology is an implementation of the FMA Specification, which describes extensions to the Java language environment. The FMA Specification defines a component model called the FederatedBeans model. The success of the initiative surrounding Jiro technology will depend on the availability of a wide variety of FederatedBeans components from different suppliers. The FMA platform supports automated communication between networked Java Virtual Machines, thus promoting applications that are federations of the constituent components.
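The three-tier idea described above — management clients talking to a shared middle tier that fronts the managed resources — can be illustrated with a toy sketch. This is plain Python, not the actual Jiro/FMA API; every class and method name here is invented for illustration:

```python
class ManagedResource:
    """Bottom tier: a device reachable over some management protocol."""
    def __init__(self, name):
        self.name = name
        self.status = "up"

class ManagementService:
    """Middle tier: shared services (logging, fault events) for all clients."""
    def __init__(self):
        self.log = []
        self.listeners = []

    def register(self, listener):
        self.listeners.append(listener)

    def report_fault(self, resource, message):
        self.log.append((resource.name, message))   # distributed logging
        for notify in self.listeners:               # fault notification
            notify(resource.name, message)

# Top tier: a management client subscribes to fault events via the middle tier.
alerts = []
svc = ManagementService()
svc.register(lambda name, msg: alerts.append(f"{name}: {msg}"))

router = ManagedResource("router-1")
router.status = "down"
svc.report_fault(router, "link failure")
print(alerts)  # ['router-1: link failure']
```

The point of the middle tier is that the client never talks to the resource directly; logging and notification happen once, in one place, for every client.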

Data warehousing

A data warehouse (DW) is a database used for reporting. The data is offloaded from the operational systems for reporting, and may pass through an operational data store for additional operations before it is used in the DW. A data warehouse organizes its functions in three layers: staging, integration, and access. The staging layer stores raw data for use by developers (analysis and support). The integration layer integrates the data and provides a level of abstraction from users. The access layer is for getting data out to users.
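The three layers form a pipeline: raw records land in staging, are cleansed and standardized in the integration layer, and are exposed as a simplified reporting view in the access layer. A toy sketch (the field names are hypothetical):

```python
# Staging layer: raw records exactly as offloaded from the operational system.
staging = [
    {"cust": " Alice ", "amount": "120.50", "date": "2011-03-01"},
    {"cust": "BOB",     "amount": "80.00",  "date": "2011-03-02"},
]

# Integration layer: cleanse and standardize, hiding raw quirks from users.
integration = [
    {"customer": r["cust"].strip().title(),
     "amount": float(r["amount"]),
     "date": r["date"]}
    for r in staging
]

# Access layer: a user-facing view, e.g. total sales per customer.
access = {}
for row in integration:
    access[row["customer"]] = access.get(row["customer"], 0.0) + row["amount"]

print(access)  # {'Alice': 120.5, 'Bob': 80.0}
```

Report writers only ever query the access layer; the messy source formats stay behind the integration boundary.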

Kerberos

Kerberos was developed at the Massachusetts Institute of Technology (MIT) during a project intended to integrate computers into the university's undergraduate curriculum. The project, called Athena, started in 1983 with UNIX timesharing computers, each with several terminals connected to it but without a network connection. If a student or staff member wanted to use any of the computers, he or she sat down at one of these terminals. As the terminals and old computers were replaced by newer workstations with network connections, the project's goal became to allow any user to sit down at the workstation of his or her choice and access his or her data over the network (a very common scenario in every network today). The problem of network eavesdropping then became apparent: since the network was accessible from all over the campus, nothing prevented students from running network monitoring tools and learning other users' passwords, or even the root passwords. Another big problem was that some PC/ATs lacked even fundamental internal security. Kerberos was invented to protect users' data in the network environment as it had been protected in the timesharing environment. Kerberos is an authentication system that uses symmetric-key cryptography to protect sensitive information on an open network. It is a ticket-based system that issues a ticket encrypted with the user's password when he or she logs in. The user decrypts the ticket and uses it to obtain tickets for other network services he or she wants to use. Because all information in tickets is encrypted, it is not susceptible to eavesdropping or misappropriation. MIT developed Kerberos to protect network services provided by Project Athena. The protocol was named after the Greek mythological character Kerberos (or Cerberus), the monstrous three-headed guard dog of Hades.

Process migration

A process is an operating system abstraction representing an instance of a running computer program, and is a key concept in operating systems. It consists of data, a stack, register contents, and state specific to the underlying Operating System (OS), such as parameters related to process, memory, and file management. A process can have one or more threads of control. Threads, also called lightweight processes, have their own stack and register contents, but share a process's address space and some of the operating-system-specific state, such as signals. The task concept was introduced as a generalization of the process concept, whereby a process is decoupled into a task and a number of threads; a traditional process is represented by a task with one thread of control. When a machine becomes overloaded, it may abort running processes to stabilize itself. Instead of aborting processes, process migration transfers active processes to another machine.
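The distinction above — threads having private stacks and registers but a shared address space — is easy to demonstrate: several threads of one process can update the same object directly, which separate processes could not do without explicit inter-process communication. A minimal Python sketch:

```python
import threading

counter = {"value": 0}          # lives in the process's shared address space
lock = threading.Lock()

def worker(n):
    # Each thread has its own stack (locals like n and the loop variable),
    # but all of them see and mutate the same shared `counter` object.
    for _ in range(n):
        with lock:              # shared state needs synchronization
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 4000
```

Migrating such a process to another machine means capturing exactly this state — address space, per-thread stacks and registers, and OS-specific state — and reconstructing it remotely.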
Wireless USB

Wireless USB is the first high-speed personal wireless interconnect. Wireless USB will build on the success of wired USB, bringing USB technology into the wireless future. Usage will be targeted at PCs and PC peripherals, consumer electronics, and mobile devices. To maintain the same usage model and architecture as wired USB, the Wireless USB specification is being defined as a high-speed host-to-device connection. This will enable an easy migration path for today's wired USB solutions.

Intrusion Prevention Systems

Intrusion Prevention Systems (IPS), also known as Intrusion Detection and Prevention Systems (IDPS), are network security appliances that monitor network and/or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about it, attempt to block or stop it, and report it. Intrusion prevention systems are considered extensions of intrusion detection systems because both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively prevent or block intrusions that are detected. More specifically, an IPS can take such actions as sending an alarm, dropping the malicious packets, resetting the connection, and/or blocking traffic from the offending IP address. An IPS can also correct Cyclic Redundancy Check (CRC) errors, defragment packet streams, prevent TCP sequencing issues, and clean up unwanted transport- and network-layer options.

Wireless Mesh Networks

Wireless Mesh Networks (WMNs) have emerged recently. In WMNs, nodes comprise mesh routers and mesh clients. Each node operates not only as a host but also as a router, forwarding packets on behalf of other nodes that may not be within direct wireless range of their destinations.
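Because every mesh node also acts as a router, a packet can reach a destination several radio hops away by being relayed node-to-node. A toy sketch of such multi-hop forwarding, using breadth-first search as a stand-in for a real mesh routing protocol (the topology here is invented):

```python
from collections import deque

# Adjacency list: which nodes are within radio range of each other.
mesh = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"],
}

def route(src, dst):
    """Find a hop-by-hop path; every intermediate node relays the packet."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in mesh[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # destination unreachable

print(route("A", "D"))  # ['A', 'B', 'C', 'D'] -- B and C relay the packet
```

A and D are out of each other's radio range, yet the packet gets through because the intermediate nodes act as routers rather than mere hosts.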

Genetic programming

Genetic programming (GP) is an automated methodology, inspired by biological evolution, for finding computer programs that best perform a user-defined task. It is therefore a machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John Koza (1992). Computer programs in GP can be written in a variety of programming languages. In the early (and traditional) implementations of GP, program instructions and data values were organized in tree structures, thus favoring the use of languages that naturally embody such a structure (an important example pioneered by Koza is Lisp). Other forms of GP have been suggested and successfully implemented, such as the simpler linear representation, which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code to achieve better performance. In contrast, MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language. GP is very computationally intensive, and so in the 1990s it was mainly used to solve relatively simple problems. More recently, however, thanks to various improvements in GP technology and to the well-known exponential growth in CPU power, GP has started delivering a number of outstanding results. At the time of writing, nearly 40 human-competitive results have been gathered, in areas such as quantum computing, electronic design, game playing, sorting, searching, and many more.
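The core GP loop — evaluate a population of random programs, select the fitter ones, and vary them by crossover and mutation — can be sketched in miniature. The toy below evolves arithmetic expression trees toward the target x*x + x on a few sample points (a deliberately simplified sketch: tiny population, only +, -, * as the function set):

```python
import random
random.seed(1)

FUNCS = ["+", "-", "*"]
TERMS = ["x", 1.0, 2.0]
POINTS = [-2.0, -1.0, 0.0, 1.0, 2.0]          # fitness cases
target = lambda x: x * x + x

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(FUNCS), rand_tree(depth - 1), rand_tree(depth - 1)]

def run(tree, x):
    """Interpret an expression tree at point x."""
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, a, b = tree[0], run(tree[1], x), run(tree[2], x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def error(tree):  # lower is fitter
    return sum(abs(run(tree, x) - target(x)) for x in POINTS)

def mutate(tree):  # replace a random subtree with a fresh random one
    if not isinstance(tree, list) or random.random() < 0.3:
        return rand_tree(2)
    i = random.choice([1, 2])
    t = list(tree); t[i] = mutate(t[i]); return t

def crossover(a, b):  # graft a random subtree of b into a
    if not isinstance(a, list) or random.random() < 0.3:
        return b if not isinstance(b, list) else random.choice(b[1:])
    i = random.choice([1, 2])
    t = list(a); t[i] = crossover(t[i], b); return t

pop = [rand_tree() for _ in range(60)]
for gen in range(30):
    pop.sort(key=error)
    survivors = pop[:20]                       # truncation selection + elitism
    pop = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(40)
    ]
best = min(pop, key=error)
print(error(best))  # elitism keeps the best program, so this never worsens
```

Real GP systems differ mainly in scale (populations of thousands, tournament selection, typed function sets), not in the shape of this loop.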
These results include the replication or infringement of several post-year-2000 inventions, and the production of two patentable new inventions. Developing a theory for GP has been very difficult, and so in the 1990s genetic programming was considered something of a pariah among search techniques. However, after a series of breakthroughs in the early 2000s, the theory of GP has undergone a formidable and rapid development, so much so that it has been possible to build exact probabilistic models of GP (schema theories and Markov chain models) and to show that GP is more general than, and in fact includes, genetic algorithms. Genetic programming techniques have now been applied to evolvable hardware as well as computer programs. Meta-genetic programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.

DoS (Denial of Service)

Denial of service attacks come in an almost endless variety of forms but share a common purpose: to deny legitimate use of the services provided by their victim. This is achieved by exhausting the system's resources, such as bandwidth and memory. Unfortunately, due to the limited nature of resources on the Internet and the end-to-end focus of the network's design, this is fairly easily achieved. Attackers use several main kinds of methods. The most straightforward is sending a stream of packets to the victim to use up all of the system's resources, which is known as flooding [1]. Another common method is to send a smaller number of altered packets to confuse the protocol or application. The most prevalent form of denial of service attack is TCP/SYN flooding, which makes up 90% of all denial of service attacks. This attack takes advantage of the three-way handshake procedure that the TCP protocol uses. Normally the procedure goes as follows: the client sends a SYN message to let the server know the client wants to connect.
Then the server sends a SYN/ACK message back, letting the client know that it received the client's SYN message and is reserving resources for it. Finally, the client sends the server an ACK message to complete the connection. In a TCP/SYN flooding attack, the misbehaving client or clients send a flood of SYN messages to the server with spoofed IPs (fake IP information) but never respond to the SYN/ACK messages the server sends back (to the spoofed IPs). This results in the server holding half-open connections and reserving resources for each fraudulent SYN message, eventually consuming them all. Now that the basic nature of a denial of service attack has been explained, we will go into distributed denial of service attacks.
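The resource-exhaustion mechanics described above can be simulated: the server keeps a fixed-size table of half-open connections, each SYN reserves a slot, and spoofed SYNs whose ACK never arrives pin those slots until legitimate clients are refused. A toy sketch (the backlog size and address strings are arbitrary):

```python
class Server:
    def __init__(self, backlog=5):
        self.backlog = backlog          # slots for half-open connections
        self.half_open = set()
        self.established = 0

    def syn(self, client_ip):
        """Steps 1-2: receive SYN, reserve a slot, send SYN/ACK."""
        if len(self.half_open) >= self.backlog:
            return False                # table full: this client is refused
        self.half_open.add(client_ip)
        return True

    def ack(self, client_ip):
        """Step 3: ACK arrives, handshake completes, slot is released."""
        if client_ip in self.half_open:
            self.half_open.remove(client_ip)
            self.established += 1

server = Server(backlog=5)

# Attacker: SYNs from spoofed addresses; the ACK step never happens.
for i in range(10):
    server.syn(f"spoofed-{i}")

# A legitimate client's SYN is now rejected outright.
print(server.syn("10.0.0.7"))   # False -- every slot held by a fraudulent SYN
print(server.established)       # 0
```

The classic mitigations attack exactly this bottleneck: SYN cookies avoid reserving state until the ACK arrives, and timeouts evict half-open entries so spoofed SYNs cannot hold slots forever.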