

ACKNOWLEDGEMENT

It gives me great pleasure to express my heartfelt gratitude to all those who helped, encouraged and oversaw the successful completion of my project. I was fortunate to work under the guidance of Miss Archana Panda, to whom I am extremely indebted for her valuable and timely suggestions. I also wish to convey my sincere thanks to Mr. P. L. Mohanty (Dean Academics) and all those who directly or indirectly contributed their assistance in finishing this project successfully.

Client Monitoring over the Network using RFB Protocol


CONTENTS

1. Abstract
2. Introduction
3. Existing System
4. Proposed System
5. SRS (Software Requirement Specification)
6. Design (DFD Diagrams)
7. Design Implementation
8. Output
9. Networking
10. Bibliography

ABSTRACT

The RFB (Remote Frame Buffer) protocol enables remote users to share the desktop of the server either exclusively or on a shared basis.

RFB (Remote Frame Buffer) is a simple protocol for remote access to graphical user interfaces. Because it works at the frame buffer level it is applicable to all windowing systems and applications, including X11, Windows 3.1/95/NT and Macintosh. The remote endpoint where the user sits (i.e. the display plus keyboard and/or pointer) is called the NETWORK-SIMULATOR. The endpoint where changes to the frame buffer originate (i.e. the windowing system and applications) is known as the RFB server.

INTRODUCTION

The NETWORK-SIMULATOR uses a truly thin client protocol. The emphasis in the design of the RFB protocol is to make very few requirements of the client. In this way, clients can run on the widest range of hardware, and the task of implementing a client is made as simple as possible.

Modules:
1) Client Module
   i) VNC installation
2) Server Module
   i) Connection
   ii) Add Terminals
   iii) Remove Terminals
   iv) View Logger
   v) Software installation
   vi) Shutdown the client
   vii) Clipboard
3) Socket Implementation
4) Client to client communication

Module description:
1) Client Module: The VNC software is installed on the client. It speaks the RFB protocol to the server.
2) Server Module: The server provides a window from which we can add terminals, connect to the existing terminals, remove terminals, view log files, install software, shut down a remote client system and copy text to the clipboard.

3) Socket Implementation: This module consists of the client-server socket implementation.
4) Client to client communication: This module provides client to client communication. The server can view the clients' communications only.

Existing System

In the present scenario of distributing information in a client-server environment, there exist communication mechanisms such as TELNET. These are character based in nature and do not provide an easy way to communicate across the network, and the communication may not always be platform independent. We need a protocol that supports GUI-based client-server interaction, allows multiple clients to share the desktop of the server using the RFB protocol, and additionally supports installing software on and chatting with the server/clients.

Proposed System

The RFB (Remote Frame Buffer) protocol was proposed by Tristan Richardson and Kenneth R. Wood at ORL, Cambridge in January 1998. The protocol is based around a single graphics primitive: put a rectangle of pixel data at a given x, y position. RFB relies on sending the client encoded pixels that describe the server's desktop. The client then decodes the pixels and draws them in a graphical application running on its machine. Events occurring at the client side are trapped and sent to the server, where the changes are reflected. For this purpose the RFB protocol suggests the use of a frame buffer. The frame buffer contains the desktop information and is updated whenever the client generates an event that changes the server's desktop; the updated buffer is then sent back to the client, which redraws its view accordingly. Similarly, when the server itself generates events that affect the desktop, the updated buffer is redrawn at the client. This provides a synchronized desktop sharing facility.

SRS (Software Requirement Specification)

HARDWARE REQUIREMENTS:
Processor: PIV 2.8 GHz and above
RAM: 512 MB and above
HDD: 20 GB hard disk space and above

SOFTWARE REQUIREMENTS:
Windows XP Service Pack 2
JDK 1.5 (AWT, Swing, socket programming)
Microsoft Access

The proposed system should have the following features.

Consistency
The state of the user interface is to be preserved, i.e. if a client disconnects from a given server and subsequently reconnects to that same server, the previous information has to be displayed. Furthermore, a different client endpoint can be used to connect to the same RFB server. At the new endpoint, the user will see exactly the same graphical user interface as at the original endpoint. In effect, the interface to the user's applications becomes completely mobile. Wherever suitable network connectivity exists, the user can access their own personal applications, and the state of these applications is preserved between accesses from different locations. This provides the user with a familiar, uniform view of the computing infrastructure wherever they go.

Input Protocol
The input side of the protocol is based on a standard workstation model of a keyboard and multi-button pointing device. Input events are simply sent to the server by the client whenever the user
presses a key or pointer button, or whenever the pointing device is moved. These input events can also be synthesized from other non-standard I/O devices. For example, a pen-based handwriting recognition engine might generate keyboard events.

Display Protocol
The protocol is based around a single graphics primitive: put a rectangle of pixel data at a given x, y position. At first glance this might seem an inefficient way of drawing many user interface components. However, allowing various different encodings for the pixel data gives us a large degree of flexibility in how to trade off various parameters such as network bandwidth, client drawing speed and server processing speed.

A sequence of these rectangles makes a frame buffer update (or simply update). An update represents a change from one valid frame buffer state to another, so in some ways it is similar to a frame of video. The rectangles in an update are usually disjoint, but this is not necessarily the case.

The update protocol is demand-driven by the client. That is, an update is only sent from the server to the client in response to an explicit request from the client. This gives the protocol an adaptive quality: the slower the client and the network are, the lower the rate of updates becomes. With typical applications, changes to the same area of the frame buffer tend to happen soon after one another. With a slow client and/or network, transient states of the frame buffer can be ignored, resulting in less network traffic and less drawing for the client.

Representation of pixel data
Initial interaction between the NETWORK-SIMULATOR and the server involves a negotiation of the format and encoding with which pixel data will be sent. This negotiation has to make the job of the client as easy as possible. The bottom line is that the server must always be able to supply pixel data in the form the client wants. However, if the client is able to cope equally with several different formats or encodings, it may choose one which is easier for the server to produce.

Pixel format refers to the representation of individual colors by pixel values. The most common pixel formats are 24-bit or 16-bit true color, where bit-fields within the pixel value translate directly to red, green and blue intensities, and 8-bit color map, where an arbitrary mapping can be used to translate from pixel values to RGB intensities.

Encoding refers to how a rectangle of pixel data will be sent on the wire. Every rectangle of pixel data is prefixed by a header giving the X, Y position of the rectangle on the screen, the width and height of the rectangle, and an encoding type which specifies the encoding of the pixel data. The data itself then follows using the specified encoding. Adding new encoding types can extend the protocol.
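To make the rectangle header concrete, here is a small illustrative reader in Java. It assumes the usual RFB wire layout of four unsigned 16-bit fields (x, y, width, height) followed by a signed 32-bit encoding type, all big endian; the RectHeader class itself is introduced only for this sketch and is not part of the project code.

import java.io.DataInputStream;
import java.io.IOException;

// Illustrative holder for one rectangle header (sketch only).
class RectHeader {
    int x, y, width, height;   // position and size of the rectangle on the screen
    int encodingType;          // how the pixel data that follows is encoded

    // Reads one header: four unsigned 16-bit values and one signed 32-bit value,
    // all big endian, which is what DataInputStream produces by default.
    static RectHeader read(DataInputStream in) throws IOException {
        RectHeader h = new RectHeader();
        h.x = in.readUnsignedShort();
        h.y = in.readUnsignedShort();
        h.width = in.readUnsignedShort();
        h.height = in.readUnsignedShort();
        h.encodingType = in.readInt();
        return h;
    }
}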

Raw encoding
The simplest encoding type is raw pixel data. In this case the data consists of n pixel values, where n is the width times the height of the rectangle. The values simply represent each pixel in left-to-right scan line order. All RFB clients must be able to cope with pixel data in this raw encoding, and RFB servers should only produce raw encoding unless the client specifically asks for some other encoding type.

Protocol Messages
The RFB protocol can operate over any reliable transport, either byte-stream or message-based. There are two stages to the protocol: an initial handshaking phase followed by the normal protocol interaction. The initial handshaking consists of Protocol Version, Authentication, Client Initialization and Server Initialization messages, as described below. Note that both client and server send a Protocol Version message. The protocol proceeds to the normal interaction stage after the Server Initialization message. At this stage, the client can send whichever messages it wants, and may receive messages from the server as a result. All these messages begin with a message-type byte, followed by any message-specific data. All multiple-byte integers (other than pixel values themselves) are in big endian order (most significant byte first).
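As a sketch of how a client might act on the message-type byte described above, the following dispatcher reads one byte and branches on the server-to-client message types (the same values that appear in the rfbProto class later in this document); the actual parsing of each message body is only indicated in comments.

import java.io.DataInputStream;
import java.io.IOException;

class ServerMessageDispatcher {
    // Server-to-client message types (same values as in the rfbProto class below).
    static final int FRAMEBUFFER_UPDATE = 0;
    static final int SET_COLOUR_MAP_ENTRIES = 1;
    static final int BELL = 2;
    static final int SERVER_CUT_TEXT = 3;

    // Reads one message-type byte and decides how the rest of the message
    // should be parsed; the actual parsing is left out of this sketch.
    static void dispatchOne(DataInputStream in) throws IOException {
        int msgType = in.readUnsignedByte();
        switch (msgType) {
            case FRAMEBUFFER_UPDATE:
                // read padding, the number of rectangles, then each rectangle
                break;
            case SET_COLOUR_MAP_ENTRIES:
                // read the colour map entries that follow
                break;
            case BELL:
                // no further data; ring the local bell
                break;
            case SERVER_CUT_TEXT:
                // read the length-prefixed clipboard text
                break;
            default:
                throw new IOException("Unknown message type: " + msgType);
        }
    }
}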

SAMPLE CODE
RFB Protocol Implementation

// To read from an InputStream
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class myInputStream extends FilterInputStream {

    public myInputStream(InputStream in) {
        super(in);
    }

    // Logs every buffered block read before delegating to the wrapped stream.
    public int read(byte[] b, int off, int len) throws IOException {
        System.out.println("read (byte [] b, int off, int len) called");
        return super.read(b, off, len);
    }
}
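The class above is a thin debugging wrapper: it logs each block read before delegating to the wrapped stream. A possible way to interpose it between the socket and the DataInputStream is sketched below; the host name and the port 5900 (the conventional RFB/VNC port) are placeholders only, not values taken from the project.

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

class WrapExample {
    public static void main(String[] args) throws IOException {
        Socket sock = new Socket("localhost", 5900);   // placeholder host and port
        DataInputStream is = new DataInputStream(
                new myInputStream(new BufferedInputStream(sock.getInputStream(), 16384)));

        byte[] first = new byte[12];
        is.readFully(first);   // triggers myInputStream.read(byte[], int, int), which logs the call
        System.out.println("read " + first.length + " bytes from the server");
        sock.close();
    }
}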


The Main Class to receive or send the Messages to Server

class rfbProto {

    // PROTOCOL VERSION
    static final String versionMsg = "RFB 003.003\n";

    // AUTHENTICATION
    static final int ConnFailed = 0, NoAuth = 1, VirtualPadAuth = 2;

    // RESPONSE CODE
    static final int VirtualPadAuthOK = 0, VirtualPadAuthFailed = 1,
                     VirtualPadAuthTooMany = 2;

    // SERVER RESPONSE
    static final int FramebufferUpdate = 0, SetColourMapEntries = 1,
                     Bell = 2, ServerCutText = 3;

    // CLIENT REQUEST
    static final int SetPixelFormat = 0, FixColourMapEntries = 1,
                     SetEncodings = 2, FramebufferUpdateRequest = 3,
                     KeyEvent = 4, PointerEvent = 5, ClientCutText = 6;

    // ENCODING TYPES
    static final int EncodingRaw = 0, EncodingCopyRect = 1, EncodingRRE = 2,
                     EncodingCoRRE = 4, EncodingHextile = 5;

    // Hextile sub-encoding flags
    static final int HextileRaw                 = (1 << 0);
    static final int HextileBackgroundSpecified = (1 << 1);
    static final int HextileForegroundSpecified = (1 << 2);
    static final int HextileAnySubrects         = (1 << 3);
    static final int HextileSubrectsColoured    = (1 << 4);

    String host;
    int port;
    Socket sock;
    DataInputStream is;
    OutputStream os;
    boolean inNormalProtocol = false;
    VirtualPad v;

    // Constructor. Just make TCP connection to RFB server.
    rfbProto(String h, int p, VirtualPad v1) throws IOException {
        v = v1;
        host = h;
        port = p;
        sock = new Socket(host, port);
        is = new DataInputStream(new BufferedInputStream(sock.getInputStream(), 16384));
        os = sock.getOutputStream();
    }

    void close() {
        try {
            sock.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // ... (further protocol message methods of the project are omitted here)
}

INITIAL HANDSHAKING MESSAGES

Protocol Version
Handshaking begins by the server sending the client a Protocol Version message. This lets the client know which is the latest RFB protocol version number supported by the server. The client then replies with a similar message giving the version number of the protocol which should actually be used (which may be different from the one quoted by the server). It is intended that both clients and servers may provide some level of backwards compatibility by this mechanism. Servers in particular should attempt to provide backwards compatibility, and even forward compatibility to some extent. For example, if a client demands version 3.1 of the protocol, a 3.0 server can probably assume that by ignoring requests for encoding types it doesn't understand, everything will still work. This will probably not be the case for changes in the major version number.
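A minimal client-side sketch of this version exchange is shown below. It reads the 12-byte Protocol Version string ("RFB xxx.yyy\n"), parses the major and minor numbers, and answers with the client's own version string (the same value as versionMsg in rfbProto above). Error handling is kept to the bare minimum; this is an illustration, not the project's actual method.

import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;

class VersionHandshake {
    // Reads the server's 12-byte "RFB xxx.yyy\n" message and answers with
    // the version the client wants to use (here the project's 3.3 string).
    static void exchange(DataInputStream is, OutputStream os) throws IOException {
        byte[] b = new byte[12];
        is.readFully(b);
        String serverVersion = new String(b, "US-ASCII");
        if (!serverVersion.startsWith("RFB ")) {
            throw new IOException("Not an RFB server: " + serverVersion);
        }
        int serverMajor = Integer.parseInt(serverVersion.substring(4, 7));
        int serverMinor = Integer.parseInt(serverVersion.substring(8, 11));
        System.out.println("Server supports RFB " + serverMajor + "." + serverMinor);

        // Reply with the version this client actually wants to use.
        os.write("RFB 003.003\n".getBytes("US-ASCII"));
        os.flush();
    }
}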


FramebufferUpdateRequest
Notifies the server that the client is interested in the area of the framebuffer specified by x-position, y-position, width and height. The server usually responds to a FramebufferUpdateRequest by sending a FramebufferUpdate. Note however that a single FramebufferUpdate may be sent in reply to several FramebufferUpdateRequests.

The server assumes that the client keeps a copy of all parts of the framebuffer in which it is interested. This means that normally the server only needs to send incremental updates to the client. However, if for some reason the client has lost the contents of a particular area which it needs, the client sends a FramebufferUpdateRequest with incremental set to zero (false). This requests that the server send the entire contents of the specified area as soon as possible. The area will not be updated using the copy rectangle encoding.

If the client has not lost any contents of the area in which it is interested, it sends a FramebufferUpdateRequest with incremental set to non-zero (true). If and when there are changes to the specified area of the framebuffer, the server will send a FramebufferUpdate. Note that there may be an indefinite period between the FramebufferUpdateRequest and the FramebufferUpdate. In the case of a fast client, the client may want to regulate the rate at which it sends incremental FramebufferUpdateRequests to avoid hogging the network.
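Below is a hedged sketch of sending this message, assuming the standard on-the-wire layout: one message-type byte carrying the FramebufferUpdateRequest value from rfbProto, one incremental flag byte, then x, y, width and height as big-endian 16-bit values. The class and method names are placeholders for illustration.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class UpdateRequest {
    static final int FRAMEBUFFER_UPDATE_REQUEST = 3;   // same value as in rfbProto

    // Asks the server for the given area; incremental = false forces a full resend.
    static void send(OutputStream os, boolean incremental,
                     int x, int y, int width, int height) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeByte(FRAMEBUFFER_UPDATE_REQUEST);
        out.writeByte(incremental ? 1 : 0);
        out.writeShort(x);
        out.writeShort(y);
        out.writeShort(width);
        out.writeShort(height);
        os.write(buf.toByteArray());   // send the 10-byte message in one write
        os.flush();
    }
}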

PROJECT DESIGN

Data Flow Diagrams

DESK SHARE (data flow diagrams)

SDLC:
INTRODUCTION
After analyzing the requirements of the task to be performed, the next step is to analyze the problem and understand its context. The first activity in this phase is studying the existing system and the other is understanding the requirements and domain of the new system. Both activities are equally important, but the first serves as the basis for producing the functional specifications and then the successful design of the proposed system. Understanding the properties and requirements of a new system is more difficult and requires creative thinking; understanding an existing running system is also difficult, and an improper understanding of the present system can lead to diversion from the solution.

ANALYSIS MODEL
The model being followed is the WATERFALL MODEL, which states that the phases are organized in a linear order. First of all the feasibility study is done. Once that part is over, the requirement analysis and project planning begin. If a system already exists and modification or addition of new modules is needed, analysis of the present system can be used as the basic model.

The design starts after the requirement analysis is complete and the coding begins after the design is complete. Once the programming is completed, the testing is done. In this model the sequence of activities performed in a software development project is:

Requirement Analysis
Project Planning
System Design
Detail Design
Coding
Unit Testing
System Integration & Testing

The linear ordering of these activities is critical: the output of one phase becomes the input of the next, and the output of each phase is to be consistent with the overall requirements of the system. The WATERFALL MODEL was chosen because all requirements were known beforehand and the objective of our software development is the computerization/automation of an already existing manual working system.
WATERFALL MODEL (figure): the diagram shows the phases Requirements Engineering, Design, Programming Process, Integration, Delivery and Maintenance, with their respective outputs (Requirements Specification, Design Specification, Executable Software Modules, Integrated Software Product, Delivered Software Product) and with communicated and changed requirements as inputs.

Purpose: The main purpose of preparing this document is to give a general insight into the analysis and requirements of the existing system or situation and to determine the operating characteristics of the system.

Scope: This document plays a vital role in the software development life cycle (SDLC) as it describes the complete requirements of the system. It is meant for use by the developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

DEVELOPERS RESPONSIBILITIES OVERVIEW:
The developer is responsible for:
1. Developing the system, which meets the SRS and solves all the requirements of the system.
2. Demonstrating the system and installing the system at the client's location after the acceptance testing is successful.
3. Submitting the required user manual describing the system interfaces to work on it and also the documents of the system.
4. Conducting any user training that might be needed for using the system.
5. Maintaining the system for a period of one year after installation.


FUNCTIONAL REQUIREMENTS:

OUTPUT DESIGN
Outputs from computer systems are required primarily to communicate the results of processing to users. They are also used to provide a permanent copy of the results for later consultation. The various types of outputs in general are:

External outputs, whose destination is outside the organization.
Internal outputs, whose destination is within the organization and which are the users' main interface with the computer.
Operational outputs, whose use is purely within the computer department.
Interface outputs, which involve the user in communicating directly with the system.

OUTPUT DEFINITION
The outputs should be defined in terms of the following points:

Type of the output
Content of the output
Format of the output
Location of the output
Frequency of the output
Volume of the output
Sequence of the output

It is not always desirable to print or display data as it is held on the computer. It should be decided which form of the output is the most suitable.


For example: Will decimal points need to be inserted? Should leading zeros be suppressed?

Output Media:
In the next stage it is to be decided which medium is the most appropriate for the output. The main considerations when deciding about the output media are:

The suitability of the device to the particular application.
The need for a hard copy.
The response time required.
The location of the users.
The software and hardware available.

Keeping in view the above description, the project is to have outputs mainly coming under the category of internal outputs. The main outputs desired according to the requirement specification are outputs generated as hard copy as well as queries to be viewed on the screen. Keeping in view these outputs, the format for the output is taken from the outputs which are currently being obtained after manual processing. The standard printer is to be used as the output medium for hard copies.

INPUT DESIGN
Input design is a part of the overall system design. The main objectives during the input design are as given below:

To produce a cost-effective method of input.
To achieve the highest possible level of accuracy.
To ensure that the input is acceptable to and understood by the user.

INPUT STAGES:
The main input stages can be listed as below:

Data recording
Data transcription
Data conversion
Data verification
Data control
Data transmission
Data validation
Data correction

INPUT TYPES:
It is necessary to determine the various types of inputs. Inputs can be categorized as follows:

External inputs, which are prime inputs for the system.
Internal inputs, which are user communications with the system.
Operational inputs, which are computer department communications to the system.
Interactive inputs, which are entered during a dialogue.

INPUT MEDIA:
At this stage a choice has to be made about the input media. To decide on the input media, consideration has to be given to:

Type of input
Flexibility of format
Speed
Accuracy
Verification methods
Rejection rates
Ease of correction
Storage and handling requirements
Security
Ease of use
Portability

Keeping in view the above description of the input types and input media, it can be said that most of the inputs are of the internal and interactive form.

As input data is to be directly keyed in by the user, the keyboard can be considered the most suitable input device.

ERROR AVOIDANCE
At this stage care is to be taken to ensure that input data remains accurate from the stage at which it is recorded up to the stage at which the data is accepted by the system. This can be achieved only by means of careful control each time the data is handled.

ERROR DETECTION
Even though every effort is made to avoid the occurrence of errors, a small proportion of errors is always likely to occur. These types of errors can be discovered by using validations to check the input data.

DATA VALIDATION
Procedures are designed to detect errors in data at a lower level of detail. Data validations have been included in the system in almost every area where there is a possibility for the user to commit errors. The system will not accept invalid data. Whenever invalid data is keyed in, the system immediately prompts the user, who has to key in the data again; the system will accept the data only if it is correct. Validations have been included where necessary.

The system is designed to be user friendly. In other words, the system has been designed to communicate effectively with the user. The system has been designed with pop-up menus.

USER INTERFACE DESIGN
It is essential to consult the system users and discuss their needs while designing the user interface. User interface systems can be broadly classified as:

1. User initiated interfaces, where the user is in charge, controlling the progress of the user/computer dialogue.
2. Computer initiated interfaces, where the computer selects the next stage in the interaction.


In the computer initiated interfaces the computer guides the progress of the user/computer dialogue: information is displayed, and on the user's response the computer takes action or displays further information.

USER INITIATED INTERFACES
User initiated interfaces fall into two approximate classes:

1. Command driven interfaces: the user inputs commands or queries which are interpreted by the computer.
2. Forms oriented interfaces: the user calls up an image of the form on his/her screen and fills in the form.

The forms oriented interface was chosen because it is the best fit for this system.

COMPUTER-INITIATED INTERFACES
The following computer initiated interfaces were used:

1. The menu system, where the user is presented with a list of alternatives and chooses one of them.
2. The question-answer type dialog system, where the computer asks a question and takes action on the basis of the user's reply.

Right from the start the system is menu driven: the opening menu displays the available options. Choosing one option gives another pop-up menu with more options. In this way every option leads the user to a data entry form where the user can key in the data.

ERROR MESSAGE DESIGN:
The design of error messages is an important part of the user interface design. As the user is bound to commit some errors while using the system, the system should be designed to be helpful by providing the user with information regarding the error he/she has committed. This application must be able to produce output at different modules for different inputs.


Output Screens

Authentication Mode
Options Frame
Remote Desktop
Client Clipboard
Pointer Event
Key Event (Remote Desktop F1 Key)

PROJECT TESTING

TESTING
Testing is a process which reveals errors in the program. It is the major quality measure employed during software development. During testing, the program is executed with a set of conditions known as test cases and the output is evaluated to determine whether the program is performing as expected. In order to make sure that the system does not have errors, the different levels of testing strategies applied at differing phases of software development are:

1. Unit Testing
Unit testing is done on individual modules as they are completed and become executable. It is confined only to the designer's requirements. Each module can be tested using the following two strategies:

i) Black Box Testing: In this strategy some test cases are generated as input conditions that fully execute all functional requirements for the program. This testing has been used to find errors in the following categories:
a) Incorrect or missing functions
b) Interface errors
c) Errors in data structures or external database access
d) Performance errors
e) Initialization and termination errors
In this testing only the output is checked for correctness; the logical flow of the data is not checked.

ii) White Box Testing: In this strategy test cases are generated on the logic of each module by drawing flow graphs of that module, and logical decisions are tested on all the cases. It has been used to generate the test cases in the following cases:

a) Guarantee that all independent paths have been executed.
b) Execute all logical decisions on their true and false sides.
c) Execute all loops at their boundaries and within their operational bounds.
d) Exercise internal data structures to ensure their validity.

2. Integration Testing
Integration testing ensures that the software and subsystems work together as a whole. It tests the interfaces of all the modules to make sure that the modules behave properly when integrated together.

3. System Testing
This involves in-house testing of the entire system before delivery to the user. Its aim is to satisfy the user that the system meets all requirements of the client's specifications.

4. Acceptance Testing
This is pre-delivery testing in which the entire system is tested at the client's site on real world data to find errors.

Validation
The system has been tested and implemented successfully, and it has thus been ensured that all the requirements as listed in the software requirement specification are completely fulfilled. In case of erroneous input, corresponding error messages are displayed.

COMPILING TEST
It was a good idea to do our stress testing early on, because it gave us time to fix some of the unexpected deadlocks and stability problems that only occurred when components were exposed to very high transaction volumes.

EXECUTION TEST

This program was successfully loaded and executed. Because of good programming there were no execution errors.

OUTPUT TEST
The successful output screens are placed in the output screens section above.

What is Networking?
Computers running on the Internet communicate with each other using either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP).

When you write Java programs that communicate over the network, you are programming at the application layer. Typically, you don't need to concern yourself with the TCP and UDP layers. Instead, you can use the classes in the java.net package. These classes provide system-independent network communication. However, to decide which Java classes your programs should use, you do need to understand how TCP and UDP differ.

TCP
When two applications want to communicate with each other reliably, they establish a connection and send data back and forth over that connection. This is analogous to making a telephone call. If you want to speak to Aunt Beatrice in Kentucky, a connection is established when you dial her phone number and she answers. You send data back and forth over the connection by speaking to one another over the phone lines. Like the phone company, TCP guarantees that data sent from one end of the connection actually gets to the other end and in the same order it was sent; otherwise, an error is reported.

TCP provides a point-to-point channel for applications that require reliable communications. The Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), and Telnet are all examples of applications that require a reliable communication channel. The order in which the data is sent and
received over the network is critical to the success of these applications. When HTTP is used to read from a URL, the data must be received in the order in which it was sent; otherwise, you end up with a jumbled HTML file, a corrupt zip file, or some other invalid information.

Definition: TCP (Transmission Control Protocol) is a connection-based protocol that provides a reliable flow of data between two computers.

UDP
The UDP protocol provides for communication that is not guaranteed between two applications on the network. UDP is not connection-based like TCP. Rather, it sends independent packets of data, called datagrams, from one application to another. Sending datagrams is much like sending a letter through the postal service: the order of delivery is not important and is not guaranteed, and each message is independent of any other.

Definition: UDP (User Datagram Protocol) is a protocol that sends independent packets of data, called datagrams, from one computer to another with no guarantees about arrival.

For many applications, the guarantee of reliability is critical to the success of the transfer of information from one end of the connection to the other. However, other forms of communication don't require such strict standards. In fact, they may be slowed down by the extra overhead, or the reliable connection may invalidate the service altogether.

Consider, for example, a clock server that sends the current time to its client when requested to do so. If the client misses a packet, it doesn't really make sense to resend it, because the time will be incorrect when the client receives it on the second try. If the client makes two requests and receives packets from the server out of order, it doesn't really matter, because the client can figure out that the packets are out of order and make another request. The reliability of TCP is unnecessary in this instance because it causes performance degradation and may hinder the usefulness of the service.

Another example of a service that doesn't need the guarantee of a reliable channel is the ping command. The purpose of the ping command is to test the communication between two programs over the network. In fact, ping needs to know about dropped or out-of-order packets to determine how good or bad the connection is. A reliable channel would invalidate this service altogether.
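To make the connection-based TCP model concrete in Java terms, here is a minimal java.net.Socket sketch: a connection is opened, and data written on one end arrives reliably and in order at the other. The host name and port used here are placeholders, not values from the project.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class TcpClientExample {
    public static void main(String[] args) throws Exception {
        // Establish the connection (the "telephone call") to a placeholder host and port.
        Socket socket = new Socket("example.com", 7);
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));

        out.println("hello over TCP");                 // delivered reliably and in order
        System.out.println("reply: " + in.readLine()); // read the server's response
        socket.close();
    }
}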

Note: Many firewalls and routers have been configured not to allow UDP packets. If you're having trouble connecting to a service outside your firewall, or if clients are having trouble connecting to your service, ask your system administrator if UDP is permitted.
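For contrast with the TCP sketch above, the following example sends a single UDP datagram and waits briefly for one reply; there is no connection and no delivery guarantee, which is why a receive timeout is set. The destination host and port are placeholders.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpClientExample {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();                // no connection is established
        byte[] request = "what time is it?".getBytes();
        InetAddress server = InetAddress.getByName("example.com");  // placeholder host
        socket.send(new DatagramPacket(request, request.length, server, 9999)); // placeholder port

        byte[] buffer = new byte[256];
        DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
        socket.setSoTimeout(2000);      // a reply may never arrive, so don't wait forever
        socket.receive(reply);
        System.out.println(new String(reply.getData(), 0, reply.getLength()));
        socket.close();
    }
}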

Understanding Ports
Generally speaking, a computer has a single physical connection to the network. All data destined for a particular computer arrives through that connection. However, the data may be intended for different applications running on the computer. So how does the computer know to which application to forward the data? Through the use of ports.

Data transmitted over the Internet is accompanied by addressing information that identifies the computer and the port for which it is destined. The computer is identified by its 32-bit IP address, which IP uses to deliver data to the right computer on the network. Ports are identified by a 16-bit number, which TCP and UDP use to deliver the data to the right application.

In connection-based communication such as TCP, a server application binds a socket to a specific port number. This has the effect of registering the server with the system to receive all data destined for that port. A client can then rendezvous with the server at the server's port.

Definition: The TCP and UDP protocols use ports to map incoming data to a particular process running on a computer.

In datagram-based communication such as UDP, the datagram packet itself contains the port number of its destination, and UDP routes the packet to the appropriate application.


Port numbers range from 0 to 65,535 because ports are represented by 16-bit numbers. The port numbers ranging from 0 to 1023 are restricted; they are reserved for use by well-known services such as HTTP and FTP and other system services. These ports are called well-known ports. Your applications should not attempt to bind to them.

Networking Classes in the JDK
Through the classes in java.net, Java programs can use TCP or UDP to communicate over the Internet. The URL, URLConnection, Socket, and ServerSocket classes all use TCP to communicate over the network. The DatagramPacket, DatagramSocket, and MulticastSocket classes are for use with UDP.

What Is a URL?
If you've been surfing the Web, you have undoubtedly heard the term URL and have used URLs to access HTML pages from the Web. It's often easiest, although not entirely accurate, to think of a URL as the name of a file on the World Wide Web, because most URLs refer to a file on some machine on the network. However, remember that URLs also can point to other resources on the network, such as database queries and command output.

Definition: URL is an acronym for Uniform Resource Locator and is a reference (an address) to a resource on the Internet. An example of a URL which addresses the Java Web site hosted by Sun Microsystems is http://java.sun.com/


As in the example above, a URL has two main components:

Protocol identifier
Resource name

Note that the protocol identifier and the resource name are separated by a colon and two forward slashes. The protocol identifier indicates the name of the protocol to be used to fetch the resource. The example uses the Hypertext Transfer Protocol (HTTP), which is typically used to serve up hypertext documents. HTTP is just one of many different protocols used to access different types of resources on the net. Other protocols include File Transfer Protocol (FTP), Gopher, File, and News.

The resource name is the complete address to the resource. The format of the resource name depends entirely on the protocol used, but for many protocols, including HTTP, the resource name contains one or more of the components listed below:

Host Name: The name of the machine on which the resource lives.
Filename: The pathname to the file on the machine.
Port Number: The port number to which to connect (typically optional).
Reference: A reference to a named anchor within a resource that usually identifies a specific location within a file (typically optional).

For many protocols, the host name and the filename are required, while the port number and reference are optional. For example, the resource name for an HTTP URL must specify a server on the network (host name) and the path to the document on that machine (filename); it also can specify a port number and a reference. In the URL for the Java Web site, java.sun.com is the host name and the trailing slash is shorthand for the file named /index.html.
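These components map directly onto accessor methods of java.net.URL, as the small sketch below shows. The URL string extends the Sun example with an explicit port and a #contents anchor purely to illustrate the optional components; getPort() returns -1 when no port is given.

import java.net.URL;

public class UrlComponentsExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://java.sun.com:80/index.html#contents");

        System.out.println("protocol = " + url.getProtocol()); // http
        System.out.println("host     = " + url.getHost());     // java.sun.com
        System.out.println("port     = " + url.getPort());     // 80 (-1 if unspecified)
        System.out.println("file     = " + url.getFile());     // /index.html
        System.out.println("anchor   = " + url.getRef());      // contents
    }
}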

Sequence of socket calls for a connection-oriented protocol:

System Calls
socket - create a descriptor for use in network communication. On success, the socket system call returns a small integer value similar to a file descriptor.

bind - bind a local IP address and protocol port to a socket. When a socket is created it does not have any notion of an endpoint address. An application calls bind to specify the local endpoint address for a socket. For TCP/IP protocols, the endpoint address uses the socket address structure. Servers use bind to specify the well-known port at which they will await connections.

connect - connect to a remote endpoint. After creating a socket, a client calls connect to establish an actual connection to a remote server. An argument to connect allows the client to specify the remote endpoint, which includes the remote machine's IP address and protocol port number. Once a connection has been made, a client can transfer data across it.

accept - accept the next incoming connection. Accept creates a new socket for each new connection request and returns the descriptor of the new socket to its caller. The server uses the new socket only for the new connection; it uses the original socket to accept additional connection requests. Once it has accepted a connection, the server can transfer data on the new socket. Return value: this system call returns up to three values: an integer return code that is either an error indication or a new socket descriptor, the address of the client process, and the size of this address.

listen - place the socket in passive mode and set the number of incoming TCP connections the system will en-queue. The backlog argument specifies how many connection requests can be queued by the system while it waits for the server to execute the accept system call. It is usually executed after both the socket and bind system calls, and immediately before the accept system call.

send, sendto, recv and recvfrom - these system calls are similar to the standard read and write system calls, but additional arguments are required.

close - terminate communication and de-allocate a descriptor. The normal UNIX close system call is also used to close a socket.
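In Java most of this sequence is hidden behind java.net: constructing a ServerSocket performs the socket, bind and listen steps, accept() returns a new Socket per connection, and the client side's Socket constructor performs connect. A hedged sketch of that mapping is given below; the port number is a placeholder.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class CallSequenceExample {
    public static void main(String[] args) throws Exception {
        // socket + bind + listen: register the server on a port (placeholder value);
        // new ServerSocket(port, backlog) would also set the listen backlog explicitly.
        ServerSocket listener = new ServerSocket(5901);

        // accept: blocks until a client connects, then returns a new socket dedicated
        // to that connection; 'listener' keeps accepting further connection requests.
        Socket conn = listener.accept();

        // send / recv: ordinary stream writes and reads on the accepted socket.
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
        out.println("you said: " + in.readLine());

        // close: terminate communication and release the descriptors.
        conn.close();
        listener.close();
    }
}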

STRATEGIC APPROACH TO SOFTWARE TESTING

The software engineering process can be viewed as a spiral. Initially system engineering defines the role of software and leads to software requirements analysis, where the information domain, functions, behavior, performance, constraints and validation criteria for the software are established. Moving inward along the spiral, we come to design and finally to coding. To develop computer software we spiral inward along streamlines that decrease the level of abstraction on each turn.

A strategy for software testing may also be viewed in the context of the spiral. Unit testing begins at the vertex of the spiral and concentrates on each unit of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on the design and the construction of the software architecture. Taking another turn outward on the spiral we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally we arrive at system testing, where the software and other system elements are tested as a whole.

Testing stages (figure): Unit Testing and Module Testing (component testing), Sub-system Testing and System Testing (integration testing), and Acceptance Testing (user testing).

Unit Testing

Unit testing focuses verification effort on the smallest unit of software design, the module. The unit testing carried out here is white box oriented, and for some modules the steps are conducted in parallel.

1. WHITE BOX TESTING
This type of testing ensures that:

All independent paths have been exercised at least once.
All logical decisions have been exercised on their true and false sides.
All loops are executed at their boundaries and within their operational bounds.
All internal data structures have been exercised to assure their validity.

To follow the concept of white box testing we have tested each form we have created independently, to verify that the data flow is correct, all conditions are exercised to check their validity, and all loops are executed on their boundaries.

2. BASIC PATH TESTING
The established technique of the flow graph with cyclomatic complexity was used to derive test cases for all the functions. The main steps in deriving test cases were:

Use the design of the code and draw the corresponding flow graph.
Determine the cyclomatic complexity of the resultant flow graph, using the formula V(G) = E - N + 2, or V(G) = P + 1, or V(G) = number of regions, where V(G) is the cyclomatic complexity, E is the number of edges, N is the number of flow graph nodes, and P is the number of predicate nodes. (As an illustrative example, a flow graph with 9 edges, 7 nodes and 3 predicate nodes gives V(G) = 9 - 7 + 2 = 3 + 1 = 4, i.e. four linearly independent paths.)
Determine the basis set of linearly independent paths.

3. CONDITIONAL TESTING
In this part of the testing each of the conditions was tested on both its true and false sides, and all the resulting paths were tested, so that each path that may be generated from a particular condition is traced to uncover any possible errors.

4. DATA FLOW TESTING

This type of testing selects paths through the program according to the locations of definitions and uses of variables. This kind of testing was used only where local variables were declared. The definition-use chain method was used in this type of testing. It was particularly useful in nested statements.

5. LOOP TESTING
In this type of testing all the loops are tested against all possible limits. The following exercise was adopted for all loops:

All the loops were tested at their limits, just above them and just below them.
All the loops were skipped at least once.
For nested loops, the innermost loop was tested first, working outwards.
For concatenated loops, the values of dependent loops were set with the help of the connected loop.
Unstructured loops were resolved into nested or concatenated loops and tested as above.

Each unit has been separately tested by the development team itself and all the inputs have been validated.


Conclusions
In this project our aim is to monitor the clients by accessing the client desktop and viewing the clients' log files. The server has control over the client by locking the client system's mouse pointer.

BIBLIOGRAPHY

