
CONTENTS

1. INTRODUCTION
   PROBLEM IN EXISTING SYSTEM
   SOLUTION OF THESE PROBLEMS
   HARDWARE & SOFTWARE SPECIFICATIONS
   ORGANIZATION PROFILE

2. PROJECT ANALYSIS
   STUDY OF THE SYSTEM
   PROJECT FEATURES

3. TECHNOLOGIES
   OOPS AND JAVA
   JDBC
   ABOUT ORACLE

4. PROJECT DESIGN
   UML DIAGRAMS
   DATA FLOW DIAGRAMS
   DATABASE TABLES
   OUTPUT SCREENS

5. PROJECT CODING
   CODE EXPLANATION

6. PROJECT TESTING
   COMPILING TESTING
   EXECUTION TESTING
   OUTPUT TESTING

7. FUTURE SCOPE

8. CONCLUSION

9. BIBLIOGRAPHY

INTRODUCTION

Objective
Proposed System
Software Requirement Specification
System Environment
User Characteristics

Objectives
Central Ticketing Solution is the software for computerizing the railway ticketing system. There are 81 stations and 19 other booking offices, which on average issue up to 2,000 tickets per day. The basic operations handled by the system are reserved and unreserved ticketing. The system is designed to work in a client-server architecture. The client systems used are thin clients with an embedded operating system; they can work in both stand-alone and network mode. Each client system is configured to facilitate the collection, trending and tracking of the ticket data stored in the central data storage facility, via a graphical user interface configured in accordance with the common data storage scheme.

All the client systems (on the LAN) connect to the central server installed in the zonal headquarters. The central server (a high-end server) generates all kinds of MIS-related reports pertaining to daily, periodic and monthly updates. The area of focus during the design of the system was security, as the data generated by the system is of utmost importance: it pertains to the cash collected at the counter, system-level operational data, reservation status data, etc. Changes to data at the station level (fare data, train-related data, station-level data, concessions, surcharges) are bound to happen, so a remote programming and monitoring facility is provided to ease the implementation of data changes in the counter systems.

Proposed System

The counter system is an intelligent special-purpose ticketing terminal, with capabilities for issuing all types of journey and non-journey tickets to passengers. Independent transaction-wise reports can be generated from each of these counter systems. All the counter systems in a station are networked through a LAN, so passengers can book tickets from any counter. This achieves truly universal counter operation without compromising security in any manner. The counter systems are connected to the central server using either a dial-up or a leased line. In addition to generating various accounting reports, the central server also has the capability of monitoring the client systems and programming them with software and data updates as and when required.

ADVANTAGES
Duplication of work is avoided.
Paper work is drastically reduced.
Retrieval and access of data is easy.

Software Requirement Specification


The user interface should be friendly to the user who visits the home page, so that he/she can easily register. The operations should take place transparently.

System Environment

Client
Hardware Platform: PIII or above, with 128 MB or more of RAM and a 20 GB or larger hard disk.
Software Platform: Browser

Server
Hardware Platform: PIV or above, with 128 MB or more of RAM and a 20 GB or larger hard disk.
Software Platform: ASP.NET using C#

STUDY OF THE SYSTEM

The counter system is an intelligent special-purpose ticketing terminal, with capabilities for issuing all types of journey and non-journey tickets to passengers. Independent transaction-wise reports can be generated from each of these counter systems. All the counter systems in a station are networked through a LAN, so passengers can book tickets from any counter. This achieves truly universal counter operation without compromising security in any manner. The counter systems are connected to the central server using either a dial-up or a leased line. In addition to generating various accounting reports, the central server also has the capability of monitoring the client systems and programming them with software and data updates as and when required. The counter system has a provision for issuing tickets from any source station to any destination, depending upon the data available in the database. The system will also be able to allot seats and berths for tickets under this option, depending upon the network connection to the central server. In stand-alone mode, the system will be able to allot seats/berths only for the ticket-issuing station. However, it will still be able to issue unreserved tickets from any source to any destination. The major points considered during the design and development of this application were:

High security
Centralized administration
Ease of installation, with control from the central server
A user-friendly interface

Need for Computerization

Duplication of work is avoided.
Paper work is drastically reduced.
Retrieval and access of data is easy.

MODULES OF THIS PROJECT

Administration module
Ticketing module
Communication module
Auditing module

Administration module: The administrator can add, modify, delete and view the details of all the trains, and can also view the details of passengers.

Ticketing module: The user enters all the required details of the passenger and issues the ticket.

Communication module: If users have any queries or doubts, they can send mail to the given address or call the given contact number for further clarification.

Auditing module: The user can view all the passengers travelling on a particular day.
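To make the ticketing module concrete, here is a minimal JDBC sketch of the insert a counter operator's booking might perform. It assumes the Oracle reservation table documented in the DATABASE TABLES section; the connection URL, schema credentials and sample values are placeholders, not part of the project.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Minimal sketch of the ticketing module's insert step.
// The JDBC URL, user and password are placeholders; the column list
// follows the reservation table documented in the DATABASE TABLES section.
public class IssueTicket {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@localhost:1521:XE"; // assumed Oracle instance
        try (Connection con = DriverManager.getConnection(url, "cts", "cts")) {
            String sql = "INSERT INTO reservation "
                       + "(ticno, name, dob, age, address, phno, email, "
                       + " dos, scity, dod, dcity, tname, seatno, berth) "
                       + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, 1001);                 // ticketing number
                ps.setString(2, "A. Passenger");    // passenger name
                ps.setString(3, "01-01-1980");      // date of birth
                ps.setString(4, "45");              // age
                ps.setString(5, "Street 1, City");  // passenger address
                ps.setString(6, "9999999999");      // phone number
                ps.setString(7, "a@example.com");   // email
                ps.setString(8, "01-06-2024");      // date of source
                ps.setString(9, "Hyderabad");       // source city
                ps.setString(10, "02-06-2024");     // date of destination
                ps.setString(11, "Chennai");        // destination city
                ps.setString(12, "Express");        // train name
                ps.setString(13, "S1-23");          // seat number
                ps.setString(14, "Lower");          // berth
                ps.executeUpdate();
            }
        }
    }
}

A PreparedStatement is used so that the passenger details are bound as parameters rather than concatenated into the SQL string.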

UML DIAGRAMS

A diagram is the graphical presentation of a set of elements, most often rendered as a connected graph of vertices (things) and arcs (relationships). The UML includes nine such diagrams.

The Unified Modelling Language (UML) is probably the most widely known and used notation for object-oriented analysis and design. It is the result of the merger of several early contributions to object-oriented methods. The UML is a standard language for writing software blueprints; it may be used to visualize, specify, construct, and document the artefacts of a software system. A modelling language is a language whose vocabulary and rules focus on the conceptual and physical representation of a system. Modelling is the designing of software applications before coding. Modelling is an essential part of large software projects, and helpful to medium and even small projects as well. A model plays the analogous role in software development that blueprints and other plans (site maps, elevations, physical models) play in the building of a skyscraper. Using a model, those responsible for a software development project's success can assure themselves that business functionality is complete and correct, end-user needs are met, and program design supports requirements for scalability, robustness, security, extendibility, and other characteristics, before implementation in code renders changes difficult and expensive to make.

The underlying premise of UML is that no one diagram can capture the different elements of a system in its entirety. Hence, UML is made up of nine diagrams that can be used to model a system at different points of time in the software life cycle of a system. The nine UML diagrams are:

Use case diagram: The use case diagram is used to identify the primary elements and processes that form the system. The primary elements are termed as "actors" and the processes are called "use cases." The use case diagram shows which actors interact with each use case.

Class diagram: The class diagram is used to refine the use case diagram and define a detailed design of the system. The class diagram classifies the actors defined in the use case diagram into a set of interrelated classes. The relationship or association between the classes can be either an "is-a" or "has-a" relationship. Each class in the class diagram may be capable of providing certain functionalities. These functionalities provided by the class are termed "methods" of the class. Apart from this, each class may have certain "attributes" that uniquely identify the class.

Object diagram: The object diagram is a special kind of class diagram. An object is an instance of a class. This essentially means that an object represents the state of a class at a given point of time while the system is running. The object diagram captures the state of different classes in the system and their relationships or associations at a given point of time.

State diagram: A state diagram, as the name suggests, represents the different states that objects in the system undergo during their life cycle. Objects in the system change states in response to events. In addition to this, a state diagram also captures the transition of the object's state from an initial state to a final state in response to events affecting the system.

Activity diagram: The process flows in the system are captured in the activity diagram. Similar to a state diagram, an activity diagram also consists of activities, actions, transitions, initial and final states, and guard conditions.

Sequence diagram: A sequence diagram represents the interaction between different objects in the system. The important aspect of a sequence diagram is that it is time-ordered. This means that the exact sequence of the interactions between the objects is represented step by step. Different objects in the sequence diagram interact with each other by passing "messages".

Collaboration diagram: A collaboration diagram groups together the interactions between different objects. The interactions are listed as numbered interactions that help to trace the sequence of the interactions. The collaboration diagram helps to identify all the possible interactions that each object has with other objects.

Component diagram: The component diagram represents the high-level parts that make up the system. This diagram depicts, at a high level, what components form part of the system and how they are interrelated. A component diagram depicts the components culled after the system has undergone the development or construction phase.

Deployment diagram: The deployment diagram captures the configuration of the runtime elements of the application. This diagram is by far the most useful when a system is built and ready to be deployed.

Now that we have an idea of the different UML diagrams, let us see if we can somehow group together these diagrams to enable us to further understand how to use them.

A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool, and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams. Physical data flow diagrams, in contrast, show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon, or Gane and Sarson. Each component in a DFD is labelled with a descriptive name, and each process is further identified with a number used for identification purposes. DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes at the first-level DFD.

The idea behind the explosion of a process into more processes is that understanding at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design.

A DFD, also known as a bubble chart, has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design. It is therefore the starting point of design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.

DFD SYMBOLS:

In the DFD, there are four symbols:

A square defines a source (originator) or destination of system data.
An arrow identifies data flow; it is the pipeline through which the information flows.
A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.
An open rectangle is a data store: data at rest, or a temporary repository of data.

[Figure: the four DFD symbols — process that transforms data flow, source or destination of data, data flow, and data store.]

CONSTRUCTING A DFD:

Several rules of thumb are used in drawing DFDs:

Processes should be named and numbered for easy reference. Each name should be representative of the process.

The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source; an alternative is to repeat the source symbol as a destination. Since it is used more than once in the DFD, it is marked with a short diagonal. When a process is exploded into lower-level details, the sub-processes are numbered. The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of each data store. Each data store should contain all the data elements that flow in and out.

Questionnaires should contain all the data elements that flow in and out. Missing interfaces, redundancies and the like are then accounted for, often through interviews.

SALIENT FEATURES OF DFDs

The DFD shows the flow of data, not of control: loops and decisions are control considerations and do not appear on a DFD.

The DFD does not indicate the time factor involved in any process: whether the data flows take place daily, weekly, monthly or yearly. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS

Current Physical
Current Logical
New Logical
New Physical

CURRENT PHYSICAL:

In a Current Physical DFD, process labels include the names of people or their positions, or the names of the computer systems that might provide some of the overall system processing; the labels include an identification of the technology used to process the data. Similarly, data flows and data stores are often labelled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms or computer tapes.

CURRENT LOGICAL:

The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of actual physical form.

NEW LOGICAL:

This is exactly like the current logical model if the user were completely happy with the functionality of the current system but had problems with how it was implemented. Typically, the new logical model will differ from the current logical model by having additional functions, obsolete functions removed, and inefficient flows reorganized.

NEW PHYSICAL:

The new physical represents only the physical implementation of the new system.

RULES GOVERNING THE DFDS

PROCESS

No process can have only outputs.
No process can have only inputs; if an object has only inputs, then it must be a sink.
A process has a verb-phrase label.

DATA STORE

Data cannot move directly from one data store to another data store; a process must move the data.
Data cannot move directly from an outside source to a data store; a process must receive the data from the source and place it into the data store.
A data store has a noun-phrase label.

SOURCE OR SINK

A source or sink is the origin and/or destination of data.
Data cannot move directly from a source to a sink; it must be moved by a process.
A source and/or sink has a noun-phrase label.

DATA FLOW

A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update, but this is usually indicated by two separate arrows, since the two operations happen at different times.
A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores or sinks to a common location.
A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data to the beginning process.
A data flow to a data store means update (delete or change); a data flow from a data store means retrieve or use.
A data flow has a noun-phrase label. More than one data flow noun phrase can appear on a single arrow, as long as all of the flows on the same arrow move together as one package.

DATA FLOW DIAGRAMS

Data flow diagrams represent the flow of data through a system. A DFD is composed of:

Data movement, shown by tagged arrows.
Transformation or processing of data, shown by named bubbles.
Sources and destinations of data, represented by named rectangles.
Static storage or data at rest, denoted by a named open rectangle.

The DFD is intended to represent information flow, but it is not a flowchart: it is not intended to indicate decision-making, flow of control, loops or other procedural aspects of the system. The DFD is a useful graphical tool applied at the earlier stages of requirements analysis. It may be further refined at the preliminary design stage and used as a mechanism for creating a top-level structural design for the software. The DFD, drawn first at a preliminary level, is then expanded into greater detail: the context diagram is decomposed and represented with multiple bubbles, and each of these bubbles may be decomposed further and documented as more detailed DFDs.

UML Diagram Classification: Static, Dynamic, and Implementation

A software system can be said to have two distinct characteristics: a structural, "static" part and a behavioral, "dynamic" part. In addition to these two characteristics, a software system possesses a third characteristic, related to implementation. Before we categorize the UML diagrams into these three characteristics, let us take a quick look at exactly what they are.

Static: The static characteristic of a system is essentially the structural aspect of the system. The static characteristics define what parts the system is made up of.

Dynamic: The behavioral features of a system; for example, the ways a system behaves in response to certain events or actions are the dynamic characteristics of a system.

Implementation: The implementation characteristic of a system describes the different elements required for deploying the system.

The UML diagrams that fall under each of these categories are:

Static
  Use case diagram
  Class diagram
  Object diagram

Dynamic
  State diagram
  Activity diagram
  Sequence diagram
  Collaboration diagram

Implementation
  Component diagram
  Deployment diagram

DATA FLOW DIAGRAMS

The dataflow diagrams allow you to create and maintain the business functions, data stores, data flows and externals that are stored in the repository. Dataflow diagramming involves the creation of diagrams to show how data flows through your organization. Dataflow diagrams are drawn to represent data dependencies, system components or even the context of a project. Each dataflow diagram represents a single business function for an application system. This function may be a mission statement for an entire organization, or a small series of activities for an isolated part of the organization's business. DFDs show the flow of data from external entities into the system, how the data moves from one process to another, and its logical storage. There are only four symbols:

1) Squares representing external entities, which are sources or destinations of data.

2) Rounded rectangles representing processes, which take data as input, do something to it, and output it.

3) Arrows representing the data flows, which can be either electronic data or physical items.

4) Open-ended rectangles representing data stores, including electronic stores such as databases or XML files, and physical stores such as filing cabinets or stacks of paper.

LEVELS OF ABSTRACTION

Level 0: The highest-level DFD is Level 0. It shows the entire application as a single process surrounded by its data stores, and is sometimes known as the context diagram.

Level 1: It shows the whole application again, but with the main processes, the data flows between them, and their individual links to the data stores.

Level 2: Each process from Level 1 is expanded into its own Level 2 diagram, and then into lower-level diagrams to show further detail. Process no. 1 at Level 1 would be expanded into processes 1.1, 1.2, 1.3, etc. at Level 2. Data stores remain the same at all levels of abstraction, but new stores may be introduced at any level; these are usually temporary stores, such as views and cursors, which are required in lower-level processes.

NORMALIZATION

Normalization is a process that helps analysts or database designers to design table structures for an application. The focus of normalization is to reduce redundant table data to the very minimum. Through the normalization process, the collection of data in a single table is replaced by the same data distributed over multiple tables, with specific relationships set up between the tables. By this process RDBMS schema designers try their best to reduce table data to the very minimum.

Note: redundant data cannot be reduced to zero in any database management system. Moreover, once normalization is applied and the data is spread across several associated tables (i.e. tables with specific relationships established between them), a query takes longer to run when retrieving user data from the set of tables.

Normalization is carried out for the following reasons:

To structure the data between tables so that data maintenance is simplified.
To allow data retrieval at optimal speed.
To simplify data maintenance through updates, inserts and deletes.
To reduce the need to restructure tables as new application requirements arise.
To improve the quality of design for an application by rationalization of table data.

Normalization is a technique that:

Decomposes data into two-dimensional tables.
Eliminates any relationship in which table data does not fully depend upon the primary key of a record.
Eliminates any relationship that contains transitive dependencies.

A description of the three forms of Normalization is as mentioned below.

First Normal Form

When a table is decomposed into two-dimensional tables with all repeating groups of data eliminated, the table data is said to be in its first normal form. The repetitive portion of data belonging to a record is termed a repeating group.

A table is in 1st normal form if:

There are no repeating groups.
All the key attributes are defined.
All attributes are dependent on the primary key.

To convert a table to its First Normal Form:

After the repeating groups are eliminated, the normalized data in the first table is the entire table. A key that will uniquely identify each record should be assigned to the table. This key has to be unique because it should be capable of identifying any specific row of the table when extracting information for use. This key is called the table's primary key.
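As a hedged illustration of this conversion (the booking tables and columns below are hypothetical, not part of the project's documented schema), the following JDBC sketch eliminates a repeating group and assigns a composite primary key:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical illustration of First Normal Form.
// A booking row that repeats passenger columns (a repeating group)
// is replaced by one row per passenger with a proper primary key.
public class FirstNormalFormDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@localhost:1521:XE"; // assumed instance
        try (Connection con = DriverManager.getConnection(url, "cts", "cts");
             Statement st = con.createStatement()) {
            // Unnormalized: PASSENGER1..PASSENGER3 form a repeating group.
            st.execute("CREATE TABLE booking_unf (ticno NUMBER(10), "
                     + "passenger1 VARCHAR2(20), passenger2 VARCHAR2(20), "
                     + "passenger3 VARCHAR2(20))");
            // 1NF: one row per passenger; (ticno, pno) is the primary key.
            st.execute("CREATE TABLE booking_1nf (ticno NUMBER(10), "
                     + "pno NUMBER(2), pname VARCHAR2(20), "
                     + "PRIMARY KEY (ticno, pno))");
        }
    }
}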

Second Normal Form

A table is said to be in its second normal form when each record in the table is in the first normal form and each column in the record is fully dependent on its primary key.
A table is in 2nd normal form if:

It is in 1st normal form.
It includes no partial dependencies (where an attribute is dependent on only a part of a composite primary key).

The steps to convert a table to its Second Normal Form:

Find and remove the fields that are related to only part of the key. Group the removed items in another table. Assign the new table a key, i.e. the part of the whole composite key they depend on.

To convert the table into the second normal form, remove these fields and place them in a separate table, with the key being that part of the original key they are dependent on.
Third Normal Form

Table data is said to be in third normal form when all transitive dependencies are removed from the data. The table is in 3rd normal form if:

It is in 2nd normal form.
It contains no transitive dependencies (where a non-key attribute is dependent on another non-key attribute).

A general case of transitive dependency is as follows. A, B and C are three columns in a table:

If C is related to B, and
B is related to A,
then C is indirectly related to A.

This is when a transitive dependency exists. To convert such data to its third normal form, remove the transitive dependency by splitting the relation into separate relations. This means that the data in columns A, B and C must be placed in separate tables, which are linked using a foreign key. To convert the table into the third normal form, remove these fields and place them in a separate table, with the attribute they were dependent on as its key. The tables are all now in their 3rd normal form, and ready to be implemented. There are other normal forms, such as Boyce-Codd normal form and 4th normal form, but these are very rarely used for business applications. In most cases, tables that are in their 3rd normal form already conform to these forms anyway.
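The project's own tables give a convenient illustration: the reservation table (documented below) stores TNAME, which really depends on the train rather than on the ticket number, so TICNO → train → TNAME forms a transitive chain. The sketch below shows one possible 3NF split; the tno foreign-key column in the reservation table and the *_3nf table names are assumptions for illustration, not the project's actual schema:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hedged sketch of a Third Normal Form split using the project's tables:
// TICNO -> TNO -> TNAME is a transitive dependency, so the reservation
// table keeps only a TNO foreign key (an assumed column) and TNAME
// stays in the train table, where it depends on the train key alone.
public class ThirdNormalFormDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@localhost:1521:XE"; // assumed instance
        try (Connection con = DriverManager.getConnection(url, "cts", "cts");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE train_3nf (tno VARCHAR2(10) PRIMARY KEY, "
                     + "tname VARCHAR2(30))");
            st.execute("CREATE TABLE reservation_3nf (ticno NUMBER(10) PRIMARY KEY, "
                     + "name VARCHAR2(10), "
                     + "tno VARCHAR2(10) REFERENCES train_3nf(tno))");
        }
    }
}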

DATABASE TABLES

The database contains two tables: reservation and train.

Reservation table

Sno   Fieldname   Data Type       Remarks
1     TICNO       NUMBER(10)      Ticketing Number
2     NAME        VARCHAR2(10)    Passenger Name
3     DOB         VARCHAR2(10)    Date Of Birth
4     AGE         VARCHAR2(20)    Age
5     ADDRESS     VARCHAR2(10)    Passenger Address
6     PHNO        VARCHAR2(20)    Phone Number
7     EMAIL       VARCHAR2(20)    Email
8     DOS         VARCHAR2(20)    Date Of Source
9     SCITY       VARCHAR2(20)    Source City
10    DOD         VARCHAR2(20)    Date Of Destination
11    DCITY       VARCHAR2(20)    Destination City
12    TNAME       VARCHAR2(30)    Train Name
13    SEATNO      VARCHAR2(10)    Seat Number
14    BERTH       VARCHAR2(10)    Berth

Train Table

Sno   Fieldname   Data Type      Remarks
1     tno         varchar(10)    Train Number
2     tname       varchar(30)    Train Name
3     fare        varchar(10)    Fare
4     timings     varchar(10)    Timings

E-R Diagrams

The entity relationship diagram is a modelling tool used for defining the information needs of a business as an entity relationship model. Entity relationship modelling involves identifying the things of importance in an organization (entities), the properties of those things (attributes) and how they are related to one another (relationships). The resulting information model is independent of any data storage or access method. Large entity relationship diagrams can be drawn for entire complex systems, or smaller diagrams for subsets of systems; one entity may appear in many detailed subset models.

E-R diagrams show the relationships between entities. An entity is an object or a thing in the real world with an independent existence.

Attributes are descriptive properties possessed by each member of an entity set.

Relationship is an association among several entities.

Testing
Testing is the process of detecting errors. It plays a very critical role in quality assurance and in ensuring the reliability of software. The results of testing are also used later on, during maintenance.

Psychology of Testing
The aim of testing is often taken to be demonstrating that a program works by showing that it has no errors. However, the basic purpose of the testing phase is to detect the errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; the intent should be to show that a program doesn't work. Testing is the process of executing a program with the intent of finding errors.

Testing Objectives
The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stated formally:

Testing is a process of executing a program with the intent of finding an error.
A successful test is one that uncovers an as-yet-undiscovered error.
A good test case is one that has a high probability of finding an error, if it exists.
If the tests are inadequate, errors that are possibly present go undetected.
If no errors are found, the software more or less conforms to quality and reliability standards.

Levels of Testing
In order to uncover the errors present in different phases we have the concept of levels of testing. The basic levels of testing are as shown below

[Figure: levels of testing. Each development phase is paired with the level of testing that validates it:

Client Needs  -  Acceptance Testing
Requirements  -  System Testing
Design        -  Integration Testing
Code          -  Unit Testing]

System Testing
The philosophy behind testing is to find errors. Test cases are devised with this in mind. A strategy employed for system testing is code testing.

Code Testing
This strategy examines the logic of the program. To follow this method we developed test data that resulted in executing every instruction in the program and module, i.e. every path was tested. Systems are not designed as entire systems, nor are they tested as single systems. To ensure that the coding is perfect, two types of testing are performed on all systems.

Types Of Testing
Unit Testing
Link Testing

Unit Testing

Unit testing focuses verification effort on the smallest unit of software, i.e. the module. Using the detailed design and the process specifications, testing is done to uncover errors within the boundary of the module. All modules must be successful in the unit test before integration testing begins. In this project each service can be thought of as a module. There are three basic modules. Each module has been tested by giving different sets of inputs, both while developing the module and after finishing its development, so that each module works without any error. The inputs are validated when accepted from the user. In this application the developer tested the programs individually before assembling them into the system. Software units in a system are the modules and routines that are assembled and integrated to form a specific function. Unit testing is done first on the modules, independent of one another, to locate errors. This enables errors to be detected early, so that errors resulting from the interaction between modules are initially avoided.
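As an illustration of this kind of module-level testing, the sketch below unit-tests an input check with different sets of inputs. The AgeValidator class and its 1-120 rule are hypothetical stand-ins for the project's validation code, and JUnit 4 is an assumed test library:

import org.junit.Assert;
import org.junit.Test;

// Hypothetical validator standing in for the project's input checks:
// an age is accepted only if it is a number from 1 to 120.
class AgeValidator {
    static boolean isValid(String age) {
        try {
            int a = Integer.parseInt(age.trim());
            return a >= 1 && a <= 120;
        } catch (NumberFormatException e) {
            return false; // non-numeric input is rejected
        }
    }
}

// Unit tests exercise the module in isolation with different inputs:
// a typical value, a boundary value and an invalid value.
public class AgeValidatorTest {
    @Test public void acceptsTypicalAge() { Assert.assertTrue(AgeValidator.isValid("45")); }
    @Test public void rejectsZero()       { Assert.assertFalse(AgeValidator.isValid("0")); }
    @Test public void rejectsNonNumeric() { Assert.assertFalse(AgeValidator.isValid("abc")); }
}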

Link Testing
Link testing does not test the software itself, but rather the integration of the modules in the system. The primary concern is the compatibility of each module. The programmer tests cases where modules are designed with different parameters, lengths, types, etc.

Integration Testing
After the unit testing we have to perform integration testing. The goal here is to see if the modules can be integrated properly, the emphasis being on testing the interfaces between modules. This testing activity can be considered as testing the design, hence the emphasis on testing module interactions. In this project, integrating all the modules forms the main system. When integrating all the modules, I checked whether the integration affects the working of any of the services, by giving different combinations of inputs with which the services ran perfectly before integration.

System Testing
Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see if the software meets its requirements. The entire Central Ticketing System has been tested against the requirements of the project, checking whether all the requirements have been satisfied or not.

Acceptance Testing
Acceptance testing is performed with realistic data from the client to demonstrate that the software works satisfactorily. Testing here focuses on the external behavior of the system; the internal logic of the program is not emphasized. For this project I collected some realistic data and tested whether the Central Ticketing System works correctly with it.

Test cases should be selected so that the largest number of attributes of an equivalence class is exercised at once. The testing phase is an important part of software development. It is the process of finding errors and missing operations and also a complete verification to determine whether the objectives are met and the user requirements are satisfied.

Black Box Testing

This test involves the manual evaluation of the flow from one module to the other, checking the process flow accordingly.

DFD DIAGRAMS

DFD Level-0:

[Figure: context diagram. The Administrator and the User are external entities interacting with the single Central Ticketing System process. The Administrator logs in to view, add, update and delete train details and to view passenger details; the User views train details and user details, and reserves or unreserves tickets.]

DFD Level-1:

Administrator:

[Figure: the Administrator is verified (process P1) against the Admin data store (D1), and can then view train details (P2), view passenger details (P3), and update (P4), add (P5) or delete (P6) trains in the Train data store (D2); passenger details come from the Passenger data store (D3).]

User:

[Figure: the User views train details (P1) from the Train data store (D1), views passenger details (P2), and reserves (P3) or unreserves (P4) tickets against the Reservation data store (D2).]

E-R DIAGRAMS

[Figure: entity-relationship diagram. The Administrator (uname, password) maintains the Train entity (fares, timings, scity, dcity, sdate, ddate); the User (name, age) books a Ticket (ticno, source, destination, fares, sdate, ddate) for a Train.]

UML DIAGRAMS

CLASS DIAGRAM:

[Figure: class diagram. Administrator (name: char, password: char; login(), view(), add(), delete(), update()); User (name: char, age: number, seat no: number, berth: char; view(), reserve(), unreserve()); Train (tno: char, tname: char, timings: char, fares: number; reserve(), unreserve()); Ticket (ticno: char, source: char, destination: char, fares: number, date: char; reserve(), unreserve()). A User is associated with Trains (1..* to 1) and with Tickets.]
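Read as Java, the class diagram corresponds to skeletons like the following; the method bodies are stubs, and the field types are inferred from the diagram's char/number annotations:

// Skeletons inferred from the class diagram; bodies are stubs.
class Administrator {
    String name;
    String password;
    void login()  { /* verify credentials */ }
    void view()   { /* view train or passenger details */ }
    void add()    { /* add a train */ }
    void delete() { /* delete a train */ }
    void update() { /* update a train */ }
}

class User {
    String name;
    int age;
    int seatNo;
    String berth;
    void view()      { /* view train details */ }
    void reserve()   { /* reserve a ticket */ }
    void unreserve() { /* cancel a reservation */ }
}

class Train {
    String tno;
    String tname;
    String timings;
    int fares;
    // reserve()/unreserve() operate on this train's seats
    void reserve()   { }
    void unreserve() { }
}

class Ticket {
    String ticno;
    String source;
    String destination;
    int fares;
    String date;
    void reserve()   { }
    void unreserve() { }
}

In the implementation, these skeletons would be fleshed out with JDBC calls like those sketched in the earlier sections.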
