
CONTENTS

1. PROBLEM DEFINITION
2. OBJECTIVE
3. DATA BASE CONCEPT
4. ORACLE DATA BASE
5. ENVIRONMENT AND TOOLS USED
   VC++
   ODBC
6. IMPLEMENTATION & RESULT

PROBLEM DEFINITION
The Network Management System (NMS) for the Satellite Communication Network in the proposed multi-transponder, multi-service (voice, fax and data), SCPC (Single Channel Per Carrier) DAMA (Demand Assigned Multiple Access) hybrid network will monitor, control and manage the satellite resources in the C and S bands optimally. The circuit-switched network will allocate satellite resources quickly and transparently to users on a call-by-call basis. After use, resources are immediately returned to the central pool for re-use. This ability to share voice and data trunks rests on the assumption that not all users will require simultaneous access to the communication channels. Thus, by using DAMA, many nodes (satellite terminals) can be served using only a fraction of the satellite resources required by a network operating in preassigned mode.

The problem given was the creation of the database, which is one of the major components of the NMS, and the development of an application program to interface with the database, so that records can be added, edited and deleted and information can be provided to the other components of the NMS at run time. Records should be retrievable efficiently by passing search criteria, so that information is offered to the NMS as fast as possible.

Whichever option the user chooses, a form related to it should open. After the user has made the appropriate entries, the action should be reflected in the table: if the user wants to insert a new record, he can click the Add button on the menu form and the insertion form should open; when he clicks the Submit button on the form, the record should be inserted into the table. Similarly, when the user clicks the Delete button, he should be asked which record he wants to delete, and that specific record should then be deleted from the table. Likewise for Edit, the edited record should be updated in the table.

OBJECTIVE
The objective of the assignment is to create the database for the NMS, which requires various information such as satellite parameters, modems, ports, terminals, port health and resources. A calls logbook is also needed so that various statistics and billing reports can be generated by the NMS. These tables had already been created in a Microsoft Access database on the Windows OS. The RDBMS (Relational Database Management System) to be used should be multi-user, client-server based, powerful, fast, and able to handle even large volumes of data efficiently.

DATA BASE CONCEPT

A database may be defined as a collection of interrelated data stored together without harmful or unnecessary redundancy to serve multiple applications. The data are stored so that they are independent of the programs which use them, and a common, controlled approach is used in adding new data and in modifying and retrieving existing data within the database. The data are structured so as to provide a foundation for future application development. One system is said to contain a collection of databases if they are entirely separate in structure.

A database is an integrated collection of automated data files related to one another in support of a common purpose. A data element is a place in a file used to store an item of information that is uniquely identifiable by its purpose and contents. A data value is the information stored in a data element. A file is a set of records where the records have the same data elements in the same format. A data element dictionary is a table of data elements including at least the names, data types, and lengths of the data elements in the subject database. A schema is the expression of the database in terms of the files it stores, the data elements in each file, the key data elements used for record identification, and the relationships between files. A primary key is the data element used to uniquely describe and locate a desired record in a file; the key can be a combination of more than one data element.

The main events that occur when an application program reads a record through the database management system are as follows (other events may also occur, depending on the details of the software):
1) Application program A issues a call to the database management system to read a record. The program states the programmer's name for the data type and gives the value of the key of the segment or record in question.
2) The database management system obtains the subschema (program data description) used by application program A and looks up the description of the data in question.
3) The database management system obtains the schema (global logical data description) and determines which logical data type or types are needed.
4) The database management system examines the database description and determines which physical record or records are to be read.

5) The database management system issues a command to the computer operating system, instructing it to read the requisite records.
6) The operating system interacts with the physical storage where the data are kept.
7) The required data are transferred between the storage and the system buffers.
8) Comparing the subschema and the schema, the database management system derives from the data the logical record needed by the application program. Any data transformations between the data as declared in the subschema and the data as declared in the schema are made by the database management system.
9) The database management system transfers the data from the system buffers to the work area of application program A.
10) The database management system provides status information to the application program on the outcome of its call, including any error indications.
11) The application program can then operate on the data in its work area.

Fig. 1-1: The sequence of events when an application program reads a record using a database management system.

The design of a relational database involves nine steps taken, for the most part, in succession:

i) Identify the basis for the database requirements. To design a database, there must be a mission and a purpose. To automate something, the functional and performance requirements for the database must be defined, and the definition of these requirements should proceed from an understanding of the functions to be supported. The basis for a database specifies the problem that is to be solved, the resources available for the solution, and some likely approaches to the solution.
Starting from an existing solution: if the user is already using a computer to solve the problem, that is a good place to start; an existing system is an excellent basis for a database design.
Starting from scratch: if a user comes with the idea of initiating a completely new procedure and solving a brand-new problem, look around for an existing solution to a similar problem. If none is found, start from scratch: the best approach is to develop some manual procedures, use them for a while, and allow them to influence the basis for the new database.
Write down the basis: once the basis for the database exists, write it down. Make it understandable to everyone who understands the problem. Keep the written information around while the system is being developed, and update it as the basis for the system changes.

ii) Define the database functional and performance requirements. This next step is a refinement of the first. Having defined the basis for the database, the functional and performance requirements must now be addressed. Functional requirements specify the kind of data the database will contain; in these requirements, everything about the function being supported should be documented, and the specific pieces of information the database must know about should be identified. Performance requirements specify frequencies, speeds, quantities, and sizes: how often, how fast, how many, and how big the database must support. The statement of a database requirement should be clear and unambiguous. It should address one specific aspect of the functions or performance of the system. Each statement should stand alone and completely define the requirement, and it should be worded so that users and programmers alike can understand it.

iii) Identify the data items. After writing down the requirements for the database, the next step is to begin translating those requirements into identifiable elements of data that are suitable for automation. To identify data items, rummage around in the work already done, looking for potential data items. From the basis and requirements, extract any reference to anything that looks like a data item, and write its name, and anything else known about the item, onto a 3x5 card. A good starting point is to pull the nouns out of the work: each noun is a potential data item.

iv) Separate the data items from the files. This step involves designer judgment, intuition, and guesswork. Look at the data items collected: which of them seem to be individual data elements, and which seem more like logically organized aggregates of data elements? Shuffle them up, sort them out, and move them around; the 3x5 card method works well here. It should become obvious which items are elements and which are not.

v) Build the data element dictionary. Collect everything that is known, and that can be determined, about each data element. At the very least, its size and data type must be known. If the design is based on an existing system, that system's documentation can contribute to the dictionary. When automating a manual system, look at the entry forms (time cards, posting ledgers and so on) to see how the data elements are used. Sometimes these forms have accompanying procedures; use those procedures to find the descriptions of the manual entries, thus learning the requirements for the data elements. The point to emphasize here is that the physical properties of all the data elements that will be in the database must be clearly defined. Get as much of this definition process done as possible before proceeding with further design.

vi) Gather data elements into files. When the data elements were separated from the aggregates, the aggregates were left over. Those aggregates must be dealt with, and chances are they are going to be files. Look at the requirements to see which of these items should be files in the database. None of the requirements calls for the permanent retention of specific schedule records; the information that might be used to report a schedule condition is information that would be recorded about a project or an assignment.

vii) Identify the retrieval characteristics of each file. Once the files are laid out, the method of retrieval required by each must be specified. There will be several alternatives, and a design might consist of any combination of them. The purpose of identifying the retrieval requirements of the database is to decide which keys should be primary index values, which should be secondary index values, and which files should be related.
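Step vii can be sketched in miniature. The following example is illustrative only: the table and column names are invented, and SQLite (via Python's sqlite3 module) stands in for the Access/Oracle databases used in the project. A primary key gives the main retrieval path, while a secondary index supports an alternative one:

```python
import sqlite3

# In-memory database; schema is hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Primary key: the unique retrieval path for a terminal record.
cur.execute("""CREATE TABLE terminal (
    terminal_id INTEGER PRIMARY KEY,
    site_name   TEXT NOT NULL,
    band        TEXT NOT NULL  -- 'C' or 'S'
)""")

# Secondary index: supports frequent lookups by site name.
cur.execute("CREATE INDEX idx_terminal_site ON terminal(site_name)")

cur.executemany("INSERT INTO terminal VALUES (?, ?, ?)",
                [(1, "DELHI", "C"), (2, "CHENNAI", "S")])

# Retrieval by the primary key path.
row = cur.execute("SELECT site_name FROM terminal WHERE terminal_id = 2").fetchone()
print(row[0])
```

The same SELECT by site_name would be served by the secondary index rather than a full scan, which is the design decision step vii asks for.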

viii) Identify the relationships between files.

ix) Develop the schema for the DBMS being used.

There is always a tenth step, which is to reiterate the first nine. Let the solution to the problem modify the problem, and let each successive solution enhance the understanding of the problem. As it does, retrace the steps through the design process and change the results. This step is called refinement, and it is often left out.

Primary objectives of database organization:
The database is the foundation stone of future application development: it should make application development easier, cheaper, faster, and more flexible.
The data can have multiple uses: different users who perceive the same data differently can employ them in different ways.
Clarity: users can easily know and understand what data are available to them.
Ease of use: users can gain access to data in a simple fashion; complexity is hidden from the user by the database management system.
Flexible usage: the data can be used or searched in flexible ways, with different access paths.
Low cost: low cost of storing and using data, and minimization of the high cost of making changes.
Other objectives include accuracy and consistency, privacy, protection from loss and damage, and performance.

SQL - THE RELATIONAL DATABASE STANDARD

The SQL language provides a high-level declarative language interface: the user only specifies what the result is to be, leaving the actual optimization and the decisions on how to execute the query to the DBMS. SQL includes some features from relational algebra. The name SQL is derived from Structured Query Language. Originally SQL was called SEQUEL (Structured English QUEry Language) and was designed and implemented at IBM Research as the interface for an experimental relational database system called SYSTEM R. SQL is a comprehensive database language; it has statements for data definition, query, and update. Hence, it is both a DDL and a DML. In addition, it has facilities for defining views on the database, for specifying security and authorization, for defining integrity constraints, and for specifying transaction controls. It also has rules for embedding SQL statements in a general-purpose programming language such as C or Pascal.

An SQL schema is identified by a schema name and includes an authorization identifier to indicate the user or account who owns the schema, as well as descriptors for each element in the schema. Schema elements include the tables, constraints, views, domains, and other constructs that describe the schema. A schema is created via the CREATE SCHEMA statement, which can include all the schema element definitions.

THE CREATE TABLE command
It is used to specify a new relation by giving it a name and specifying its attributes and constraints. The attributes are specified first; each attribute is given a name and a data type to specify its domain of values, along with any attribute constraints such as NOT NULL. The key, entity integrity, and referential integrity constraints can be specified within the CREATE TABLE statement, after the attributes are declared:
CREATE TABLE COMPANY.EMPLOYEE ...

DROP SCHEMA and DROP TABLE commands
If a whole schema is no longer needed, the DROP SCHEMA command can be used. There are two drop behavior options: CASCADE and RESTRICT.
DROP SCHEMA COMPANY CASCADE;
DROP TABLE DEPENDENT CASCADE;
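A minimal sketch of CREATE TABLE and DROP TABLE, with the caveat that the schema is invented and SQLite (via Python's sqlite3) stands in for Oracle here; SQLite has no DROP SCHEMA, so only the table-level commands are shown:

```python
import sqlite3

# Illustrative only: table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE TABLE with a primary key and NOT NULL attribute constraints.
cur.execute("""CREATE TABLE employee (
    ssn    TEXT PRIMARY KEY,
    fname  TEXT NOT NULL,
    lname  TEXT NOT NULL,
    salary REAL
)""")

cur.execute("INSERT INTO employee VALUES ('123456789', 'John', 'Smith', 30000)")
print(cur.execute("SELECT COUNT(*) FROM employee").fetchone()[0])  # 1

# DROP TABLE removes the relation and its data.
cur.execute("DROP TABLE employee")
```

Inserting a row with a NULL fname would raise an integrity error, which is exactly what the NOT NULL constraint is meant to enforce.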

AGGREGATE FUNCTIONS and GROUPING
Grouping and aggregation are used in many database applications, and SQL has features that incorporate these concepts. The first of these is a number of built-in functions: COUNT, SUM, MAX, MIN, and AVG. The COUNT function returns the number of tuples or values specified in a query. The functions SUM, MAX, MIN, and AVG are applied to a set or multiset of numeric values and return, respectively, the sum, the maximum value, the minimum value, and the average of those values.
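These functions can be sketched against a hypothetical call logbook, loosely echoing the CallsLogbook mentioned in the objective; the table name, columns, and figures are invented, and SQLite stands in for the project's database:

```python
import sqlite3

# Invented call-logbook data, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE calls (terminal TEXT, duration_s INTEGER)")
cur.executemany("INSERT INTO calls VALUES (?, ?)",
                [("T1", 60), ("T1", 120), ("T2", 30)])

# COUNT, SUM and AVG combined with GROUP BY: one result row per terminal.
rows = cur.execute("""SELECT terminal, COUNT(*), SUM(duration_s), AVG(duration_s)
                      FROM calls GROUP BY terminal ORDER BY terminal""").fetchall()
for r in rows:
    print(r)
# ('T1', 2, 180, 90.0)
# ('T2', 1, 30, 30.0)
```

Such a query is the kind of statistic the NMS billing component would derive from the logbook.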

The ALTER TABLE command
The definition of a base table can be changed using the ALTER TABLE command, which is a schema evolution command. The possible ALTER TABLE actions include adding or dropping a column (attribute), changing a column definition, and adding or dropping table constraints.
ALTER TABLE COMPANY.EMPLOYEE ADD JOB VARCHAR(12);
ALTER TABLE COMPANY.EMPLOYEE DROP ADDRESS CASCADE;

INSERT, DELETE and UPDATE statements

The INSERT command
It is used to add a single tuple to a relation. The values should be listed in the same order in which the corresponding attributes were specified in the CREATE TABLE command.
INSERT INTO EMPLOYEE VALUES ('Richards', 'K', 'Marini', '6523435');

The DELETE command
It removes tuples from a relation. It includes a WHERE clause, similar to that used in an SQL query, to select the tuples to be deleted. Tuples are explicitly deleted from one table at a time.
DELETE FROM EMPLOYEE WHERE LNAME = 'Brown';

The UPDATE command
The UPDATE command is used to modify the attribute values of one or more selected tuples. As in the DELETE command, a WHERE clause in the UPDATE command selects the tuples to be modified from a single relation. However, updating a primary key value may propagate to the foreign key values of tuples in other relations if such a referential triggered action is specified in the referential integrity constraints of the DDL.
UPDATE PROJECT SET PLOCATION = 'Bellaire', DNUM = 5 WHERE PNUMBER = 10;

Specification of views in SQL
The command to specify a view is CREATE VIEW. The view is given a (virtual) table name (or view name), a list of attribute names, and a query to specify the contents of the view. If none of the view attributes results from applying functions or arithmetic operations, there is no need to specify attribute names for the view, as they would be the same as the names of the attributes of the defining tables. A view update is feasible only when exactly one possible update on the base relations can accomplish the desired update effect on the view. Whenever an update on the view can be mapped to more than one update on the underlying base relations, there must be a procedure for choosing the desired update effect.

UPDATE DEPT_INFO SET TOTAL_SAL = 1000 WHERE DNAME = 'Research';

CREATE VIEW WORKS_ON1 AS
    SELECT FNAME, LNAME, PNAME
    FROM EMPLOYEE, PROJECT
    WHERE SSN = ESSN;
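The DML statements and a simple view can be exercised end to end in a small sketch. The schema and values are hypothetical, and SQLite (via Python's sqlite3) again stands in for the project's database:

```python
import sqlite3

# Hypothetical employee table, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (fname TEXT, lname TEXT, phone TEXT)")

# INSERT: values listed in the order the columns were declared.
cur.execute("INSERT INTO employee VALUES ('Richards', 'K', '6523435')")
cur.execute("INSERT INTO employee VALUES ('Alice', 'Brown', '5550000')")

# UPDATE: the WHERE clause selects the tuples to modify.
cur.execute("UPDATE employee SET phone = '801 232-5780' WHERE lname = 'K'")

# DELETE: removes the tuples matched by the WHERE clause.
cur.execute("DELETE FROM employee WHERE lname = 'Brown'")

# A view is a stored query: it holds no data of its own and reflects
# whatever is currently in its base table.
cur.execute("CREATE VIEW emp_names AS SELECT fname, lname FROM employee")
print(cur.execute("SELECT * FROM emp_names").fetchall())
```

Querying the view after the DELETE shows only the remaining base-table row, which illustrates why view contents track the base tables rather than being stored separately.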

ORACLE DATABASE

An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to solving the problems of information management. In general, a server reliably manages a large amount of data in a multi-user environment so that many users can concurrently access the same data, all while delivering high performance. A database server also prevents unauthorized access and provides efficient solutions for failure recovery. The database has logical structures and physical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting access to the logical storage structures.

Logical Database Structures
The logical structures of an Oracle database include schema objects, data blocks, extents, segments, and tablespaces.

Schemas and Schema Objects
A schema is a collection of database objects. A schema is owned by a database user and has the same name as that user. Schema objects are the logical structures that directly refer to the database's data; they include structures such as tables, views, and indexes. (There is no relationship between a tablespace and a schema: objects in the same schema can be in different tablespaces, and a tablespace can hold objects from different schemas.) Some of the most common schema objects are defined in the following sections.

Tables
Tables are the basic unit of data storage in an Oracle database. Database tables hold all user-accessible data. Each table has columns and rows. Oracle stores each row of a database table containing data for fewer than 256 columns as one or more row pieces. A table holding employee data, for example, can have a column called employee number, and each row in that column holds an employee's number.

Views
Views are customized presentations of the data in one or more tables or other views. A view can also be considered a stored query. Views do not actually contain data; rather, they derive their data from the tables on which they are based, referred to as the base tables of the view. Like tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on a view actually affect the base tables of the view. Views provide an additional level of table security by restricting access to a predetermined set of rows and columns of a table. They also hide data complexity and store complex queries.

Indexes
Indexes are optional structures associated with tables. Indexes can be created to increase the performance of data retrieval. Just as the index in a manual helps one quickly locate specific information, an Oracle index provides a faster access path to table data. When processing a request, Oracle can use some or all of the available indexes to locate the requested rows efficiently. Indexes are useful when applications frequently query a table for a range of rows (for example, all employees with a salary greater than 1000 dollars) or for a specific row. Indexes are created on one or more columns of a table. After it is created, an index is automatically maintained and used by Oracle. Changes to table data (such as adding new rows, updating rows, or deleting rows) are automatically incorporated into all relevant indexes with complete transparency to the users. Indexes can also be partitioned.

Clusters
Clusters are groups of one or more tables physically stored together because they share common columns and are often used together. Because related rows are physically stored together, disk access time improves. Like indexes, clusters do not affect application design: whether or not a table is part of a cluster is transparent to users and to applications, and data stored in a clustered table is accessed in the same way as data stored in a nonclustered table.

Data Blocks, Extents, and Segments
The logical storage structures, including data blocks, extents, and segments, enable Oracle to have fine-grained control of disk space use.

Oracle Data Blocks
At the finest level of granularity, Oracle database data is stored in data blocks. One data block corresponds to a specific number of bytes of physical database space on disk. The standard block size is specified by the initialization parameter DB_BLOCK_SIZE; in addition, up to five other block sizes can be specified. A database uses and allocates free database space in Oracle data blocks.
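As an illustrative sketch only (the value is an assumption, not a figure from this project), the standard block size might be set in an Oracle initialization parameter file like this:

```
# init.ora fragment (illustrative): sets the standard data block size to 8 KB
db_block_size = 8192
```

The parameter is fixed at database creation time; changing it later requires rebuilding the database.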

Extents
The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks, obtained in a single allocation, used to store a specific type of information.

Segments
Above extents, the next level of logical database storage is a segment. A segment is a set of extents allocated for a certain logical structure. The types of segments are:
Data segment: Each nonclustered table has a data segment, and all of the table's data is stored in the extents of that data segment. For a partitioned table, each partition has its own data segment. Each cluster also has a data segment; the data of every table in the cluster is stored in the cluster's data segment.
Index segment: Each index has an index segment that stores all of its data. For a partitioned index, each partition has its own index segment.
Temporary segment: Temporary segments are created by Oracle when a SQL statement needs a temporary work area to complete execution. When the statement finishes execution, the extents in the temporary segment are returned to the system for future use.
Rollback segment: If the database operates in automatic undo management mode, the database server manages undo space using tablespaces; Oracle Corporation recommends automatic undo management. In manual undo management mode, however, the database administrator creates one or more rollback segments to temporarily store undo information. The information in a rollback segment is used during database recovery: 1) to generate read-consistent database information, and 2) to roll back uncommitted transactions for users.

Oracle dynamically allocates space when the existing extents of a segment become full: when the extents of a segment are full, Oracle allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk.

Tablespaces
A database is divided into logical storage units called tablespaces, which group related logical structures together. For example, tablespaces commonly group together all of an application's objects to simplify some administrative operations.

Databases, Tablespaces, and Data Files
The relationship between databases, tablespaces, and data files (data files are described in the next section) is illustrated in Figure 1-2, which shows the following:
1) Each database is logically divided into one or more tablespaces.
2) One or more data files are explicitly created for each tablespace to physically store the data of all logical structures in that tablespace.
3) The combined size of the data files in a tablespace is the total storage capacity of that tablespace (the SYSTEM tablespace has 2 megabytes (MB) of storage capacity, and the USERS tablespace has 4 MB).
4) The combined storage capacity of a database's tablespaces is the total storage capacity of the database (6 MB).

Online and Offline Tablespaces
A tablespace can be online (accessible) or offline (not accessible). A tablespace is generally online so that users can access its information. However, a tablespace is sometimes taken offline to make a portion of the database unavailable while allowing normal access to the remainder of the database. This makes many administrative tasks easier to perform.

[Figure: a DATABASE with data files DATA1.ORA (1 MB) and DATA2.ORA (1 MB) in the SYSTEM tablespace, and DATA3.ORA (4 MB) in the USERS tablespace.]

Fig. No. 1-2: Databases, tablespaces, and data files.

Data Files
Every Oracle database has one or more physical data files. The data files contain the database's data: the data of logical database structures, such as tables and indexes, is physically stored in the data files allocated for the database. The characteristics of data files are:
1) A data file can be associated with only one database.
2) Data files can have certain characteristics set to let them extend automatically when the database runs out of space.
3) One or more data files form a logical unit of database storage called a tablespace, as discussed earlier in this chapter.

Data in a data file is read, as needed, during normal database operation and stored in the memory cache of Oracle. For example, assume that a user wants to access some data in a table of a database. If the requested information is not already in memory cache for the database, then it is read from the appropriate data files and stored in memory.

Introduction to Data Blocks, Extents, and Segments
Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks, extents, and segments. At the finest level of granularity, Oracle stores data in data blocks (also called logical blocks, Oracle blocks, or pages). One data block corresponds to a specific number of bytes of physical database space on disk. The next level of logical database space is an extent: a specific number of contiguous data blocks allocated for storing a specific type of information. The level of logical database storage above an extent is called a segment: a set of extents, each of which has been allocated for a specific data structure and all of which are stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. If the table or index is partitioned, each partition is stored in its own segment.

Oracle allocates space for segments in units of one extent. When the existing extents of a segment are full, Oracle allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk. A segment and all its extents are stored in one tablespace. Within a tablespace, a segment can include extents from more than one file; that is, the segment can span data files. However, each extent can contain data from only one data file.

Although additional extents can be allocated, the blocks themselves are allocated separately. If an extent is allocated to a specific instance, its blocks are immediately allocated to the free list. However, if the extent is not allocated to a specific instance, the blocks are allocated only when the high water mark moves.

ENVIRONMENT and TOOLS USED

Visual C++
The Visual C++ package comprises many separate pieces, such as editors, a compiler, a linker, a make utility, a debugger, and various other tools designed for the task of developing C/C++ programs for Microsoft Windows. The package also includes a development environment named Developer Studio, which ties all the other Visual C++ tools together into an integrated whole, letting you view and control the entire development process through a consistent system of windows, dialogs, menus, toolbars, shortcut keys, and macros. To use an analogy, the environment is like a control room with monitors, dials, and levers from which a single person can operate the machinery of a sprawling factory. The environment is roughly everything seen in Visual C++; everything runs behind the scenes under its management. The services provided by Visual C++ include:
- Windows that provide views of different aspects of the development process, from lists of classes and source files to compiler messages.
- Menu access to an extensive system of online help.
- A text editor for creating and maintaining source files, an intelligent dialog editor for designing dialog boxes, and a graphics editor for creating other interface elements such as bitmaps, icons, mouse cursors, and toolbars.
- Wizards that create starter files for a program, giving a head start on the mundane task of setting up a new project. Visual C++ provides wizards for various types of Windows programs, including standard applications with optional database and automation support, dynamic link libraries, dialog-based applications, extensions for a Web server using the Internet Server API (ISAPI), and ActiveX controls.
- ClassWizard, an assistant that helps create and maintain classes for MFC applications.
- An excellent debugger.
- Drop-in executable components, maintained by the Gallery, that add instant features to a program.
- Logical and convenient access to commands through menus and toolbars; existing menus and toolbars can be customized, and new ones created.
- The ability to add your own environment tools through macros and add-in dynamic link libraries.
Visual C++ displays information about a project in the Workspace and Output dockable windows.

The ODBC Standard
The Microsoft Open Database Connectivity (ODBC) standard defines not only the rules of SQL grammar but also the C-language programming interface to any SQL database. It is now possible for a single compiled C or C++ program to access any DBMS that has an ODBC driver. The ODBC Software Development Kit (SDK), included with Visual C++, contains 32-bit drivers for DBF files, Microsoft Access MDB databases, Microsoft Excel XLS files, Microsoft FoxPro files, ASCII text files, Microsoft SQL Server databases, and Oracle. Other database companies, including Oracle, Informix, Progress, Ingres, and Centura Software, provide ODBC drivers for their own DBMSs. If you develop an MFC program with the dBASE/Xbase driver, for example, you can run the same program with the Access database driver.
No recompilation is necessary; the program simply loads a different DLL. Not only can C++ programs use ODBC, but other DBMS programming environments can also take advantage of this standard. You can write a C++ program to update a SQL Server database, and then use an off-the-shelf ODBC-compatible report writer to format and print the data. ODBC thus separates the user interface from the actual database-management process; you no longer have to buy your interface tools from the same company that supplies the database engine. Some people have criticized ODBC because it doesn't let programmers take advantage of the special features of particular DBMSs. Well, that's the whole point! Programmers need to learn only one application programming interface (API), and they can choose their software components based on price, performance, and support. No longer will developers be locked into buying all their tools from their database suppliers.

The ODBC Architecture

ODBC's unique DLL-based architecture makes the system fully modular. A small top-level DLL, ODBC32.DLL, defines the API. ODBC32.DLL loads database-specific DLLs, known as drivers, during program execution. With the help of the Windows Registry (maintained by the ODBC Administrator module in the Windows Control Panel), ODBC32.DLL tracks which database-specific DLLs are available and thus allows a single program to access data in several DBMSs simultaneously. A program could, for example, keep some local tables in DBF format and use other tables controlled by a database server. Figure 1-3 shows the 32-bit ODBC DLL hierarchy. Note from this figure that many standard database formats can be accessed through the Microsoft Access Jet database engine, a redistributable module packaged with Visual C++. When accessing a DBF file through the Jet engine, for example, we are using the same code that Microsoft Access uses.

ODBC SDK Programming

When programming directly at the ODBC C-language API level, one must know about three important ODBC elements: the environment, the connection, and the statement. All three are accessed through handles. First we need an environment that establishes the link between our program and the ODBC system. An application usually has only one environment handle. Next we need one or more connections. A connection references a specific driver and data source combination. We might have several connections to subdirectories that contain DBF files, and we might have connections to several SQL servers on the same network. A specific ODBC connection can be hardwired into a program, or the user can be allowed to choose from a list of available drivers and data sources.
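The environment, connection and statement handles form an ownership tree. The following sketch is a plain C++ toy model of that hierarchy only; the `Environment`, `Connection` and `Statement` types are illustrative stand-ins for the real SQLHENV, SQLHDBC and SQLHSTMT handles and no actual ODBC calls are made.

```cpp
#include <string>
#include <vector>

// Toy model of the ODBC handle hierarchy (illustrative only): one
// environment per application, one or more connections per
// environment, one or more statements per connection.
struct Statement {
    std::string sql;                  // text to execute on this "handle"
};

struct Connection {
    std::string dataSource;           // a driver + data source combination
    std::vector<Statement> statements;
};

struct Environment {
    std::vector<Connection> connections;
};

// Builds the situation described above: local DBF files plus a SQL
// server on the network, with several active statements on one link.
inline Environment buildDemoEnvironment() {
    Environment env;                                   // usually one per app
    env.connections.push_back({"DBF files", {}});
    env.connections.push_back({"SQL Server", {}});
    Connection& srv = env.connections.back();
    srv.statements.push_back({"SELECT FNAME, LNAME FROM AUTHORS"});
    srv.statements.push_back({"UPDATE AUTHORS SET PHONE = '...'"});
    return env;
}
```

A real program would obtain each of these with the ODBC allocation functions and free them in reverse order; the toy model captures only who owns what.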

Figure 1-3. 32-bit ODBC architecture.

ODBC32.DLL has a built-in Windows dialog box that lists the connections that are defined in the Registry (under HKEY_LOCAL_MACHINE\SOFTWARE\ODBC). Once we have a connection, we need a SQL statement to execute. The statement might be a query, such as this:

SELECT FNAME, LNAME, CITY FROM AUTHORS WHERE STATE = 'UT' ORDER BY LNAME

Or the statement could be an update statement, such as this:

UPDATE AUTHORS SET PHONE = '801 232-5780' WHERE ID = '357-86-4343'

Because query statements need a program loop to process the returned rows, our program might need several statements active at the same time. Many ODBC drivers allow multiple active statement handles per connection.
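To make that row-processing loop concrete, here is a small standard C++ stand-in for the SELECT statement above, run against a hypothetical in-memory AUTHORS table. A real ODBC program would execute the statement and fetch row by row in the same kind of loop; the `Author` struct and `selectUtahAuthors` function are invented for illustration.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical in-memory stand-in for the AUTHORS table.
struct Author {
    std::string fname, lname, city, state;
};

// In-memory equivalent of:
//   SELECT FNAME, LNAME, CITY FROM AUTHORS
//   WHERE STATE = 'UT' ORDER BY LNAME
std::vector<Author> selectUtahAuthors(const std::vector<Author>& authors) {
    std::vector<Author> rows;
    for (const Author& a : authors)          // the row-processing loop
        if (a.state == "UT")                 // WHERE STATE = 'UT'
            rows.push_back(a);
    std::sort(rows.begin(), rows.end(),      // ORDER BY LNAME
              [](const Author& x, const Author& y) { return x.lname < y.lname; });
    return rows;
}
```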

Look again at the SQL statement above. Suppose there were 10 authors in Utah. ODBC lets us define the query result as a block of data, called a rowset, which is associated with a SQL statement. Through the ODBC SDK function SQLExtendedFetch, a program can move forward and backward through the 10 selected records by means of an ODBC cursor. This cursor is a programmable pointer into the rowset. What if, in a multiuser situation, another program modified (or deleted) a Utah author record while our program was stepping through the rowset? With an ODBC Level 2 driver, the rowset would probably be dynamic, and ODBC could update the rowset whenever the database changed. A dynamic rowset is called a dynaset. The Jet engine supports ODBC Level 2, and thus it supports dynasets. Visual C++ includes the ODBC cursor library module ODBCCR32.DLL, which supports static rowsets (called snapshots) for Level 1 drivers. With a snapshot, a SELECT statement causes ODBC to make what amounts to a local copy of the 10 author records and build an in-memory list of pointers to those records. These records are guaranteed not to change once you've scrolled through them; in a multiuser situation, you might need to requery the database periodically to rebuild the snapshot.

The MFC ODBC Classes: CRecordset and CDatabase

With the MFC classes for Windows, we use C++ objects instead of window handles and device context handles; with the MFC ODBC classes, we use objects instead of connection handles and statement handles. The environment handle is stored in a global variable and is not represented by a C++ object. The two principal ODBC classes are CDatabase and CRecordset. Objects of class CDatabase represent ODBC connections to data sources, and objects of class CRecordset represent scrollable rowsets. The Visual C++ documentation uses the term "recordset" instead of "rowset" to be consistent with Microsoft Visual Basic and Microsoft Access.
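The snapshot behaviour described above can be sketched with a toy scrollable cursor in plain C++. The class and method names here are illustrative, not ODBC or MFC API names; the point is that the rowset is copied once when the query runs, so later changes to the source table are not seen.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy scrollable cursor over a static rowset (a "snapshot"): the rows
// are copied once at query time; a dynaset, by contrast, would track
// subsequent changes to the underlying records.
class SnapshotCursor {
    std::vector<std::string> rows_;   // local copy of the selected records
    std::size_t pos_ = 0;             // the "programmable pointer"
public:
    explicit SnapshotCursor(const std::vector<std::string>& selected)
        : rows_(selected) {}          // copying is what makes it a snapshot
    bool moveNext() {                 // scroll forward through the rowset
        if (pos_ + 1 >= rows_.size()) return false;
        ++pos_;
        return true;
    }
    bool movePrev() {                 // scroll backward through the rowset
        if (pos_ == 0) return false;
        --pos_;
        return true;
    }
    const std::string& current() const { return rows_[pos_]; }
};
```

Requerying the database would simply mean constructing a fresh cursor from the current table contents, which is the "rebuild the snapshot" step mentioned above.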
We seldom derive classes from CDatabase, but we generally derive classes from CRecordset to match the columns in database tables.

Figure 1-4. MFC ODBC class database relationships.

IMPLEMENTATION

The first step was the creation of a database using the Oracle Enterprise Manager Console, which provides facilities to administer the complete Oracle environment, including databases, iAS servers, applications and services. It can be used to:

- Diagnose, modify, and tune multiple databases.
- Schedule tasks on multiple systems, at varying time intervals.
- Monitor database conditions throughout the network.
- Administer multiple network nodes and services from many locations.
- Share tasks with other administrators.
- Group related services together to facilitate administration tasks.
- Launch integrated Oracle and third-party tools.

Once the database is created, tables are created using the Table Wizard.

Now the table is created by giving the table name and choosing the schema that the table is to be part of and the tablespace to create the table in.

Next the columns are defined: each column name is declared together with its data type and size, and the column is added to the table. In the next step, presented as a dialog box, the primary key, if any, is defined. In our case there is no primary key, so the corresponding option is selected; if a primary key existed, it would be defined here.

After that, NULL and UNIQUE constraints are defined, then foreign key constraints and check conditions, and any partitions, if present, are declared.

After pressing Finish, the table is created successfully. The same table can also be created with the following SQL statement:

CREATE TABLE OUTLN.PORT (
    PORTID      NUMBER(5)  NOT NULL,
    TERMINALID  NUMBER(5)  UNIQUE,
    SERVICETYPE NUMBER(3)  NOT NULL,
    ENCRYPTION  VARCHAR(1) NOT NULL,
    STATUS      NUMBER(3)  NOT NULL,
    PRIMARY KEY (PORTID),
    UNIQUE (PORTID) USING INDEX TABLESPACE USERS
);

This statement is executed directly in the SQL*Plus tool provided by Oracle.

Similarly, the other tables are created. Once the tables were created, dialogs had to be created for them in Visual C++. When Visual C++ opens, we select New from the File menu; the Projects tab of the New dialog box lists the new C++ wizards. To run the AppWizard that creates a project for a typical Windows application, select the icon labeled MFC AppWizard (exe), give the project a name, set its location and click OK. When OK is clicked, AppWizard presents a series of up to six steps in the form of dialog boxes. In each step, the left side of the dialog box displays a picture that gives a visual cue to the settings that the dialog is prompting for.

In step one

Specify the type of application by choosing either a single-document interface (SDI), a multiple-document interface (MDI), or a dialog-based interface. An SDI application handles only one document object at a time. An MDI application has the advantage of being able to handle any number of documents at once, displaying each document in a separate window.

NONE: Excludes the database support libraries from the project build. If there is no database, select None.

HEADER FILES ONLY: Includes database header files and libraries in the build, but AppWizard generates no source code for database classes.

DATABASE VIEW WITHOUT FILE SUPPORT: Includes database header files and libraries, and also creates a record view and a recordset.

DATABASE VIEW WITH FILE SUPPORT: Same as the above setting, except that the resulting application supports both the database document and serialization.

If a database view is included using either of the last two options, a source for the data has to be defined; after selecting the source, one of the three data source types, including the ODBC type we are using, has to be chosen.

RECORDSET TYPE: Specifies the type of recordset.

SNAPSHOT: A snapshot is static; it does not reflect changes to the original data.

DYNASET: The contents of a dynaset recordset are dynamic, meaning that the recordset is automatically updated to reflect the most recent changes to the underlying records.

TABLE: This is enabled only when DAO is selected for the data source type.

In step two, click the Data Source button to display the Database Options dialog box, which prompts for a data source that conforms to the standard of either Open Database Connectivity (ODBC), Microsoft Data Access Objects (DAO) or OLE DB. ODBC functions are implemented in drivers specific to a database management system such as Oracle or dBASE; Visual C++ provides a collection of ODBC drivers. After pressing Data Source, we choose among the three database types: ODBC, DAO and OLE DB. Selecting ODBC enables a drop-down list of all data sources registered with the ODBC Data Source Administrator. A data source includes both the data and the information required to access the data. To register or unregister a data source, run the Administrator by double-clicking the 32-bit ODBC icon in the Control Panel. A graphical user interface is thereby created using VC++; similarly, a GUI is created for the other tables.

WORKSPACE AND OUTPUT WINDOWS

Visual C++ displays information about a project in the dockable Workspace and Output windows, shown below. In addition to using the toolbar buttons, we can hide the workspace.

So the final graphical user interface comes out as shown below.
