
QUE:1. Write about: Linear Hashing (with respect to Hashing Techniques)

ANS:- Linear hashing is a dynamic hash table algorithm invented by Witold Litwin (1980) and later popularized by Paul Larson. Linear hashing allows for the expansion of the hash table one slot at a time. The frequent single-slot expansion can very effectively control the length of the collision chain. The cost of hash table expansion is spread out across the hash table insertion operations, as opposed to being incurred all at once. Therefore linear hashing is well suited for interactive applications.

Algorithm Details

A hash function controls the address calculation of linear hashing. In linear hashing, the address calculation is always bounded by a size that is a power of two times N, where N is the chosen original number of buckets. The number of buckets is given by N * 2^level, e.g. level 0: N buckets; level 1: 2N; level 2: 4N.

address(level, key) = hash(key) mod (N * 2^level)

The 'split' variable controls the read operation and the expansion operation. A read operation uses address(level, key) if address(level, key) is greater than or equal to the 'split' variable; otherwise, address(level+1, key) is used. This takes into account that buckets numbered less than 'split' have already been rehashed with address(level+1, key), their contents split between two new buckets (the first bucket writing over the contents of the old single bucket prior to the split).

A linear hashing table expansion operation consists of rehashing the entries at the one slot location indicated by the 'split' variable into either of the two target slot locations given by address(level+1, key). This is intuitively consistent with the assertion that if y = x mod M and y' = x mod (M * 2), then y' = y or y' = y + M. The 'split' variable is incremented by 1 at the end of the expansion operation.
If the 'split' variable reaches N * 2^level, then the 'level' variable is incremented by 1 and the 'split' variable is reset to 0.

Thus the hash buckets are expanded round robin, and the expansion may seem unrelated to where buckets overflow at the time. Overflow buckets are used at the sites of bucket overflow (the normal bucket has a pointer to the overflow bucket), but these are eventually reabsorbed when the round robin comes to the bucket with the overflow bucket, and the contents of that bucket and the overflow bucket are redistributed by the future hash function, hash(key) mod (N * 2^(level+1)).

The degenerate case, which is unlikely with a randomized hash function, is that enough entries are hashed to the same bucket that there are enough entries to overflow more than one overflow bucket (assuming overflow bucket size = normal bucket size) before being absorbed when that bucket's turn to split comes in the round robin.

The point of the algorithm is that overflow is preempted by gradually increasing the number of available buckets, and overflow buckets are eventually reabsorbed during a later split, which must eventually happen because splitting occurs round robin.

There is some flexibility in choosing how often the expansion operations are performed. One obvious choice is to perform an expansion operation each time no more slots are available at the target slot location. Another choice is to control the expansion with a programmer-defined load factor.

The hash table array for linear hashing is usually implemented with a dynamic array algorithm.
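The address calculation and split procedure described above can be sketched in Python. This is an illustrative toy, not a production structure: the class and method names are invented, buckets are plain lists with no fixed capacity, and a split is performed on every insertion purely to keep the sketch short (real implementations usually trigger splits from a load factor or from overflow).

```python
N = 4  # chosen original number of buckets

def address(level, key):
    # address(level, key) = hash(key) mod (N * 2^level)
    return hash(key) % (N * (2 ** level))

class LinearHashTable:
    def __init__(self):
        self.level = 0
        self.split = 0
        self.buckets = [[] for _ in range(N)]

    def _addr(self, key):
        a = address(self.level, key)
        # Buckets numbered below 'split' were already rehashed at level+1.
        if a < self.split:
            a = address(self.level + 1, key)
        return a

    def insert(self, key):
        self.buckets[self._addr(key)].append(key)
        self.expand()  # one split per insert: simplification for this sketch

    def expand(self):
        # Rehash the bucket pointed at by 'split' into the two target
        # locations of address(level+1, key): split, or split + N * 2^level.
        old = self.buckets[self.split]
        self.buckets[self.split] = []
        self.buckets.append([])  # new bucket at index split + N * 2^level
        for key in old:
            self.buckets[address(self.level + 1, key)].append(key)
        self.split += 1
        if self.split == N * (2 ** self.level):
            self.level += 1
            self.split = 0

    def lookup(self, key):
        return key in self.buckets[self._addr(key)]
```

Note how expand() relies on the assertion from the text: every key in the old bucket lands either back at index split or at index split + N * 2^level, so appending exactly one bucket per split is sufficient.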

QUE:-2. Write about: Integrity Rules

ANS:- Integrity rule 1: Entity integrity


It says that no component of a primary key may be null. All entities must be distinguishable; that is, they must have a unique identification of some kind. Primary keys perform this unique identification function in a relational database. An identifier that was wholly null would be a contradiction in terms: it would mean there was some entity that did not have any unique identification, i.e. that was not distinguishable from other entities. If two entities are not distinguishable from each other, then by definition there are not two entities but only one.
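Entity integrity can be demonstrated with Python's built-in sqlite3 module (the table and data here are invented for illustration). The DBMS rejects both a NULL primary key and a duplicate one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NOT NULL is stated explicitly: for historical reasons SQLite would
# otherwise tolerate NULLs in some non-INTEGER primary keys.
con.execute("CREATE TABLE student (student_id TEXT NOT NULL PRIMARY KEY, name TEXT)")

con.execute("INSERT INTO student VALUES ('S1', 'Mia')")  # unique, non-null key: OK

try:
    # Violates entity integrity: a component of the primary key is null.
    con.execute("INSERT INTO student VALUES (NULL, 'Leo')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)

try:
    # Two rows with key 'S1' would not be distinguishable.
    con.execute("INSERT INTO student VALUES ('S1', 'Zoe')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```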

Integrity rule 2: Referential integrity


The referential integrity constraint is specified between two relations and is used to maintain the consistency among tuples of the two relations. Suppose we wish to ensure that a value that appears in one relation for a given set of attributes also appears for a certain set of attributes in another relation. This is referential integrity. The referential integrity constraint states that a tuple in one relation that refers to another relation must refer to an existing tuple in that relation. This means that referential integrity is a constraint specified on more than one relation. It ensures that consistency is maintained across the relations.

Table A

DeptID   DeptName    DeptManager
F-1001   Financial   Nathan
S-2012   Software    Martin
H-0001   HR          Jason

Table B

EmpNo   DeptID   EmpName
1001    F-1001   Tommy
1002    S-2012   Will
1003    H-0001   Jonathan
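The department/employee example can be made concrete with Python's sqlite3 module, which enforces referential integrity through a FOREIGN KEY clause (column names follow the tables above; the rejected DeptID is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite checks FKs only when this is on
con.execute("""CREATE TABLE dept (
    DeptID      TEXT PRIMARY KEY,
    DeptName    TEXT,
    DeptManager TEXT)""")
con.execute("""CREATE TABLE emp (
    EmpNo   INTEGER PRIMARY KEY,
    DeptID  TEXT REFERENCES dept (DeptID),  -- must refer to an existing dept
    EmpName TEXT)""")

con.execute("INSERT INTO dept VALUES ('F-1001', 'Financial', 'Nathan')")
con.execute("INSERT INTO emp VALUES (1001, 'F-1001', 'Tommy')")  # OK: F-1001 exists

try:
    # Violates referential integrity: there is no department 'X-9999'.
    con.execute("INSERT INTO emp VALUES (1002, 'X-9999', 'Will')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```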

QUE:3. Write about: Three Level Architecture of a Database

ANS:- Three Level Architecture of a Database

Figure 1: Three level architecture


1. the external level: concerned with the way individual users see the data
2. the conceptual level: can be regarded as a community user view, a formal description of the data of interest to the organisation, independent of any storage considerations
3. the internal level: concerned with the way in which the data is actually stored

Figure: How the three level architecture works

External View


A user is anyone who needs to access some portion of the data. Users may range from application programmers to casual users with ad hoc queries. Each user has a language at his/her disposal. The application programmer may use a high-level language (e.g. COBOL) while the casual user will probably use a query language. Regardless of the language used, it will include a data sublanguage (DSL), the subset of the language concerned with storage and retrieval of information in the database, which may or may not be apparent to the user. A DSL is a combination of two languages:

a data definition language (DDL), which provides for the definition or description of database objects
a data manipulation language (DML), which supports the manipulation or processing of database objects

Each user sees the data in terms of an external view, defined by an external schema. The external schema consists basically of descriptions of each of the various types of external record in that external view, together with a definition of the mapping between the external schema and the underlying conceptual schema.

Conceptual View

An abstract representation of the entire information content of the database. It is in general a view of the data as it actually is; that is, it is a 'model' of the 'real world'. It consists of multiple occurrences of multiple types of conceptual record, defined in the conceptual schema. To achieve data independence, the definitions of conceptual records must involve information content only:

storage structure is ignored
access strategy is ignored

In addition to definitions, the conceptual schema contains authorisation and validation procedures.

Internal View
The internal view is a low-level representation of the entire database, consisting of multiple occurrences of multiple types of internal (stored) records. It is however at one remove from the physical level, since it does not deal in terms of physical records or blocks, nor with any device-specific constraints such as cylinder or track sizes. Details of the mapping to physical storage are highly implementation specific and are not expressed in the three-level architecture. The internal view is described by the internal schema, which:

defines the various types of stored record
defines what indices exist
defines how stored fields are represented
defines what physical sequence the stored records are in

In effect the internal schema is the storage structure definition.

QUE:-4. Explain the SQL syntax for Constraints and Functions, with appropriate examples.

ANS:- (A) Constraints are used to limit the type of data that can go into a table.
Constraints can be specified when a table is created (with the CREATE TABLE statement) or after the table is created (with the ALTER TABLE statement). We will focus on the following constraints:

NOT NULL
UNIQUE
PRIMARY KEY
FOREIGN KEY
CHECK
DEFAULT

CREATE TABLE [ IF NOT EXISTS ] table_name (
    column_declare1, column_declare2, constraint_declare1, ...
)

column_declare ::= column_name type
    [ DEFAULT expression ]
    [ NULL | NOT NULL ]
    [ INDEX_BLIST | INDEX_NONE ]

type ::= BIT | REAL | CHAR | TEXT | DATE | TIME | FLOAT | BIGINT | DOUBLE
    | STRING | BINARY | NUMERIC | DECIMAL | BOOLEAN | TINYINT | INTEGER
    | VARCHAR | SMALLINT | VARBINARY | TIMESTAMP | LONGVARCHAR
    | JAVA_OBJECT | LONGVARBINARY

constraint_declare ::= [ CONSTRAINT constraint_name ]
      PRIMARY KEY ( col1, col2, ... )
    | FOREIGN KEY ( col1, col2, ... ) REFERENCES f_table [ ( col1, col2, ... ) ]
          [ ON UPDATE triggered_action ] [ ON DELETE triggered_action ]
    | UNIQUE ( col1, col2, ... )
    | CHECK ( expression )
    [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
    [ NOT DEFERRABLE | DEFERRABLE ]

triggered_action ::= NO ACTION | SET NULL | SET DEFAULT | CASCADE
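A concrete table exercising several of the constraint forms above can be run through Python's sqlite3 module, which accepts this subset of the grammar (the table and column names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE IF NOT EXISTS employee (
    emp_no  INTEGER,
    name    TEXT NOT NULL,
    email   TEXT UNIQUE,
    salary  NUMERIC DEFAULT 0 CHECK (salary >= 0),
    CONSTRAINT pk_employee PRIMARY KEY (emp_no)
)""")

# salary is omitted, so the DEFAULT expression supplies 0.
con.execute("INSERT INTO employee (emp_no, name, email) VALUES (1, 'Ann', 'a@x')")
row = con.execute("SELECT salary FROM employee WHERE emp_no = 1").fetchone()
print(row[0])  # 0

try:
    # NOT NULL violated: name may not be null.
    con.execute("INSERT INTO employee (emp_no, name) VALUES (2, NULL)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)

try:
    # CHECK violated: salary must be >= 0.
    con.execute("INSERT INTO employee VALUES (3, 'Bob', 'b@x', -5)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```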

(B)
There are several basic types and categories of functions in SQL99 and vendor implementations of SQL. The basic types of functions are:

Aggregate functions: operate against a collection of values, but return a single, summarizing value.
Scalar functions: operate against a single value, and return a single value based on the input value. Some scalar functions, CURRENT_TIME for example, do not require any arguments.

Aggregate Functions

Aggregate functions return a single value based upon a set of other values. If used among many other expressions in the item list of a SELECT statement, the SELECT must have a GROUP BY clause. No GROUP BY clause is required if the aggregate function is the only value retrieved by the SELECT statement. The supported aggregate functions and their syntax are listed in Table 4-1.

Table 4-1: SQL99 Aggregate Functions

Function            Usage
AVG(expression)     Computes the average value of a column by the expression
COUNT(*)            Counts all rows in the specified table or view
COUNT(expression)   Counts the rows defined by the expression
MIN(expression)     Finds the minimum value in a column by the expression
MAX(expression)     Finds the maximum value in a column by the expression
SUM(expression)     Computes the sum of column values by the expression

AVG and SUM

The AVG function computes the average of values in a column or an expression. SUM computes the sum. Both functions work with numeric values and ignore NULL values. They can also be used to compute the average or sum of all distinct values of a column or expression. AVG and SUM are supported by Microsoft SQL Server, MySQL, Oracle, and PostgreSQL.

Example

The following query computes average year-to-date sales for each type of book:

SELECT type, AVG( ytd_sales ) AS "average_ytd_sales"
FROM titles
GROUP BY type;

This query returns the sum of year-to-date sales for each type of book:

SELECT type, SUM( ytd_sales )
FROM titles
GROUP BY type;

COUNT

The COUNT function has three variations. COUNT(*) counts all the rows in the target table whether they include nulls or not. COUNT(expression) computes the number of rows with non-NULL values in a specific column or expression. COUNT(DISTINCT expression) computes the number of distinct non-NULL values in a column or expression.

Examples

This query counts all rows in a table:

SELECT COUNT(*) FROM publishers;

The following query finds the number of different countries where publishers are located:

SELECT COUNT(DISTINCT country) "Count of Countries"
FROM publishers;

This query uses HAVING to restrict the result to categories whose average price exceeds 15:

SELECT type 'Category', AVG( price ) 'Average Price'
FROM titles
GROUP BY type
HAVING AVG(price) > 15;
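The aggregate queries above can be run end to end with Python's sqlite3 module against a tiny stand-in for the titles table (all data values here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titles (type TEXT, price REAL, ytd_sales INTEGER)")
con.executemany("INSERT INTO titles VALUES (?, ?, ?)", [
    ("business",   20.00, 4095),
    ("business",   10.00, 3876),
    ("psychology", 21.59,  375),
])

# AVG with GROUP BY: one average per book type.
for type_, avg in con.execute(
        "SELECT type, AVG(ytd_sales) FROM titles GROUP BY type ORDER BY type"):
    print(type_, avg)

# COUNT(*) counts every row; COUNT(DISTINCT ...) counts distinct values.
print(con.execute("SELECT COUNT(*) FROM titles").fetchone()[0])              # 3
print(con.execute("SELECT COUNT(DISTINCT type) FROM titles").fetchone()[0])  # 2

# HAVING filters groups after aggregation: business averages exactly 15.0
# and is excluded, so only psychology (21.59) survives.
rows = con.execute(
    "SELECT type, AVG(price) FROM titles GROUP BY type HAVING AVG(price) > 15"
).fetchall()
print(rows)
```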

QUE:5 Compare and Contrast the Centralized and Client / Server Architecture for DBMS.
ANS:- Client-Server

Objective: to overcome the disadvantages of the earlier approaches. Client-server is an architecture in which a client process, which requires some resource, communicates with a server process, which provides the resource. There is no requirement for the server and the client to be on the same network: the server and clients may be on different LANs at different sites.

The client manages the user interface and application logic (the front end). The client takes the user's request, checks the syntax and generates database requests in SQL (for example). It then transmits the message to the server, waits for a response, and formats the response for the end user. The server accepts and processes the database requests, and transmits the results back to the client.

There are many advantages to this type of architecture:

(1) It allows wider access to existing databases.
(2) Increased performance: different CPUs process applications in parallel.
(3) The server machine concentrates on performing database processing.
(4) Communication costs are reduced: less data is sent across the network.
(5) Increased consistency: the server handles the integrity checks.

Figure: Alternative client-server topologies (clients on a LAN send requests for data to the server with the DBMS, which returns the selected data)

QUE: 6. Taking an example Enterprise System, List out the Entity types, Entity Sets, Attributes and Keys
ANS: ENTERPRISE SYSTEM:

Computing environment for the enterprise.

The set of computer technologies (i.e. hardware, software, and practices) used in integrated large scale systems, which are made up of a group of computational entities, including mainframes, servers, and peripheral devices, interconnected by a network forming a virtual centralized computing facility.

An entity is a person, place, thing, event, or concept of interest to the business or organization about which data is likely to be kept. For example, in a school environment possible entities might be Student, Instructor, and Class. Entity-type refers to a generic class of things such as Company. Entity is the short form of entity-type. Entity-occurrence refers to specific instances or examples of a type. For example, one occurrence of the entity Car is Chevrolet Cavalier. An entity usually has attributes (i.e., data elements) that further describe it. Each attribute is a characteristic of the entity. An entity must possess a set of one or more attributes that uniquely identify it (called a primary key). The entities on an Entity-Relationship Diagram are represented by boxes (i.e., rectangles). The name of the entity is placed inside the box.

Types of Entities
Different types of entities are required to provide a complete and accurate representation of an organization's data and to enable the analyst to use the Entity-Relationship Diagram as a starting point for physical database design. Types of entities include:

Fundamental: a base entity that depends on no other for its existence. A fundamental entity has a primary key that is independent of any other entity and is typically composed of a single attribute. Fundamental entities are real-world, tangible objects, such as Employee, Customer, or Product.

Attributive: an entity that depends on another for its existence; for example, Employee Hobby depends on Employee. An attributive entity depends on another entity for part of its primary key. It can result from breaking out a repeating group (the first rule of normalization) or from an optional attribute.

Associative: an entity that describes a connection between two entities with an otherwise many-to-many relationship; for example, the assignment of Employee to Project (an Employee can be assigned to more than one Project, and a Project can be assigned to more than one Employee). If information exists about the relationship, this information is kept in an associative entity. For example, the number of hours an Employee worked on a particular Project is an attribute of the relationship between Employee and Project, not of either Employee or Project alone. An associative entity is uniquely identified by concatenating the primary keys of the two entities it connects.

Subtype/Supertype: one entity (the subtype) inherits the attributes of another entity (the supertype).
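The associative entity for the Employee/Project assignment can be sketched with Python's sqlite3 module. All names and data here are invented for illustration; note how the junction table's primary key concatenates the two connected keys and carries 'hours' as an attribute of the relationship itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE project  (proj_id INTEGER PRIMARY KEY, title TEXT);
-- Associative entity: PK concatenates the primary keys of both entities.
CREATE TABLE assignment (
    emp_id  INTEGER REFERENCES employee (emp_id),
    proj_id INTEGER REFERENCES project (proj_id),
    hours   REAL,  -- attribute of the relationship, not of either entity
    PRIMARY KEY (emp_id, proj_id)
);
INSERT INTO employee VALUES (1, 'Ava'), (2, 'Ben');
INSERT INTO project  VALUES (10, 'Payroll'), (11, 'Billing');
INSERT INTO assignment VALUES (1, 10, 12.5), (1, 11, 3.0), (2, 10, 8.0);
""")

# Many-to-many: Ava works on two projects; Payroll has two employees.
for name, title, hours in con.execute("""
    SELECT e.name, p.title, a.hours
    FROM assignment a
    JOIN employee e ON e.emp_id = a.emp_id
    JOIN project  p ON p.proj_id = a.proj_id
    ORDER BY e.name, p.title"""):
    print(name, title, hours)
```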

QUE: 8. Illustrate with an example of your own the Relational Model Notations
ANS:- The Entity-Relationship Model

The Entity-Relationship (ER) model was originally proposed by Peter Chen in 1976 [Chen76] as a way to unify the network and relational database views. Simply stated, the ER model is a conceptual data model that views the real world as entities and relationships. A basic component of the model is the Entity-Relationship diagram, which is used to visually represent data objects. Since Chen wrote his paper the model has been extended, and today it is commonly used for database design. For the database designer, the utility of the ER model is:

it maps well to the relational model: the constructs used in the ER model can easily be transformed into relational tables.
it is simple and easy to understand with a minimum of training: the model can therefore be used by the database designer to communicate the design to the end user. In addition, the model can be used as a design plan by the database developer to implement a data model in specific database management software.

Symbols for Database Model Diagrams

Thanks to an extensive set of library objects such as entities, links, items, attributes, users, types, captions, inheritance, references, boundaries, events, clouds, etc., Edraw is a useful tool for database model design and ER diagramming. The Database Model Diagram template helps you design and implement database structures. You can use both Entity-Relationship (ER) and Integrated Definition for Data Modeling (IDEF1X) notation when creating the diagram.

Entity Relationship Symbols

Object Relationship Symbols