This is a keyword used to query data from two or more tables based on the
relationships between the fields of the tables. Keys play a major role when
JOINs are used.
Inner join
This will only return rows when there is at least one row in both tables
that matches the join condition.
Left outer join (or left join)
This will return all rows from the left table (left of the JOIN
keyword), even if there are no matching rows in the right table.
Right outer join (or right join)
This will return all rows from the right table (right of
the JOIN keyword), even if there are no matching rows in the left
table.
Full outer join (or full join)
This will return all rows from both tables, whether or not there is
matching data in the other table.
Select all records from Table A and Table B, where the join condition is met.
Select all records from Table A, along with records from Table B for which the join condition is met (if at all).
Select all records from Table B, along with records from Table A for which the join condition is met (if at all).
Select all records from Table A and Table B, regardless of whether the join condition is met or not.
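The four behaviours above can be sketched with a tiny, made-up pair of tables. This sketch uses Python's built-in sqlite3 module rather than SQL Server, and since SQLite only added RIGHT and FULL joins in version 3.39, it sticks to INNER and LEFT:

```python
import sqlite3

# Made-up students/orders tables to illustrate join behaviour (sqlite3, not SQL Server).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (S_Id INTEGER PRIMARY KEY, LastName TEXT);
CREATE TABLE orders   (O_Id INTEGER PRIMARY KEY, S_Id INTEGER);
INSERT INTO students VALUES (1, 'Ankit'), (2, 'Ria'), (3, 'Sam');
-- order 12 points at a student id (9) that does not exist
INSERT INTO orders VALUES (10, 1), (11, 1), (12, 9);
""")

# INNER JOIN: only rows where the join condition matches on both sides
inner_rows = conn.execute(
    "SELECT s.LastName, o.O_Id FROM students s "
    "INNER JOIN orders o ON s.S_Id = o.S_Id").fetchall()

# LEFT JOIN: every student, with NULL (None) for students that have no order
left_rows = conn.execute(
    "SELECT s.LastName, o.O_Id FROM students s "
    "LEFT JOIN orders o ON s.S_Id = o.S_Id").fetchall()
```

The inner join drops both the unmatched students and the orphan order; the left join keeps every student and pads the missing side with NULL.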
Join Syntax
Inner Join:
SELECT column_name(s)
FROM table_name1
INNER JOIN table_name2
ON table_name1.column_name = table_name2.column_name;
Left Join:
SELECT column_name(s)
FROM table_name1
LEFT JOIN table_name2
ON table_name1.column_name = table_name2.column_name;
Right Join:
SELECT column_name(s)
FROM table_name1
RIGHT JOIN table_name2
ON table_name1.column_name = table_name2.column_name;
Full Join:
SELECT column_name(s)
FROM table_name1
FULL OUTER JOIN table_name2
ON table_name1.column_name = table_name2.column_name;
If you want to create a primary key, you should define a PRIMARY KEY constraint when
you create or modify a table.
When multiple columns are used as a primary key, it is known as a composite primary
key.
When designing a composite primary key, you should use as few columns as possible. This
is good for both storage and performance: the more columns the primary key uses, the
more storage space it requires.
In terms of performance, less data means the database can process queries faster.
The following SQL command creates a PRIMARY KEY on the "S_Id" column when the
"students" table is created.
CREATE TABLE students
(
S_Id int NOT NULL PRIMARY KEY,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Address varchar(255),
City varchar(255)
)
Note: a PRIMARY KEY constraint can also be defined as a named constraint on multiple
columns, for example CONSTRAINT pk_StudentID PRIMARY KEY (S_Id, LastName). In that
case there is still only one PRIMARY KEY (pk_StudentID); however, it is made up of two
columns (S_Id and LastName).
When you use the ALTER TABLE statement to add a primary key, the primary key columns must
already have been declared to not contain NULL values (when the table was first created).
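As a quick sanity check of the rule above, here is a minimal sketch using Python's sqlite3 module (not SQL Server); the table mirrors the students example and the data is made up:

```python
import sqlite3

# A PRIMARY KEY rejects duplicate key values (sqlite3 sketch; data made up).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE students (S_Id INTEGER NOT NULL PRIMARY KEY, "
    "LastName TEXT NOT NULL)")
conn.execute("INSERT INTO students VALUES (1, 'Sharma')")

try:
    # Second row re-uses S_Id = 1, violating the PRIMARY KEY constraint
    conn.execute("INSERT INTO students VALUES (1, 'Verma')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```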
How to DROP a PRIMARY KEY constraint?
If you want to DROP (remove) a primary key constraint, you should use the following syntax:
ALTER TABLE table_name
DROP CONSTRAINT constraint_name;   (SQL Server / Oracle / MS Access)
ALTER TABLE table_name
DROP PRIMARY KEY;   (MySQL)
In simple words, a foreign key in one table is used to point to the primary key in
another table.
Here are two tables: the first is the students table and the second is the orders table.
First table: students
Second table: orders
O_Id  OrderNo    S_Id
1     99586465   2
2     78466588   2
3     22354846   3
4     57698656   1
o The "S_Id" column in the "Students" table is the PRIMARY KEY in the "Students"
table.
o The "S_Id" column in the "Orders" table is a FOREIGN KEY in the "Orders" table.
The foreign key constraint generally prevents actions that would destroy links between tables.
To create a foreign key on the "S_Id" column when the "Orders" table is created:
A primary key cannot be NULL; on the other hand, a foreign key can be NULL.
A primary key uniquely identifies a record in a table, while a foreign key is a field in a table
that is the primary key of another table.
There is only one primary key in a table; on the other hand, we can have more than one
foreign key in a table.
By default, a primary key adds a clustered index; a foreign key, on the other hand, does not
automatically create an index, clustered or non-clustered. You must manually create an
index for a foreign key.
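The points above can be demonstrated with a small sketch in Python's sqlite3 module (not SQL Server). Note that SQLite enforces foreign keys only after an explicit PRAGMA, and all names and data here are made up:

```python
import sqlite3

# Foreign key behaviour: orphan rows are blocked, NULL is still accepted.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only with this pragma
conn.executescript("""
CREATE TABLE Students (S_Id INTEGER PRIMARY KEY, LastName TEXT);
CREATE TABLE Orders (O_Id INTEGER PRIMARY KEY, OrderNo INTEGER,
                     S_Id INTEGER REFERENCES Students(S_Id));
INSERT INTO Students VALUES (1, 'Sharma'), (2, 'Verma');
""")
conn.execute("INSERT INTO Orders VALUES (1, 99586465, 2)")     # valid link
conn.execute("INSERT INTO Orders VALUES (2, 78466588, NULL)")  # FK may be NULL

try:
    conn.execute("INSERT INTO Orders VALUES (3, 22354846, 7)")  # no student 7
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True
```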
You can say that it is a little like a primary key, but it can accept only one NULL value and it
cannot have duplicate values.
The unique key and primary key both provide a guarantee for uniqueness for a column or a
set of columns.
There is an automatically defined unique key constraint within a primary key constraint.
There may be many unique key constraints for one table, but only one PRIMARY KEY
constraint for one table.
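A minimal sketch of a UNIQUE constraint alongside a primary key, using Python's sqlite3 module; the Email column is a made-up example. (One difference to keep in mind: SQL Server allows a single NULL in a unique column, while SQLite allows several.)

```python
import sqlite3

# UNIQUE rejects duplicate values but accepts NULL (sqlite3 sketch).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE students (S_Id INTEGER PRIMARY KEY, Email TEXT UNIQUE)")
conn.execute("INSERT INTO students VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO students VALUES (2, NULL)")  # NULL is allowed

try:
    conn.execute("INSERT INTO students VALUES (3, 'a@example.com')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```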
If you want to create a UNIQUE constraint on the "S_Id" column when the "students" table
is created, use the following SQL syntax:
If you want to create a unique constraint on the "S_Id" column when the table is already
created, you should use the following SQL syntax:
If you want to drop a UNIQUE constraint, use the following SQL syntax:
Note: A table has a specified number of columns, but can have any number of rows.
Employee
In the above table, "Employee" is the table name, "EMP_NAME", "ADDRESS" and "SALARY"
are the column names. The combination of data of multiple columns forms a row e.g.
"Ankit", "Lucknow" and 15000 are the data of one row.
It is a variable where we temporarily store records and results. It is similar to a temp table,
but a temp table must be explicitly dropped, whereas a table variable is cleaned up automatically.
Table variables are used to store a set of records, so the declaration syntax generally looks like
CREATE TABLE syntax.
When a transaction is rolled back, the data associated with a table variable is not rolled back.
It is very important to know that once a table is deleted, all the information available in the
table is lost forever, so we have to be very careful when using this command.
Let's see the syntax to drop the table from the database.
But if you do not specify the WHERE condition it will remove all the rows from the table.
DELETE FROM table_name;
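A short sketch of this behaviour with Python's sqlite3 module and made-up data: the DELETE without a WHERE clause empties the table, but the table structure survives:

```python
import sqlite3

# DELETE with no WHERE removes every row; the table itself remains usable.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (ID INTEGER PRIMARY KEY, NAME TEXT, CITY TEXT);
INSERT INTO employee VALUES (1,'Aryan','Allahabad'), (2,'Shurabhi','Varanasi'),
                            (3,'Pappu','Delhi');
""")
conn.execute("DELETE FROM employee")  # no WHERE condition
remaining = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]

# the structure is intact, so inserting again just works
conn.execute("INSERT INTO employee VALUES (4, 'Riya', 'Lucknow')")
```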
The TRUNCATE statement is used to delete all the rows from the table and free the
containing space.
ID  NAME      CITY       AGE
1   Aryan     Allahabad  22
2   Shurabhi  Varanasi   13
3   Pappu     Delhi      24
On the other hand when we TRUNCATE a table, the table structure remains the same, so
you will not face any of the above problems.
SQL RENAME TABLE
SQL RENAME TABLE syntax is used to change the name of a table. Sometimes we choose a
non-meaningful name for a table, so it needs to be changed.
Let us take an example of a table named "STUDENTS", now due to some reason we want to
change it into table name "ARTISTS".
Table1: students
You can use either of the following syntaxes to RENAME the table:
sp_rename 'students', 'artists';
Or:
ALTER TABLE students RENAME TO artists;
TRUNCATE TABLE is faster and uses fewer resources than the DELETE command.
The DROP TABLE command can also be used to delete a complete table, but it deletes the table
structure too. TRUNCATE TABLE doesn't delete the structure of the table.
Let's see the syntax to truncate the table from the database.
For example, you can write following command to truncate the data of employee table
Note: The rollback process is not possible after truncate table statement. Once you truncate
a table you cannot use a flashback table statement to retrieve the content of the table.
For example, you can write following command to copy the records of hr_employee table
into employee table.
Temporary tables can be created at run-time and can do all kinds of operations that a
normal table can do. These temporary tables are created inside tempdb database.
There are two types of temp tables based on the behavior and scope.
You can also use SQL ALTER TABLE command to add and drop various constraints on an
existing table.
If you want to modify multiple columns in a table, the SQL statement will be:
ALTER TABLE table_name
MODIFY column_name1 datatype,
MODIFY column_name2 datatype;
Temporary Tables
Temporary tables are created in tempdb.
They act like regular tables in that you can query their data via SELECT queries and modify
their data via UPDATE, INSERT, and DELETE statements.
If created inside a stored procedure they are destroyed upon completion of the stored
procedure.
Furthermore, the scope of any particular temporary table is the session in which it is
created; meaning it is only visible to the current user. Multiple users could create a temp
table named #TableX and any queries run simultaneously would not affect one another
- they would remain autonomous transactions and the tables would remain autonomous
objects. You may notice that my sample temporary table name started with a "#" sign.
This is the identifier for SQL Server that it is dealing with a temporary table.
--Temp Table:
CREATE TABLE dbo.#Cars
(
Car_id int NOT NULL,
ColorCode varchar(10),
ModelName varchar(20),
Code int,
DateEntered datetime
)
--Table Variable:
DECLARE @Cars TABLE
(
Car_id int NOT NULL,
ColorCode varchar(10),
ModelName varchar(20),
Code int,
DateEntered datetime
)
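In SQLite the same idea is expressed with the TEMP keyword rather than the # prefix; a minimal sketch using Python's sqlite3 module (table and data made up):

```python
import sqlite3

# A temporary table lives in the temp schema, not in the main schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TEMP TABLE Cars (Car_id INTEGER NOT NULL, "
    "ColorCode TEXT, ModelName TEXT)")
conn.execute("INSERT INTO Cars VALUES (1, 'RED', 'Alto')")

# the table is registered in the temporary schema only
in_temp = conn.execute(
    "SELECT COUNT(*) FROM sqlite_temp_master WHERE name = 'Cars'").fetchone()[0]
in_main = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name = 'Cars'").fetchone()[0]
```

As in SQL Server, the temp table is visible only to the connection (session) that created it.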
The results differ, depending upon how you run the code. If you run the entire block of
code the following results are returned:
If you are not familiar with the Try...Catch paradigm, it is basically two blocks of code
within your stored procedures that let you execute some code in the Try section; if
there are errors, they are handled in the Catch section.
Let's take a look at an example of how this can be done. As you can see we are using
a basic SELECT statement that is contained within the TRY section, but for some
reason if this fails it will run the code in the CATCH section and return the error
information.
Common Table Expressions are defined within the statement using the WITH
operator. You can define one or more common table expressions in this fashion.
Here is a really simple example of one CTE:
WITH Employee_CTE (EmployeeNumber, Title)
AS
(SELECT NationalIDNumber,
JobTitle
FROM HumanResources.Employee)
SELECT EmployeeNumber,
Title
FROM Employee_CTE
The WITH portion is the CTE. Notice it contains a query that can be run on its own in SQL. This
is called the CTE query definition:
SELECT NationalIDNumber,
JobTitle
FROM HumanResources.Employee
Non-Recursive CTEs
Non-Recursive CTEs are simple, in that the CTE doesn't use any recursion or repeated processing
of a sub-routine. We will create a simple non-recursive CTE to display the row numbers from 1 to 10.
As per the CTE syntax, each CTE query starts with "With", followed by the CTE expression name
and column list.
Here we have used only one column, ROWNO. Next is the query part, where we write the
select query to be executed for our CTE. After creating the CTE query, we run it using a select
statement with the CTE expression name.
;with ROWCTE(ROWNO) as
(
SELECT
ROW_NUMBER() OVER(ORDER BY name ASC) AS ROWNO
FROM sys.databases
WHERE database_id <= 10
)
SELECT * FROM ROWCTE
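A sketch of the same non-recursive CTE in Python's sqlite3 module (window functions require SQLite 3.25 or later); since sqlite has no sys.databases, a made-up VALUES list stands in as the row source:

```python
import sqlite3

# Non-recursive CTE producing ROWNO 1..10 via ROW_NUMBER() (sqlite3 sketch).
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH ROWCTE(ROWNO) AS (
        SELECT ROW_NUMBER() OVER (ORDER BY column1)
        FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10))
    )
    SELECT ROWNO FROM ROWCTE
""").fetchall()
rownos = [r[0] for r in rows]
```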
Recursive CTE
Recursive CTEs use repeated procedural loops, a.k.a. recursion. The recursive query calls itself
until it satisfies the termination condition. In a recursive CTE we should provide a WHERE
condition to terminate the recursion:
We will see how to create a simple Recursive query to display the Row Number from 1 to 10 using a
CTE.
First, we declare an integer variable "RowNo" and set its default value to 1, and we create
our first CTE query with the expression name "ROWCTE". In the CTE we first display the default row
number, and then we use a UNION ALL to increment and display the row number one by one until it
reaches 10. To view the result, we use a select query to display our CTE result.
DECLARE @RowNo int = 1;
;with ROWCTE as
(
SELECT @RowNo as ROWNO
UNION ALL
SELECT ROWNO+1
FROM ROWCTE
WHERE ROWNO < 10
)
SELECT * FROM ROWCTE
Output: When we run the query, we can see the below output.
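The same 1-to-10 counter can be written as a recursive CTE in Python's sqlite3 module; unlike T-SQL, SQLite requires the RECURSIVE keyword after WITH:

```python
import sqlite3

# Recursive CTE counting from 1 to 10 (sqlite3 sketch).
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE ROWCTE(ROWNO) AS (
        SELECT 1              -- anchor member: the starting row
        UNION ALL
        SELECT ROWNO + 1      -- recursive member
        FROM ROWCTE
        WHERE ROWNO < 10      -- termination condition
    )
    SELECT ROWNO FROM ROWCTE
""").fetchall()
numbers = [r[0] for r in rows]
```

Without the WHERE condition the recursive member would keep producing rows until the engine's recursion limit is hit, which is exactly why the text above insists on a termination condition.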
CTE Example:
Now we will create a simple temporary result set using a CTE query. Here we give the CTE
the expression name "itemCTE" and add the list of columns which we use in the CTE
query. In the CTE query we display all item details along with the year.
;with itemCTE (Item_ID, Item_Name, Item_Price, IDate)
AS
(SELECT Item_ID, Item_Name, Item_Price, year(IDate)
FROM ItemDetails)
SELECT * FROM itemCTE
Output: When we run the query, we can see the below output.
CTE using Union ALL
For this we use the above CTE query. In this query, we add a UNION ALL, and in the UNION ALL
query we add 10% to each item price and show it in the next row, with the year incremented by one.
;with itemCTE (Item_ID, Item_Name, Item_Price, IDate)
AS
(SELECT Item_ID, Item_Name, Item_Price, year(IDate)
FROM ItemDetails
UNION ALL
SELECT Item_ID, Item_Name, (Item_Price + (Item_Price * 10) / 100), year(IDate) + 1
FROM ItemDetails
)
SELECT * FROM itemCTE
Output: When we run the query, we can see the below output.
Here we use the same CTE query above and insert the result into the Item History table. With this
query we insert both the present-year item prices and the next-year item prices, increased
by 10%.
;with itemCTE (Item_ID, Item_Name, Item_Price, IDate)
AS
(SELECT Item_ID, Item_Name, Item_Price, year(IDate)
FROM ItemDetails
UNION ALL
SELECT Item_ID as Item_ID, Item_Name, (Item_Price + (Item_Price *10)/100) as Item_Price, year(IDate) + 1
FROM ItemDetails
)
INSERT INTO ItemHistory SELECT * FROM itemCTE
Output: When we run the query, we can see the below output, as 30 records have been inserted into
our Item History table.
Select Query:
To view the item history result we select and display all the details.
Output: When we run the query, we can see the below output from item history table.
Create View with CTE Example:
Now we will see how the above CTE query can be used in a view. Here we create a view and
add the CTE inside the view definition. When we select from the view, we can see the CTE
output displayed.
Example Query:
CREATE VIEW ItemView
AS
WITH itemCTE1 (Item_ID, Item_Name, Item_Price, MarketRate, IDate)
AS
(SELECT Item_ID, Item_Name, Item_Price, MarketRate, IDate
FROM ItemDetails
UNION ALL
SELECT Item_ID, Item_Name, (Item_Price + (Item_Price * 10) / 100), MarketRate, dateadd(year, 1, IDate)
FROM ItemDetails)
SELECT Item_ID, Item_Name, Item_Price, MarketRate, year(IDate) as IDate from itemCTE1
GO
SELECT * FROM ItemView
GO
Output: When we run the query, we can see the below output as result from the View.
Multiple CTE
In some scenarios, we need to create more than one CTE query and join them to display our result. In
this case, we can use multiple CTEs. We can create multiple CTE queries and combine them into
one single query; multiple CTEs need to be separated by a "," comma followed by the
CTE name.
We will use the same date-range example above with more than one CTE query; here we have
created two CTE queries, CTE1 and CTE2, to display the date-range result for both CTE1 and
CTE2.
Example:
WITH CTE1
AS (
SELECT ... convert(char(2), @startDate, 106) + ')' as 'WeekNumber'
UNION ALL
SELECT ...
FROM CTE1
),
CTE2
AS (
SELECT ... convert(char(2), @startDate1, 106) + ')' as 'WeekNumber'
UNION ALL
SELECT ...
FROM CTE2
)
SELECT * FROM CTE1
UNION ALL
SELECT * FROM CTE2
Output: When we run the query, we can see the below output.
Here are some basic guidelines that need to be followed to write a good CTE Query.
1. A CTE must be followed by a single SELECT, INSERT, UPDATE, or DELETE statement that
references some or all the CTE columns.
2. Multiple CTE query definitions can be defined in a non-recursive CTE.
3. A CTE can reference itself and previously defined CTEs in the same WITH clause.
4. We can use only one WITH clause in a CTE.
5. ORDER BY, INTO, COMPUTE or COMPUTE BY, OPTION, FOR XML cannot be used in non-
recursive CTE query definition
6. SELECT DISTINCT, GROUP BY, HAVING, scalar aggregation, TOP, LEFT or RIGHT OUTER JOIN
(INNER JOIN is allowed), and subqueries cannot be used in a recursive CTE query definition.
Readability – CTEs promote readability. Rather than lumping all your query logic into one large
query, create several CTEs, which are then combined later in the statement. This lets you get
the chunks of data you need and combine them in a final SELECT.
Substitute for a View – You can substitute a CTE for a view. This is handy if you don’t have
permissions to create a view object or you don’t want to create one as it is only used in this one
query.
Recursion – Use CTEs to create recursive queries, that is, queries that can call themselves.
This is handy when you need to work on hierarchical data such as organization charts.
Ranking – Whenever you want to use a ranking function such as ROW_NUMBER(), RANK(),
NTILE(), etc.
Consider the orders and customers tables from the sample database.
The following statement shows how to use a subquery in the WHERE clause of
a SELECT statement to find the sales orders of the customers located in New York:
SELECT
    order_id,
    order_date,
    customer_id
FROM
    sales.orders
WHERE
    customer_id IN (
        SELECT
            customer_id
        FROM
            sales.customers
        WHERE
            city = 'New York'
    )
ORDER BY
    order_date DESC;
SELECT
    customer_id
FROM
    sales.customers
WHERE
    city = 'New York';
Note that you must always enclose the SELECT query of a subquery in parentheses () .
A subquery is also known as an inner query or inner select while the statement containing the
subquery is called an outer select or outer query:
SQL Server executes the whole query example above as follows:
First, it executes the subquery to get a list of customer identification numbers of the customers
located in New York.
SELECT
    customer_id
FROM
    sales.customers
WHERE
    city = 'New York';
Second, SQL Server substitutes customer identification numbers returned by the subquery in
the IN operator and executes the outer query to get the final result set.
As you can see, by using the subquery, you can combine two steps together. The subquery
removes the need for selecting the customer identification numbers and plugging them into the
outer query. Moreover, the query itself automatically adjusts whenever the customer data
changes.
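The two-step evaluation above can be sketched with Python's sqlite3 module and a made-up miniature of the customers/orders sample data:

```python
import sqlite3

# Subquery in the WHERE clause: orders for customers located in New York.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers VALUES (1,'New York'), (2,'Boston'), (3,'New York');
INSERT INTO orders VALUES (100,1), (101,2), (102,3), (103,1);
""")

ny_orders = [r[0] for r in conn.execute("""
    SELECT order_id FROM orders
    WHERE customer_id IN (
        SELECT customer_id FROM customers WHERE city = 'New York')
    ORDER BY order_id""")]
```

Changing a customer's city automatically changes which orders the outer query returns, which is the "automatically adjusts" property described above.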
Nesting subquery
A subquery can be nested within another subquery. SQL Server supports up to 32 levels of
nesting. Consider the following example:
SELECT
    product_name,
    list_price
FROM
    production.products
WHERE
    list_price > (
        SELECT
            AVG (list_price)
        FROM
            production.products
        WHERE
            brand_id IN (
                SELECT
                    brand_id
                FROM
                    production.brands
                WHERE
                    brand_name = 'Strider'
                    OR brand_name = 'Trek'
            )
    )
ORDER BY
    list_price;
First, SQL Server executes the following subquery to get a list of brand identification numbers of
the Strider and Trek brands:
SELECT
    brand_id
FROM
    production.brands
WHERE
    brand_name = 'Strider'
    OR brand_name = 'Trek';
Second, SQL Server calculates the average price list of all products that belong to those brands.
SELECT
    AVG (list_price)
FROM
    production.products
WHERE
    brand_id IN (6, 9);
Third, SQL Server finds the products whose list price is greater than the average list price of all
products with the Strider or Trek brand.
SQL Server subquery types
You can use a subquery in many places:
In place of an expression
With IN or NOT IN
With ANY or ALL
With EXISTS or NOT EXISTS
In an UPDATE, DELETE, or INSERT statement.
SQL Server subquery is used in place of an expression
If a subquery returns a single value, it can be used anywhere an expression is used.
SELECT
    order_id,
    order_date,
    (
        SELECT
            MAX (list_price)
        FROM
            sales.order_items i
        WHERE
            i.order_id = o.order_id
    ) AS max_list_price
FROM
    sales.orders o;
SELECT
    product_id,
    product_name
FROM
    production.products
WHERE
    category_id IN (
        SELECT
            category_id
        FROM
            production.categories
        WHERE
            category_name IN ('Mountain Bikes', 'Road Bikes')
    );
This query is evaluated in two steps:
1. First, the inner query returns a list of category identification numbers that match the
names Mountain Bikes and Road Bikes.
2. Second, these values are substituted into the outer query, which finds the product names
whose category identification number matches one of the values in the list.
Assuming that the subquery returns a list of value v1, v2, … vn. The ANY operator
returns TRUE if one of a comparison pair ( scalar_expression , vi) evaluates to TRUE ;
otherwise, it returns FALSE .
For example, the following query finds the products whose list prices are greater than or equal to
the average list price of any product brand.
SELECT
    product_name,
    list_price
FROM
    production.products
WHERE
    list_price >= ANY (
        SELECT
            AVG (list_price)
        FROM
            production.products
        GROUP BY
            brand_id
    );
For each brand, the subquery finds the average list price. The outer query uses these average
prices and determines which individual product's list price is greater than or equal to any brand's
average list price.
The ALL operator returns TRUE if all comparison pairs ( scalar_expression , vi) evaluate
to TRUE ; otherwise, it returns FALSE .
The following query finds the products whose list price is greater than or equal to every average
brand list price returned by the subquery:
SELECT
    product_name,
    list_price
FROM
    production.products
WHERE
    list_price >= ALL (
        SELECT
            AVG (list_price)
        FROM
            production.products
        GROUP BY
            brand_id
    );
The EXISTS operator returns TRUE if the subquery returns any results; otherwise, it returns FALSE.
The NOT EXISTS operator is the opposite of the EXISTS operator.
The following query finds the customers who bought products in 2017:
SELECT
    customer_id,
    first_name,
    last_name,
    city
FROM
    sales.customers c
WHERE
    EXISTS (
        SELECT
            customer_id
        FROM
            sales.orders o
        WHERE
            o.customer_id = c.customer_id
            AND YEAR (order_date) = 2017
    )
ORDER BY
    first_name,
    last_name;
If you use the NOT EXISTS instead of EXISTS , you can find the customers who did not buy any
products in 2017.
SELECT
    customer_id,
    first_name,
    last_name,
    city
FROM
    sales.customers c
WHERE
    NOT EXISTS (
        SELECT
            customer_id
        FROM
            sales.orders o
        WHERE
            o.customer_id = c.customer_id
            AND YEAR (order_date) = 2017
    )
ORDER BY
    first_name,
    last_name;
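A sketch of EXISTS versus NOT EXISTS using Python's sqlite3 module on a made-up customers/orders pair (no order date here, so the year filter is omitted):

```python
import sqlite3

# EXISTS keeps customers with at least one order; NOT EXISTS keeps the rest.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, first_name TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers VALUES (1,'Ann'), (2,'Bob'), (3,'Cat');
INSERT INTO orders VALUES (10, 1), (11, 3);
""")

buyers = [r[0] for r in conn.execute("""
    SELECT first_name FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id)
    ORDER BY first_name""")]

non_buyers = [r[0] for r in conn.execute("""
    SELECT first_name FROM customers c
    WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id)""")]
```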
In this tutorial, you have learned about the SQL Server subquery concept and how to use various
subquery types to query data.
WHAT IS CURSOR?
A cursor is a database object which is used to retrieve data from a result set one row at
a time. The cursor can be used when the data needs to be updated row by row.
1. Declaring Cursor
A cursor is declared by defining the SQL statement that returns the result set it will iterate over.
2. Opening Cursor
A cursor is opened for storing data retrieved from the result set.
3. Fetching Cursor
When a cursor is opened, rows can be fetched from the cursor one by one or in a
block to do data manipulation.
4. Closing Cursor
The cursor should be closed explicitly after data manipulation.
5. Deallocating Cursor
Cursors should be deallocated to delete cursor definition and release all the
system resources associated with the cursor.
In programming, we use a loop like FOR or WHILE to iterate through one item at a time,
the cursor follows the same approach and might be preferred because it follows the
same logic.
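The FETCH loop described here maps naturally onto the cursor in Python's sqlite3 module: fetchone() plays the role of FETCH NEXT, and its None return value plays the role of @@FETCH_STATUS going non-zero. A sketch with made-up data:

```python
import sqlite3

# Row-by-row iteration, analogous to a T-SQL cursor's FETCH loop.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (emp_id INTEGER, emp_name TEXT);
INSERT INTO Employee VALUES (1, 'Asha'), (2, 'Ravi');
""")

cur = conn.execute("SELECT emp_id, emp_name FROM Employee ORDER BY emp_id")
seen = []
row = cur.fetchone()          # like FETCH NEXT FROM emp_cursor
while row is not None:        # like WHILE @@FETCH_STATUS = 0
    seen.append(row)
    row = cur.fetchone()
cur.close()                   # like CLOSE / DEALLOCATE
```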
Cursor Syntax
DECLARE cursor_name CURSOR [ LOCAL | GLOBAL ]
[ FORWARD_ONLY | SCROLL ]
[ STATIC | KEYSET | DYNAMIC | FAST_FORWARD ]
[ READ_ONLY | SCROLL_LOCKS | OPTIMISTIC ]
[ TYPE_WARNING ] FOR select_statement
[ FOR UPDATE [ OF column_name [ ,...n ] ] ] [;]
Cursor Example
The following cursor is defined for retrieving employee_id and employee_name from the
Employee table. The @@FETCH_STATUS value is 0 while there are rows to fetch; when all rows
have been fetched, @@FETCH_STATUS becomes -1.
use Product_Database
SET NOCOUNT ON;

DECLARE @emp_id int, @emp_name varchar(20),
    @message varchar(max);

PRINT '-------- EMPLOYEE DETAILS --------';

DECLARE emp_cursor CURSOR FOR
SELECT emp_id, emp_name
FROM Employee
ORDER BY emp_id;

OPEN emp_cursor

FETCH NEXT FROM emp_cursor
INTO @emp_id, @emp_name

PRINT 'Employee_ID   Employee_Name'

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT '   ' + CAST(@emp_id as varchar(10)) + '   ' + CAST(@emp_name as varchar(20))

    FETCH NEXT FROM emp_cursor
    INTO @emp_id, @emp_name
END
CLOSE emp_cursor;
DEALLOCATE emp_cursor;
The Output of the above program will be as follows
CURSOR LIMITATIONS
A cursor is a memory-resident set of pointers, meaning it occupies memory from your
system that might otherwise be available for other processes.
Cursors can be faster than a while loop but they do have more overhead.
Another factor affecting cursor speed is the number of rows and columns brought into
the cursor. Time how long it takes to open your cursor and fetch statements.
Too many columns being dragged around in memory, which are never referenced in the
subsequent cursor operations, can slow things down.
The cursors are slower because they update tables row by row.
Suppose we have to retrieve data from two tables simultaneously by comparing primary
keys and foreign keys. In these types of problems, the cursor gives very poor
performance as it processes each and every row. On the other hand, using
joins in those conditions is feasible because they process only those rows which
meet the condition. So here joins are faster than cursors.
Suppose we have two tables, ProductTable and BrandTable. The primary key of
BrandTable is brand_id, which is stored in ProductTable as the foreign key brand_id. Now
suppose I have to retrieve brand_name from BrandTable using the foreign key brand_id
from ProductTable. In these situations the cursor program will be as follows:
use Product_Database
SET NOCOUNT ON;

DECLARE @brand_id int
DECLARE @brand_name varchar(20)

PRINT '-------- Brand Details --------';

DECLARE brand_cursor CURSOR FOR
SELECT distinct(brand_id)
FROM ProductTable;

OPEN brand_cursor

FETCH NEXT FROM brand_cursor
INTO @brand_id

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT brand_id, brand_name FROM BrandTable WHERE brand_id = @brand_id
    --(@brand_id is of ProductTable)

    FETCH NEXT FROM brand_cursor
    INTO @brand_id
END
CLOSE brand_cursor;
DEALLOCATE brand_cursor;
The same result can be achieved with a single join query:
select b.brand_id, b.brand_name from BrandTable b join
ProductTable p on b.brand_id = p.brand_id
As we can see from the above example, using joins reduces the lines of code and gives
faster performance when huge numbers of records need to be processed.
CREATE TABLE TriggerDemo_Parent
(
ID INT IDENTITY(1,1) PRIMARY KEY,
Emp_Name VARCHAR(50),
Emp_Salary INT
)
GO

CREATE TABLE TriggerDemo_History
(
ID INT IDENTITY(1,1) PRIMARY KEY,
ParentID INT,
Operation VARCHAR(50)
)
GO
To track the INSERT operation, we will create a DML trigger that will be fired after performing an
INSERT operation on the parent table. This trigger will retrieve the last inserted ID value to that
parent table from the virtual inserted table, as in the CREATE TRIGGER T-SQL statement below:
CREATE TRIGGER TriggerDemo_AfterInsert
ON TriggerDemo_Parent
AFTER INSERT
AS
INSERT INTO TriggerDemo_History VALUES ((SELECT TOP 1 inserted.ID FROM inserted), 'Insert')
GO
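The same audit idea can be sketched in Python's sqlite3 module; SQLite triggers fire per row and expose the new row as NEW instead of a virtual inserted table. The table names below follow the example, while the trigger name and data are made up:

```python
import sqlite3

# AFTER INSERT trigger writing an audit row into a history table (sqlite3 sketch).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TriggerDemo_Parent (ID INTEGER PRIMARY KEY,
                                 Emp_Name TEXT, Emp_Salary INT);
CREATE TABLE TriggerDemo_History (ParentID INT, Operation TEXT);

CREATE TRIGGER TrackInsert AFTER INSERT ON TriggerDemo_Parent
BEGIN
    INSERT INTO TriggerDemo_History VALUES (NEW.ID, 'Insert');
END;
""")

conn.execute("INSERT INTO TriggerDemo_Parent VALUES (1, 'Ann', 500)")
history = conn.execute("SELECT * FROM TriggerDemo_History").fetchall()
```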
Tracking the DELETE operation can be achieved by creating a DML trigger that is fired after
performing the DELETE operation on the parent table. Again, the trigger will retrieve the ID value of
the last deleted record from that parent table from the virtual deleted table, as in the CREATE
TRIGGER T-SQL statement below:
CREATE TRIGGER TriggerDemo_AfterDelete
ON TriggerDemo_Parent
AFTER DELETE
AS
INSERT INTO TriggerDemo_History VALUES ((SELECT TOP 1 deleted.ID FROM deleted), 'Delete')
GO
Finally, we will track also the UPDATE operation by creating a DML trigger that will be fired after
performing an UPDATE operation on the parent table. Within this trigger, we will retrieve the last
updated ID value from that parent table from the virtual inserted table, taking into consideration that
the UPDATE process is performed by deleting the record and inserting a new record with the
updated values, as in the CREATE TRIGGER T-SQL statement below:
CREATE TRIGGER TriggerDemo_AfterUpdate
ON TriggerDemo_Parent
AFTER UPDATE
AS
INSERT INTO TriggerDemo_History VALUES ((SELECT TOP 1 inserted.ID FROM inserted), 'UPDATE')
GO
The tables and the triggers are ready now for our testing. If you try to insert a new record into the
parent table using the INSERT INTO T-SQL statement below:
Then by checking the execution plan generated by executing the previous INSERT statement, you
will see that two insert operations will be performed, affecting two tables; the parent table with the
values specified in the INSERT statement and the history table due to firing the AFTER INSERT
trigger, as shown in the execution plan below:
It is also clear when you check the data inserted into both the parent and history tables using the
SELECT statements below:
SELECT * FROM TriggerDemo_Parent
SELECT * FROM TriggerDemo_History
GO
Where the values specified in the INSERT statement will be inserted into the parent table, and the
insert log that contains the ID of the inserted record and the operation performed will be inserted into
the history table, as shown in the result below:
Now, if you try to update an existing record in the parent table using the UPDATE T-SQL statement
below:
UPDATE TriggerDemo_Parent SET Emp_Salary=550 WHERE ID=1
And check the execution plan generated by executing the previous UPDATE statement, you will see
that the update operation will be followed by an insert operation affecting two different tables; the
parent table will be updated with the value specified in the UPDATE statement and the insert
operation into the history table due to firing the AFTER UPDATE trigger, as shown in the execution
plan below:
Checking both the parent and the history table records using the SELECT statements below:
SELECT * FROM TriggerDemo_Parent
SELECT * FROM TriggerDemo_History
GO
You will see that the update statement will modify the Emp_Salary value in the parent table with the
value specified in the UPDATE statement, and the update log that contains the ID of the updated
record and the operation performed will be inserted into the history table, as shown in the result
below:
In the last scenario of the AFTER DML trigger, we will track the deletion of an existing record from
the parent table using the DELETE T-SQL statement below:
Then check the execution plan generated by executing the previous DELETE statement, you will see
that the DELETE operation will be followed by the insert operation, affecting two different tables; the
parent table from which the record with the provided ID in the WHERE clause of the DELETE
statement will be deleted and the insert operation into the history table due to firing the AFTER
DELETE trigger, as shown in the execution plan below:
If you check both the parent and the history table records using the SELECT statements below:
SELECT * FROM TriggerDemo_Parent
SELECT * FROM TriggerDemo_History
GO
You will see that the record with the ID value equal to 1 was deleted from the parent table that is
provided in the DELETE statement, and the delete log that contains the ID of the deleted record and
the operation performed will be inserted into the history table, as shown in the result below:
CREATE TABLE TriggerDemo_NewParent
(
ID INT IDENTITY(1,1) PRIMARY KEY,
Emp_Name VARCHAR(50),
Emp_Salary INT
)
GO

CREATE TABLE TriggerDemo_InsteadParent
(
ID INT IDENTITY(1,1) PRIMARY KEY,
ParentID INT,
Operation VARCHAR(50)
)
GO
After creating the two tables, we will insert a single record into the source table for our demo using
the INSERT INTO statement below:
For this demo, we will create three triggers to override the INSERT, UPDATE, and DELETE
operations. The first trigger will be used to prevent any insert operation on the parent table and
log that change into the alternative table. The trigger is created using the CREATE TRIGGER T-SQL
statement below:
The second trigger is used to prevent any update operation on the parent table and log that
change into the alternative table. This trigger is created as below:
CREATE TRIGGER InsteadOfUpdateTrigger
ON TriggerDemo_NewParent
INSTEAD OF UPDATE
AS
INSERT INTO TriggerDemo_InsteadParent VALUES ((SELECT TOP 1 inserted.ID FROM inserted), 'Trying to Update an existing ID')
GO
The last trigger will be used to prevent any delete operation on the parent table and log that
change into the alternative table. This trigger is created as follows:
The two tables and the three triggers are ready now. If you try to insert a new value into the parent
table using the INSERT INTO T-SQL statement below:
Then check both the parent and the alternative table records using the SELECT statements below:
SELECT * FROM TriggerDemo_NewParent
SELECT * FROM TriggerDemo_InsteadParent
GO
Trying to update an existing record in the parent table using the UPDATE T-SQL statement below:
Then checking both the parent and the alternative table records using the SELECT statements
below:
SELECT * FROM TriggerDemo_NewParent
SELECT * FROM TriggerDemo_InsteadParent
GO
You will see from the result that the Emp_Salary value of the record with the ID value equal to 1 from
the parent table will not be changed, and the log for the update operation is inserted into the
alternative table due to having the INSTEAD OF UPDATE trigger in the parent table, as shown in the
result below:
Finally, if we try to delete an existing record from the parent table using the DELETE T-SQL
statement below:
DELETE FROM TriggerDemo_NewParent WHERE ID=1
And check both the parent and the alternative table records using the SELECT statements below:
SELECT * FROM TriggerDemo_NewParent
SELECT * FROM TriggerDemo_InsteadParent
GO
It will be clear from the result that the record with the ID value equal to 1 from the parent table will
not be deleted, and the log for the delete operation is inserted into the alternative table due to having
the INSTEAD OF DELETE trigger in the parent table, as shown in the result below:
CREATE TRIGGER TriggerDemo_RaiseError
ON TriggerDemo_Parent
AFTER UPDATE
AS
RAISERROR ('Updating the Employee Salary is not allowed', 16, 1)
GO
If you try to update the Emp_Salary value of the employee with the ID value equal to 1 using the
UPDATE statement below:
An error message will be raised in the Messages tab, that contains the message provided in the
created trigger, as shown below:
Checking the parent table data using the SELECT statement below:
You will see from the result that the Emp_Salary is updated successfully, as shown below:
If you need the AFTER UPDATE trigger to stop the update operation after raising the error message,
the ROLLBACK statement can be added to the trigger in order to rollback the update operation that
fired that trigger, recalling that the trigger and the statement that fires the trigger will be executed in
the same transaction. This can be achieved using the ALTER TRIGGER T-SQL statement, see:
AFTER UPDATE
ROLLBACK
GO
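Putting the fragments together, the altered trigger might look like this (the trigger name and the message text are assumptions):

```sql
ALTER TRIGGER TR_Parent_AfterUpdate ON TriggerDemo_NewParent
AFTER UPDATE
AS
BEGIN
    RAISERROR ('Updates to this table are not allowed', 16, 1);
    ROLLBACK  -- undo the UPDATE that fired this trigger
END
GO
```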
If you try to update the Emp_Salary value of the employee with the ID equal to 1 using the UPDATE
statement below:
Again, an error message will be displayed in the Messages tab, but this time, the update operation
will be rolled back completely, as shown in the error messages below:
Checking the value from the source table using the SELECT statement below:
You will see that the Emp_Salary value has not changed, as the AFTER UPDATE trigger rolled back
the overall transaction after raising the error message, as shown in the table result below:
Trigger Disadvantages
For all the advantages mentioned above, SQL Server triggers increase the complexity of the
database. A badly designed or overused trigger can cause major performance problems, such as
blocked sessions, because it extends the life of the transaction that fired it, adds overhead by
executing every time an INSERT, UPDATE or DELETE action is performed, and can even lead to data
loss issues. Triggers are also not easy to view and trace, especially when there is no
documentation about them, because they are invisible to developers and applications.
Trigger Alternatives: Enforcing Integrity
If you find that triggers are harming the performance of your SQL Server instance, you should
replace them with other solutions. For example, rather than using triggers to enforce entity
integrity, enforce it at the lowest level using PRIMARY KEY and UNIQUE constraints. The same
applies to domain integrity, which should be enforced through CHECK constraints, and referential
integrity, which should be enforced through FOREIGN KEY constraints. Use DML triggers only when
the features supported by a specific constraint cannot meet your application's requirements.
Let us compare enforcing the domain integrity using DML triggers and using the CHECK
constraints. Assume that we need to enforce inserting positive values only to the Emp_Salary
column. We will start with creating a simple table using the CREATE TABLE T-SQL statement
below:
(
Emp_Salary INT
)
GO
Then define the AFTER INSERT DML trigger that ensures that you insert a positive value to the
Emp_Salary column by rolling back the transaction if a user inserts a negative salary value, using
the CREATE TRIGGER T-SQL statement below:
AFTER INSERT
AS
IF @EmpSal<0
BEGIN
RAISERROR ('Cannot insert negative salary',16,10);
ROLLBACK
END
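Filling in the elided lines, the complete trigger might read as follows (the trigger and table names are assumptions):

```sql
CREATE TRIGGER TR_CheckSalary ON EmployeeSalaryTrigger
AFTER INSERT
AS
    DECLARE @EmpSal INT;
    -- read the salary value from the pseudo-table of inserted rows
    SELECT @EmpSal = Emp_Salary FROM inserted;
    IF @EmpSal < 0
    BEGIN
        RAISERROR ('Cannot insert negative salary', 16, 10);
        ROLLBACK  -- undo the INSERT that fired this trigger
    END
GO
```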
For comparison purposes, we will create another simple table, with the same schema, and define a
CHECK constraint within the CREATE TABLE statement to accept only positive values in the
Emp_Salary column, as shown below:
(
)
GO
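The elided definition with the CHECK constraint might look like this (the column names are assumptions chosen to match the three-value INSERT used later):

```sql
CREATE TABLE EmployeeSalaryConstraint
(
    Emp_First_Name VARCHAR(50),                  -- assumed column name
    Emp_Last_Name  VARCHAR(50),                  -- assumed column name
    Emp_Salary     INT CHECK (Emp_Salary >= 0)   -- mirrors the trigger's negative-value check
)
GO
```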
If you try to insert the below record that contains a negative Emp_Salary value into the first table
that has a predefined trigger, using the INSERT INTO statement below:
GO
The INSERT statement will fail, raising an error message showing that you can't insert a negative
value into the Emp_Salary column, and the overall transaction will be rolled back due to the AFTER
INSERT trigger, as shown in the error message below:
Also, if you try to insert the same record that contains a negative Emp_Salary value into the second
table that has a predefined CHECK constraint using the INSERT INTO statement below:
INSERT INTO EmployeeSalaryConstraint VALUES ('Ali', 'Fadi',-4)
The INSERT statement will fail again showing that you are trying to insert the value that conflicts
with the CHECK constraint condition, as shown in the error message below:
From the previous results, you see that both the trigger and the CHECK constraint methods achieve
the goal by preventing you from inserting negative Emp_Salary values. But which one is better? Let
us compare the performance of the two methods by checking the execution plan weight for each
one. From the generated execution plans after executing the two queries, you will see that the
trigger method weight is three times the CHECK constraint method weight, as shown in the
execution plan comparison below:
Also, to compare the execution time consumed by each one, let us run each one 10,000 times using
the T-SQL statements below:
GO 10000
GO 10000
You will see that the first method, using the trigger, takes about 31ms to execute completely,
while the second method, using the CHECK constraint, takes only 17ms, about half the time required
by the trigger method. This is because the trigger extends the transaction's life and rolls back
the query that fired it after executing it when an integrity violation is found, causing
performance degradation due to the rollback process. The case is different with the CHECK
constraint, which does its job before any modification of the data, requiring no rollback in the
case of a violation.
(
Emp_Salary INT
)
GO

(
ProdID INT,
ProdSalary INT,
TS DATETIME
)
GO
Once both tables are created successfully, we will create the AFTER INSERT, UPDATE DML trigger
that will write a record into the history table if any new row is inserted into the production table or an
existing record is modified using the CREATE TRIGGER T-SQL statement below:
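The audit trigger itself was not captured; a sketch, with the trigger and table names assumed (the history columns ProdID, ProdSalary, and TS come from the CREATE TABLE fragment above), might be:

```sql
CREATE TRIGGER TR_LogProductionChanges ON ProductionTable
AFTER INSERT, UPDATE
AS
    -- write one history row for every inserted or updated production row
    INSERT INTO ProductionHistory (ProdID, ProdSalary, TS)
    SELECT ID, Emp_Salary, GETDATE()
    FROM inserted;
GO
```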
To compare logging the changes using the trigger method and the OUTPUT clause, we need to
create two new simple tables, the production and the history tables, with the same schema as the
previous two tables, but this time without defining a trigger, using the CREATE TABLE T-SQL
statements below:
(
ID INT IDENTITY (1,1) PRIMARY KEY,
Emp_Salary INT
)
GO

(
ProdID INT,
ProdSalary INT,
TS DATETIME
)
GO
Now the four tables are ready for the testing. We will insert one record into the first production table
that has a trigger using the INSERT INTO T-SQL statement below:
GO
Then we will insert the same record into the second production table using the OUTPUT clause. The
below INSERT INTO statement will act as two insert statements; the first one will insert the same
record into the production table and the second insert statement beside the OUTPUT clause will
insert the insertion log into the history table:
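The statement described was not captured; an INSERT with an OUTPUT clause along those lines might be (table and column names partly assumed):

```sql
INSERT INTO ProductionTable_NoTrigger (Emp_First_Name, Emp_Last_Name, Emp_Salary)
OUTPUT inserted.ID, inserted.Emp_Salary, GETDATE()   -- acts as the "second insert"
INTO ProductionHistory_NoTrigger (ProdID, ProdSalary, TS)
VALUES ('John', 'Smith', 3000)
GO
```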
Checking the data inserted into the four production and history tables, you will see that both
methods, the trigger and OUTPUT methods, will write the same log into the history table
successfully and in the same way, as shown in the result below:
From the generated execution plans after executing the two queries, you will see that the trigger
method weight is about (21%+36%) 57% of the overall weight, while the OUTPUT method weight is
about 43%, a small difference, as shown in the execution plans comparison below:
The performance difference is clear when comparing the execution time consumed by each method:
logging the changes using the trigger method consumes (114+125) 239ms to execute completely, while
the method using the OUTPUT clause consumes only 5ms, about 2% of the time used by the trigger
method, as shown clearly in the Time Statistics below:
It is clear now from the previous result that using the OUTPUT method is better than using triggers
for changes auditing.
Differences Between SCOPE_IDENTITY, IDENT_CURRENT, and @@IDENTITY in SQL Server 2008
What SCOPE_IDENTITY is
SCOPE_IDENTITY is:
1. SCOPE_IDENTITY returns the last IDENTITY value inserted into an IDENTITY column in the same
scope.
2. SCOPE_IDENTITY returns the last identity value generated for any table in the current session and
the current scope.
3. A scope is a module; a Stored Procedure, trigger, function, or batch.
4. Thus, two statements are in the same scope if they are in the same Stored Procedure, function, or
batch.
5. The SCOPE_IDENTITY() function will return the NULL value if the function is invoked before any
insert statements into an identity column occur in the scope.
What IDENT_CURRENT is
IDENT_CURRENT is:
1. IDENT_CURRENT returns the last identity value generated for a specific table in any session and
any scope.
2. IDENT_CURRENT is not limited by scope and session; it is limited to a specified table.
What @@IDENTITY is
@@IDENTITY is:
1. @@IDENTITY returns the last identity value generated for any table in the current session, across
all scopes.
2. After an INSERT, SELECT INTO, or bulk copy statement completes, @@IDENTITY contains the last
identity value generated by the statement.
3. If the statement did not affect any tables with identity columns, @@IDENTITY returns NULL.
4. If multiple rows are inserted, generating multiple identity values, @@IDENTITY returns the last
identity value generated.
5. The @@IDENTITY value does not revert to a previous setting if the INSERT or SELECT INTO
statement or bulk copy fails, or if the transaction is rolled back.
Differences
Example
SELECT IDENT_CURRENT('Table1')
SELECT IDENT_CURRENT('Table2')
Run the following SQL statements in a different query window, in other words a different session:
SELECT @@IDENTITY
SELECT SCOPE_IDENTITY()
SELECT IDENT_CURRENT('Table1')
SELECT IDENT_CURRENT('Table2')
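A minimal end-to-end illustration, using the Table1/Table2 names from the SELECT statements above (the column definitions are assumptions):

```sql
CREATE TABLE Table1 (ID INT IDENTITY(1,1),   Val VARCHAR(10));
CREATE TABLE Table2 (ID INT IDENTITY(100,1), Val VARCHAR(10));
GO
INSERT INTO Table1 (Val) VALUES ('A');  -- generates identity value 1
INSERT INTO Table2 (Val) VALUES ('B');  -- generates identity value 100

SELECT @@IDENTITY;               -- 100: last identity in this session, any scope
SELECT SCOPE_IDENTITY();         -- 100: last identity in this session and scope
SELECT IDENT_CURRENT('Table1');  -- 1:   last identity for Table1, any session
SELECT IDENT_CURRENT('Table2');  -- 100: last identity for Table2, any session
GO
```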
Hopefully this helps to clarify the differences between these functions.
1. A function must have a name, and a function name can never start with a special
character such as @, $, #, and so on.
2. Functions only work with SELECT statements.
3. Functions can be used anywhere in SQL, like AVG, COUNT, SUM, MIN, DATE
and so on, with SELECT statements.
4. Functions compile every time.
5. Functions must return a value or result.
6. Functions only work with input parameters; they cannot have output parameters.
7. TRY and CATCH statements are not used in functions.
Function Types
SQL Server supports two types of functions - user defined and system.
Before we create and use functions, let's start with a new table.
In this type of function, we select table data using a user-created function. A function
is created using the CREATE FUNCTION SQL command. The following query creates a new
user-defined function.
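The listing did not survive extraction; a sketch of such a table-valued function, with the function, table, and column names assumed, might be:

```sql
CREATE FUNCTION dbo.fn_GetEmployees ()
RETURNS TABLE
AS
RETURN
(
    SELECT EmpID, EmpName, Salary  -- assumed Employee columns
    FROM Employee
);
GO
```

It would then be queried like a table: SELECT * FROM dbo.fn_GetEmployees();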
Scalar function
Next we will display two different columns of the Employee table joined into a single
column of the result row. Create a two-column join function as in the following:
Now the created scalar function is used for displaying Employee info in one column data
row as in the following:
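The two listings were not captured; a scalar function of the kind described, with the function and column names assumed, might be:

```sql
CREATE FUNCTION dbo.fn_EmployeeInfo (@FirstName VARCHAR(50), @LastName VARCHAR(50))
RETURNS VARCHAR(101)
AS
BEGIN
    -- join the two column values into one string
    RETURN @FirstName + ' ' + @LastName;
END
GO

-- display Employee info in one column of the result row
SELECT dbo.fn_EmployeeInfo(FirstName, LastName) AS EmployeeInfo
FROM Employee;
```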
System function
This function is used for inserting records and is a built-in system function.
Here are some basic aggregate function examples using our Employee table.
Getting the highest salary record with the max() function as in the following.
Count the total salary with the sum() function as in the following.
Now we will show one more example of how to store data using a function and display
that stored data using a SQL print command.
print dbo.fun_PrintNumber()
Now one more mathematical function to create a two-number addition.
print dbo.Fun_Addition(12,13)
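The definition of dbo.Fun_Addition invoked above was not shown; a minimal version would be:

```sql
CREATE FUNCTION dbo.Fun_Addition (@a INT, @b INT)
RETURNS INT
AS
BEGIN
    RETURN @a + @b;  -- add the two input numbers
END
GO
```

With this definition, print dbo.Fun_Addition(12,13) prints 25.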
Summary
In this article, I explained functions in SQL Server. I hope you understand about
functions and how they work in SQL Server databases.
A stored procedure is nothing more than prepared SQL code that you save so you can
reuse the code over and over again. So if you think about a query that you write over
and over again, instead of having to write that query each time you would save it as a
stored procedure and then just call the stored procedure to execute the SQL code that
you saved as part of the stored procedure.
In addition to running the same SQL code over and over again you also have the ability
to pass parameters to the stored procedure, so depending on what the need is the
stored procedure can act accordingly based on the parameter values that were passed.
Take a look through each of these topics to learn how to get started with stored
procedure development for SQL Server.
As mentioned in the tutorial overview a stored procedure is nothing more than stored
SQL code that you would like to use over and over again. In this example we will look
at creating a simple stored procedure.
Explanation
Before you create a stored procedure you need to know what your end result is,
whether you are selecting data, inserting data, etc..
In this simple example we will just select all data from the Person.Address table that is
stored in the AdventureWorks database.
To create a stored procedure to do this the code would look like this:
USE AdventureWorks
GO
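The procedure body did not survive extraction; given the name and the SELECT described above, it would be along these lines:

```sql
CREATE PROCEDURE dbo.uspGetAddress
AS
SELECT *
FROM Person.Address
GO
```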
To call the procedure to return the contents from the table specified, the code would be:
EXEC dbo.uspGetAddress
-- or
EXEC uspGetAddress
--or just simply
uspGetAddress
When creating a stored procedure you can either use CREATE PROCEDURE or
CREATE PROC. After the stored procedure name you need to use the keyword "AS"
and then the rest is just the regular SQL code that you would normally execute.
One thing to note is that you cannot use the keyword "GO" in the stored
procedure. Once the SQL Server compiler sees "GO" it assumes it is the end of the
batch.
Also, you cannot change the database context within the stored procedure, such as by using
"USE dbName". The reason for this is that this would be a separate batch, and a
stored procedure is a collection of only one batch of statements.
Overview
The real power of stored procedures is the ability to pass parameters and have the
stored procedure handle the differing requests that are made. In this topic we will look
at passing parameter values to a stored procedure.
Explanation
Just like you have the ability to use parameters with your SQL code you can also setup
your stored procedures to accept one or more parameter values.
USE AdventureWorks
GO
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM Person.Address
WHERE City = @City
GO
We can also do the same thing, but allow the users to give us a starting point to search
the data. Here we can change the "=" to a LIKE and use the "%" wildcard.
USE AdventureWorks
GO
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM Person.Address
WHERE City LIKE @City + '%'
GO
Both of the preceding examples assume that a parameter value will always be passed. If you try
to execute the procedure without passing a parameter value, you will get an error message. One
way around this is to give the parameter a default value, such as NULL:
USE AdventureWorks
GO
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30) = NULL
AS
SELECT *
FROM Person.Address
WHERE City = @City
GO
We could change this stored procedure and use the ISNULL function to get around
this. So if a value is passed it will use the value to narrow the result set and if a value is
not passed it will return all records. (Note: if the City column has NULL values this will
not include these values. You will have to add additional logic for City IS NULL)
USE AdventureWorks
GO
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30) = NULL
AS
SELECT *
FROM Person.Address
WHERE City = ISNULL(@City,City)
GO
USE AdventureWorks
GO
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30) = NULL, @AddressLine1
nvarchar(60) = NULL
AS
SELECT *
FROM Person.Address
WHERE City = ISNULL(@City,City)
AND AddressLine1 LIKE '%' + ISNULL(@AddressLine1 ,AddressLine1) + '%'
GO
In a previous topic we discussed how to pass parameters into a stored procedure, but
another option is to pass parameter values back out from a stored procedure. One
option for this may be that you call another stored procedure that does not return any
data, but returns parameter values to be used by the calling stored procedure.
Explanation
Setting up output parameters for a stored procedure is basically the same as setting up
input parameters; the only difference is that you use the OUTPUT clause after the
parameter name to specify that it should return a value. The OUTPUT clause can be
specified by using either the keyword "OUTPUT" or just "OUT". For these examples we
are still using the AdventureWorks database, so all the stored procedures should be
created in the AdventureWorks database.
Simple Output
CREATE PROCEDURE dbo.uspGetAddressCount @City nvarchar(30), @AddressCount int
OUTPUT
AS
SELECT @AddressCount = count(*)
FROM AdventureWorks.Person.Address
WHERE City = @City
To call this stored procedure we would execute it as follows. First we are going to
declare a variable, execute the stored procedure and then select the returned valued.
This can also be done as follows, where the stored procedure parameter names are not
passed.
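The calls described were not captured; they would look something like this (the city value 'Calgary' is just an example):

```sql
DECLARE @AddressCount int

-- pass the parameters by name
EXEC dbo.uspGetAddressCount @City = 'Calgary', @AddressCount = @AddressCount OUTPUT
SELECT @AddressCount AS AddressCount

-- or pass them by position, without the parameter names
EXEC dbo.uspGetAddressCount 'Calgary', @AddressCount OUTPUT
SELECT @AddressCount AS AddressCount
```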
A great option that was added in SQL Server 2005 is the ability to use the
TRY...CATCH paradigm that exists in other development languages. Error handling
in SQL Server has not always been the easiest thing, so this option definitely makes it
much easier to code for and handle errors.
Explanation
If you are not familiar with the TRY...CATCH paradigm, it is basically two blocks of code
within your stored procedures: the code you want to execute goes in the TRY section, and
if any errors occur they are handled in the CATCH section.
Let's take a look at an example of how this can be done. As you can see we are using
a basic SELECT statement that is contained within the TRY section, but for some
reason if this fails it will run the code in the CATCH section and return the error
information.
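The example itself was not captured; a sketch along the lines described might be:

```sql
BEGIN TRY
    -- a basic SELECT that fails for illustration (divide by zero)
    SELECT 1/0 AS Result
END TRY
BEGIN CATCH
    -- return the error information
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage
END CATCH
```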
One very helpful thing to do with your stored procedures is to add comments to your
code. This helps you to know what was done and why for future reference, but also
helps other DBAs or developers that may need to make modifications to the code.
Explanation
SQL Server offers two types of comments in a stored procedure; line comments and
block comments. The following examples show you how to add comments using both
techniques. Comments are displayed in green in a SQL Server query window.
Line Comments
To create line comments you just use two dashes "--" in front of the code you want to
comment. You can comment out one or multiple lines with this technique.
This next example shows you how to put the comment on the same line.
-- this procedure gets a list of addresses based on the city value that is passed
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM Person.Address
WHERE City = @City -- the @City parameter value will narrow the search criteria
GO
Block Comments
To create block comments the block is started with "/*" and ends with "*/". Anything
within that block will be a comment section.
/*
-this procedure gets a list of addresses based
on the city value that is passed
-this procedure is used by the HR system
*/
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM Person.Address
WHERE City = @City
GO
/*
-this procedure gets a list of addresses based
on the city value that is passed
-this procedure is used by the HR system
*/
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM Person.Address
WHERE City = @City -- the @City parameter value will narrow the search criteria
GO
One good thing to do for all of your SQL Server objects is to come up with a naming
convention to use. There are not any hard and fast rules, so this is really just a
guideline on what should be done.
Explanation
SQL Server uses object names and schema names to find a particular object that it
needs to work with. This could be a table, stored procedure, function, etc.
It is a good practice to come up with a standard naming convention for your objects,
including stored procedures.
Standardize on a Prefix
It is a good idea to come up with a standard prefix to use for your stored
procedures. Do not use "sp_" (SQL Server first searches the master database for
procedures with that prefix), so here are some other options.
usp_
sp
usp
etc...
To be honest it does not really matter what you use. SQL Server will figure out that it is
a stored procedure, but it is helpful to differentiate the objects, so it is easier to manage.
spInsertPerson
uspInsertPerson
usp_InsertPerson
InsertPerson
Again this is totally up to you, but some standard is better than none.
So based on the actions that you may take with a stored procedure, you may use:
Insert
Delete
Update
Select
Get
Validate
etc...
uspInsertPerson
uspGetPerson
spValidatePerson
SelectPerson
etc...
Another option is to put the object name first and the action second, this way all of the
stored procedures for an object will be together.
uspPersonInsert
uspPersonDelete
uspPersonGet
etc...
Again, it does not really matter what action words you use, but they will be helpful
for classifying the behavior of the procedure.
Schema Names
Another thing to consider is the schema that you will use when saving the objects. A
schema is a collection of objects, so basically just a container. This is useful if you
want to keep all utility-like objects together or have some objects that are HR related,
etc.
This logical grouping will help you differentiate the objects further and allow you to focus
on a group of objects.
HR.uspGetPerson
HR.uspInsertPerson
UTIL.uspGet
UTIL.uspGetLastBackupDate
etc...
Here is a simple example to create a new schema called "HR" and giving authorization
to this schema to "DBO".
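The statement would be:

```sql
CREATE SCHEMA HR AUTHORIZATION dbo
GO
```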
Schema
Prefix
Action
Object
Take the time to think through what makes the most sense and try to stick to your
conventions.
There are many tricks that can be used when you write T-SQL code. One of these is to
reduce the amount of network data for each statement that occurs within your stored
procedures. Every time a SQL statement is executed it returns the number of rows that
were affected. By using "SET NOCOUNT ON" within your stored procedure you can
shut off these messages and reduce some of the traffic.
Explanation
As mentioned above, there is not really any reason to return messages about what is
occurring within SQL Server when you run a stored procedure. If you are running things
from a query window this may be useful, but most end users that run stored procedures
through an application would never see these messages.
You can still use @@ROWCOUNT to get the number of rows impacted by a SQL
statement, so turning SET NOCOUNT ON will not change that behavior.
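For example, SET NOCOUNT ON is usually placed at the top of the procedure body, here shown with the uspGetAddress procedure used throughout this tutorial:

```sql
CREATE PROCEDURE dbo.uspGetAddress @City nvarchar(30)
AS
SET NOCOUNT ON  -- suppress the "(n rows affected)" messages

SELECT *
FROM Person.Address
WHERE City = @City
GO
```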
In addition to creating stored procedures there is also the need to delete stored
procedures. This topic shows you how you can delete stored procedures that are no
longer needed.
Explanation
The syntax is very straightforward to drop a stored procedure, here are some examples.
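For example, dropping the procedures created earlier in this tutorial:

```sql
DROP PROCEDURE dbo.uspGetAddress
GO

-- several procedures can be dropped in one statement
DROP PROCEDURE dbo.uspGetAddress, dbo.uspGetAddressCount
GO
```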
When you first create your stored procedures they may work as planned, but how do you
modify an existing stored procedure? In this topic we look at the ALTER PROCEDURE
command and how it is used.
Explanation
To change the stored procedure and save the updated code you would use the ALTER
PROCEDURE command as follows.
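For example, altering the earlier uspGetAddress procedure (the added ORDER BY is just an illustrative change):

```sql
ALTER PROCEDURE dbo.uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM Person.Address
WHERE City = @City
ORDER BY AddressLine1  -- the new logic
GO
```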
Now the next time that the stored procedure is called by an end user it will use this new
logic.
Indexes in SQL Server
Indexes are used in relational databases to quickly retrieve data. They are similar to the index
at the end of a book, whose purpose is to find a topic quickly.
Data is internally stored in a SQL Server database in “pages” where the size of each page is
8KB.
Eight contiguous pages are called an "extent".
When we create a table, one extent is allocated to it, and when that extent is filled with
data another extent is allocated; this extent may or may not be contiguous with the first
extent.
Table Scan
In SQL Server there is a system table named sysindexes that contains information about the
indexes available on the tables in the database. Even if a table has no index, there will be one
row in the sysindexes table for that table indicating that there is no index on it.
When you write a SELECT statement with a condition in the WHERE clause, SQL Server first
refers to the "indid" (index id) column of the sysindexes table to determine whether the column
on which you wrote the condition has an index. When the indid column indicates there is no
index, SQL Server gets the address of the first extent of the table and then searches each and
every row of the table for the given value.
This process of checking the given condition against each and every row of the table is called
a table scan.
The drawback of a table scan is that as the number of rows in the table increases, the time
taken to retrieve the data also increases, which affects performance.
Type of Indexes
In SQL Server indexes are one of the following two types:
1. Clustered Index
2. Non-Clustered Index.
1. Clustered Index
A clustered index is an index that arranges the rows physically in sorted order.
An advantage of a clustered index is that searching for a range of values will be fast. A
clustered index is internally maintained using a B-Tree data structure, and the leaf nodes of
the B-Tree of a clustered index contain the table data itself; you can create only one
clustered index for a table.
When you write a SELECT statement with a condition in a WHERE clause, SQL Server first refers
to the "indid" column of the sysindexes table, and when this column contains the value 1 it
indicates that the table has a clustered index. In this case it refers to the "root" column to
get the address of the root node of the B-Tree of the clustered index, searches the B-Tree to
find the leaf node that contains the first row that satisfies the given condition, and
retrieves all the rows that satisfy the given condition, which will be in sequence.
Since a clustered index arranges the rows physically in sorted order, inserts and updates
become slower because each row must be inserted or updated in its sorted position.
SQL Server must find the page into which the row must be inserted or updated, and if free
space is not available on that page it must first create free space before performing the
insert, update, or delete.
To overcome this problem, specify a fill factor while creating the clustered index. When you
specify a fill factor of 70, every page of that table will be filled to 70% with data and the
remaining 30% will be left free.
Since free space is available on every page, inserts and updates will be faster.
2. Non-clustered Index
A non-clustered index is an index that does not arrange the rows physically in sorted order.
An advantage of a non-clustered index is that searching for values that are in a range will
be fast.
You can create a maximum of 999 non-clustered indexes on a table (the limit was 254 up to SQL
Server 2005).
A non-clustered index is also maintained in a B-Tree data structure, but the leaf nodes of a
B-Tree of a non-clustered index contain pointers to the pages that contain the table data,
not the table data itself.
When you write a SELECT statement with a condition in a WHERE clause, SQL Server refers to
the "indid" column of the sysindexes table, and when this column contains a value in the
range of 2 to 1000 it indicates that the table has a non-clustered index. In this case it
refers to the "root" column of the sysindexes table to get the address of the root node of
the B-Tree of the non-clustered index, then searches the B-Tree to find the leaf node that
contains the pointers to the rows that contain the value you are searching for, and retrieves
those rows.
Inserts and updates are not affected by a non-clustered index because it does not arrange
the rows physically in sorted order.
With a non-clustered index, rows are simply inserted and updated at the end of the table.
For example, the following example creates a non-clustered index on the department_no column of
the emp table.
The next example creates a non-clustered index on the combination of the department number and
job columns of the emp table.
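The two statements referred to would be along these lines (the composite index name is an assumption; dnoidx is the name used later in this section):

```sql
-- non-clustered index on a single column
CREATE NONCLUSTERED INDEX dnoidx ON emp (department_no);

-- composite non-clustered index on the department number and job columns
CREATE NONCLUSTERED INDEX dno_job_idx ON emp (department_no, job);
```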
Unique Index
When an index is created using the keyword UNIQUE, it is called a unique index. When you
create a unique index on columns, a unique constraint is created on those columns along
with the index.
If the columns on which you are creating a unique index contain duplicate values, the
unique index will not be created and you will get an error.
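For example, a unique index on an assumed emp_no column of the emp table:

```sql
CREATE UNIQUE INDEX empno_idx ON emp (emp_no);
```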
Altering an Index
To alter an index, use the ALTER INDEX command, which has the following syntax:
The REBUILD option recreates the complete index, the REORGANIZE option reorganizes the leaf
nodes of the B-Tree of the index, and the DISABLE option disables the index. When an index is
disabled, to enable it again you alter the index using the REBUILD option.
For example, the following example alters the index "dnoidx" available on the department number
column of the emp table.
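The statement would be along these lines, together with the other options described above:

```sql
ALTER INDEX dnoidx ON emp REBUILD;     -- recreate the complete index

-- the other options mentioned above:
ALTER INDEX dnoidx ON emp REORGANIZE;  -- reorganize the leaf nodes
ALTER INDEX dnoidx ON emp DISABLE;     -- disable the index (re-enable with REBUILD)
```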
sp_helpindex 'Stud'
Deleting indexes
Use the drop index command that has the following syntax:
For example, the following example deletes the index dnoidx available on the department number
column of the emp table.
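The statement would be:

```sql
DROP INDEX dnoidx ON emp;
```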
Transactions are essential for maintaining data integrity, both for multiple related
operations and when multiple users update the database concurrently.
This article will specifically talk about the concepts related to transactions and how
transactions can be used in the context of a SQL Server database. Besides, a
transaction is a fundamental concept and this article will be helpful for relating
transaction concepts with other databases as well.
Note: I used SQL Server 2012 in the article but you can use SQL Server 2008 as well.
What a transaction is
When to use transactions
Understanding ACID properties
Design of a Transaction
Transaction state
Specifying transaction boundaries
T-SQL statements allowed in a transaction
Local transactions in SQL Server 2012
Distributed transactions in SQL Server 2012
Guidelines to code efficient transactions
How to code transactions
What Is a Transaction?
A transaction is a set of operations performed so that all operations are guaranteed to
succeed or fail as one unit.
A classic example is transferring money between two bank accounts, say deducting an amount
from a checking account and adding it to a savings account. Both operations must succeed
together and the changes must be committed to the accounts, or both must fail together and be
rolled back so that the accounts are maintained in a consistent state. Under no circumstances
should money be deducted from the checking account but not added to the savings account (or
vice versa); you would at least not want this to happen with the transactions occurring with
your bank accounts.
By using a transaction concept, both the operations, namely debit and credit, can be
guaranteed to succeed or fail together. So both accounts remain in a consistent state all
the time.
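The debit-and-credit example can be sketched in T-SQL like this (the table and column names are assumptions):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- debit the checking account
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 'CHECKING';
    -- credit the savings account
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 'SAVINGS';

    COMMIT TRANSACTION;   -- both operations succeeded: make the changes permanent
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- either operation failed: undo both
END CATCH
```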
When you use transactions, you put locks on data that is pending for permanent change
to the database. No other operations can take place on locked data until the acquired
lock is released. You could lock anything from a single row up to the entire database.
This is called concurrency, which means how the database handles multiple updates at
one time.
In the bank example above, locks will ensure that two separate transactions don't
access the same accounts at the same time. If they do then either deposits or
withdrawals could be lost.
Note: it's important to keep transactions pending for the shortest period of time. A lock
stops others from accessing the locked database resource. Too many locks, or locks on
frequently accessed resources, can seriously degrade performance.
Every database software that offers support for transactions enforces these four ACID
properties automatically.
Design of a Transaction
Transactions represent real-world events such as bank transactions, airline
reservations, remittance of funds, and so forth.
Transaction State
In the absence of failures, all transactions complete successfully. However, a
transaction may not always complete its execution successfully. Such a transaction is
termed aborted.
You can set the database connection to implicit transaction mode by using SET
IMPLICIT_TRANSACTIONS ON|OFF.
After implicit transaction mode has been set to ON for a connection, SQL Server
automatically starts a transaction when it first executes any of the following
statements: ALTER TABLE, CREATE, DELETE, DROP, FETCH, GRANT,
INSERT, OPEN, REVOKE, SELECT, TRUNCATE TABLE, and UPDATE.
The transaction remains in effect until a COMMIT or ROLLBACK statement has
been explicitly issued. This means that when, say, an UPDATE statement is
issued on a specific record in a database, SQL Server will maintain a lock on the
data scoped for data modification until either a COMMIT or ROLLBACK is
issued. In case neither of these commands are issued, the transaction will be
automatically rolled back when the user disconnects. This is why it is not a best
practice to use implicit transaction mode on a highly concurrent database.
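For example, a minimal sketch using the Person table from this article:

```sql
SET IMPLICIT_TRANSACTIONS ON

-- SQL Server automatically starts a transaction at this UPDATE
UPDATE Person SET Company = 'Contoso' WHERE PersonID = 1

-- nothing is permanent (and the lock is held) until you explicitly end the transaction
COMMIT   -- or ROLLBACK

SET IMPLICIT_TRANSACTIONS OFF
```

The company name used here is illustrative; the point is that no BEGIN TRANSACTION was issued, yet a COMMIT or ROLLBACK is still required.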
When MARS is enabled, you can have multiple interleaved batches executing at the
same time, so all the changes made to the execution environment are scoped to the
specific batch until the execution of the batch is complete. Once the execution of the
batch completes, the execution settings are copied to the default environment. Thus a
connection is said to be using batch-scoped transaction mode if it is running a
transaction, has MARS enabled on it, and has multiple batches running at the same
time.
A transaction with a single SQL Server that spans two or more databases is actually a
distributed transaction. SQL Server, however, manages the distributed transaction
internally.
At the application level, a distributed transaction is managed in much the same way as
a local transaction. At the end of the transaction, the application requests the
transaction to be either committed or rolled back. A distributed commit must be
managed differently by the transaction manager to minimize the risk that a network
failure might leave some resource managers committing the transaction while others
roll it back. This critical situation is handled by managing the commit process in two
phases, also known as a two-phase commit:
Prepare phase: The transaction manager sends a prepare-to-commit request to each
resource manager. Each resource manager does the work needed to make the
transaction durable and reports back whether its prepare succeeded or failed.
Commit phase: If the transaction manager receives successful prepares from all
of the resource managers then it sends a COMMIT command to each resource
manager. If all of the resource managers report a successful commit then the
transaction manager sends a notification of success to the application. If any
resource manager reports a failure to prepare, the transaction manager sends a
ROLLBACK statement to each resource manager and indicates the failure of the
commit to the application.
We recommend you use the following guidelines while coding transactions to make
them as efficient as possible:
Get all required input from users before a transaction is started. If additional user
input is required during a transaction then roll back the current transaction and
restart the transaction after the user input is supplied. Even if users respond
immediately, human reaction times are vastly slower than computer speeds. All
resources held by the transaction are held for an extremely long time, which has
the potential to cause blocking problems. If users do not respond, the
transaction remains active, locking critical resources until they respond, which may
not happen for several minutes or even hours.
Transactions should not be started until all preliminary data analysis has been
completed.
After you know the modifications that need to be made, start a transaction,
execute the modification statements, and then immediately commit or roll back.
Do not open the transaction before it is required.
The smaller the amount of data that you access in the transaction, the fewer the
number of rows that will be locked, reducing contention between transactions.
4. Next let's insert some data into the Person and PersonDetails table, by executing
the statement below, and click "Execute".
Listing 1-2. Create Parent-Child relationship
Since a child record must map to a parent record, we can only insert child records
into PersonDetails for PersonIDs that already exist in the Person table.
As you can see, the child table's PersonID matches the parent table's. So now we
have a perfect parent-child relationship, with two parent records and two
matching child records in the Person and PersonDetails tables respectively, as shown
in Figure 1-3 below:
Figure 1-3. Showing Parent-Child relationship between Person and PersonDetails table
Try It Out: Coding a Transaction in T-SQL
1. Here, you'll code a transaction based on the Person and PersonDetails table,
where we will use SQL Server's primary-key and foreign-key rules to understand
how transactions work. The Person table has three columns; two columns,
PersonID and FirstName, don't allow null values, and PersonID is also a primary
key column. In other words only unique values are allowed. Also, the last column
Company allows null values.
2. In Object Explorer, select the SQL2012Db database, and click the New Query
button.
3. Create a Stored Procedure named sp_Trans_Test using the code in Listing 1-3.
The results window should show a return value of zero, and you should see the
same messages as shown in Figure 1-4.
Figure 1-4. Executing the Stored Procedure
4. In the same query window, enter the following SELECT statement:
Select the statement as shown in Figure 1-5 and then click the "Execute" button.
You will see that the person named "Vamika" has been added to the table, as
shown in the Results tab in Figure 1-5.
Figure 1-5. Row inserted in a transaction
5. Add another person with the parameter values. Enter the following statement and
execute it as you've done previously with other similar statements.
You should get the same results as shown earlier in Figure 1-4 in the Messages
tab.
6. Try the SELECT statement shown in Figure 1-5 one more time. You should see
that person "Arshika" has been added to the Person table. Both "Vamika"
and "Arshika" have no child records in the PersonDetails table.
How It Works
These local variables will be used with the Stored Procedure, so you can capture and
display the error numbers returned if any from the INSERT and DELETE statements.
You mark the beginning of the transaction with a BEGIN TRANSACTION statement and
follow it with the INSERT and DELETE statements that are part of the transaction. After
each statement, you save the return number for it.
BEGIN TRANSACTION
-- Add a person
insert into person (personid, firstname, company)
values(@newpersonid, @newfirstname, @newcompanyname)

-- Save error number returned from Insert statement
set @inserr = @@error
if @inserr > @maxerr
    set @maxerr = @inserr

-- Delete a person
delete from person
where personid = @oldpersonid

-- Save error number returned from Delete statement
set @delerr = @@error
if @delerr > @maxerr
    set @maxerr = @delerr
Error handling is important at all times in SQL Server, and never more so than inside
transactional code. When you execute a T-SQL statement, there's always the possibility
that it may not succeed. The T-SQL @@ERROR function returns the error number for
the last T-SQL statement executed. If no error occurred then @@ERROR returns zero.
@@ERROR is reset after every T-SQL statement (even SET and IF) is executed, so if
you want to save an error number for a specific statement then you must store it before
the next statement executes. That's why you declare the local variables @inserr and
@delerr and @maxerr.
If @@ERROR returns any value other than 0, an error has occurred, and you want to
roll back the transaction. You also include PRINT statements to report whether a
rollback or commit has occurred.
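A sketch of that decision logic, using the variable names from the listing above (the PRINT wording is illustrative):

```sql
-- @maxerr holds the highest error number captured after each statement
IF @maxerr <> 0
BEGIN
    ROLLBACK TRAN
    PRINT 'Transaction rolled back'
END
ELSE
BEGIN
    COMMIT TRAN
    PRINT 'Transaction committed'
END
```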
Tip: T-SQL (and standard SQL) supports various alternative forms for keywords and
phrases. You've used just ROLLBACK and COMMIT here.
Then you add some more instrumentation so you can see what error numbers are
encountered during the transaction.
Now let's look at what happens when you execute the Stored Procedure. You run it
twice, first by adding person "Pearl" and next by adding person "Spark", but you also
enter the same nonexistent person "Agarw" to delete each time. If all statements in a
transaction are supposed to succeed or fail as one unit then why does the INSERT
succeed when the DELETE doesn't delete anything?
Figure 1-4 should make everything clear. Both the INSERT and DELETE return error
number zero. The reason DELETE returns error number zero even though it has not
deleted any rows is that when a DELETE doesn't find any rows to delete, T-SQL doesn't
treat that as an error. In fact, that's why you use a nonexistent person. Apart from the
recently added persons Pearl and Spark, the other records have child records in the
PersonDetails table, as shown in Figure 1-3, and you can't delete these existing persons
unless you delete their details from the PersonDetails table first.
In the Messages pane shown in Figure 1-6, note that the entire transaction was rolled
back because the INSERT failed and was terminated with error number 2627 (whose
error message appears at the top of the window). The DELETE error number was 0,
meaning it executed successfully but was rolled back. (If you check the table then you'll
find that person "Spark" still exists in the Person table.)
How It Works
Since person "Pearl" already exists and, as shown in Figure 1-2, the
Person table's PersonID column is the primary key and can only contain unique
values, SQL Server prevents the insertion of a duplicate, so the first
operation fails. The second DELETE statement in the transaction is executed, and
person "Spark" is deleted, since it doesn't have any child records in the PersonDetails
table; but because @maxerr isn't zero (it's 2627, as you see in the Results pane), you
roll back the transaction, undoing the deletion of person "Spark". As a result, all
the records in the table remain as they were.
Add person "ag" and delete person "Vidvr" by entering the following statement, and then
click the "Execute" button.
In the Messages window shown in Figure 1-7, note that the transaction was rolled back
because the DELETE failed and was terminated with error number 547 (the message
for which appears at the top of the window). The INSERT error number was 0, so it
apparently executed successfully but was rolled back. (If you check the table then you'll
find "ag" is not a person.)
How It Works
Since person "ag" doesn't exist, SQL Server inserts the row, so the first operation
succeeds. When the second statement in the transaction is executed, SQL Server
prevents the deletion of person "Vidvr" because it has child records in the
PersonDetails table, and since @maxerr isn't zero (it's 547, as you see in the Results
pane), the entire transaction is rolled back.
Try It Out: What Happens When Both Operations Fail
In this example, you'll try to insert an invalid new person, in other words one with a
duplicate key, and try to delete an undeletable one, in other words one that has child
records in the PersonDetails table.
Add person "Pearl" and delete person "Rupag" by entering the following statement, and
then click the "Execute" button.
In the Messages window shown in Figure 1-8, note that the transaction was rolled back
(even though neither statement succeeded, so there was nothing to roll back) because
@maxerr returns 2627 for the INSERT and 547 for the DELETE. Error messages for
both failing statements are displayed at the top of the window.
How It Works
By now, you should understand why both statements failed. This happened because the
first statement couldn't insert a duplicate record and the second statement couldn't
delete a record that has associated child records. This is why the Message pane in
Figure 1-8 shows both the errors explicitly mentioning duplicate key and conflict
reference with child records.
Summary
This article covered the fundamentals of transactions, from concepts such as
understanding what transactions are, to ACID properties, local and distributed
transactions, guidelines for writing efficient transactions, and coding transactions in T-
SQL. Although this article provides just the fundamentals of transactions, you now know
enough about coding transactions to handle basic transactional processing and
implement it using C# and ADO .NET.
Introduction
Views are virtual tables that present data from one or more tables. A view is stored in
the database as an object but does not contain any data itself; it is a query that is
applied to one or more tables whenever the view is referenced. Views are used
for security purposes in databases: they restrict the user from viewing certain columns
and rows. In other words, using a view we can restrict access to
specific rows and columns for a specific user. A view can be created using tables of
the same database or of different databases, and it is one way to implement a security
mechanism in SQL Server.
In the preceding diagram we have created a view that contains the columns of two
tables, Table A and Table B, using a query. A view is created using a select statement.
Views are stored in the database as an object so it doesn't require additional storage
space. Before starting any discussion about views we should have a basic knowledge of
them.
Views are used to implement the security mechanism in SQL Server. Views are
generally used to restrict the user from viewing certain columns and rows. A view
displays only the data specified in its query, in other words only the data returned by
the query defined during the creation of the view. The rest of the data is completely
hidden from the end user.
Types of views
There are the following two types of views:
1. User-Defined Views
2. System-Defined Views
User-Defined Views: First we create two tables. Start by creating an Employee_Details
table for the basic info of an employee.
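For illustration, the table might look like this (the exact column list is an assumption; only Emp_Salary is referenced later in the article):

```sql
CREATE TABLE Employee_Details
(
    Emp_Id INT PRIMARY KEY,
    Emp_Name VARCHAR(50),
    Emp_Salary INT       -- referenced by later examples
)
```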
Syntax
Method 1: We can select all columns of a table. The following example demonstrates
that:
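A minimal sketch, assuming the Employee_Details table above (the view name is illustrative):

```sql
CREATE VIEW Employee_View1
AS
SELECT * FROM Employee_Details
```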
Method 3: We can select columns from a table with specific conditions. The following
example demonstrates that:
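A sketch of a filtered view, assuming Emp_Id, Emp_Name, and Emp_Salary columns (the threshold is illustrative):

```sql
CREATE VIEW Employee_View3
AS
SELECT Emp_Id, Emp_Name
FROM Employee_Details
WHERE Emp_Salary > 20000
```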
Method 4: We can create a view that will hold the columns of different tables. The
following example demonstrates that:
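A sketch, assuming a second, hypothetical Employee_Contact table keyed on Emp_Id:

```sql
CREATE VIEW Employee_View4
AS
SELECT d.Emp_Id, d.Emp_Name, c.Phone   -- columns drawn from two tables
FROM Employee_Details d
INNER JOIN Employee_Contact c ON c.Emp_Id = d.Emp_Id
```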
This SQL CREATE VIEW example would create a virtual table based on the result set
of the select statement. Now we can retrieve data from a view as follows:
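For example (using the Employee_View4 name from the join example; any view name works):

```sql
SELECT * FROM Employee_View4

SELECT Emp_Id, Emp_Name FROM Employee_View4
```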
The preceding query shows that we can select all the columns or some specific
columns from a view.
Dropping a View
We can use the Drop command to drop a view. For example, to drop the view
Employee_View3, we can use the following statement.
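For example:

```sql
DROP VIEW Employee_View3
```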
We can use the sp_rename system procedure to rename a view. The syntax of the
sp_rename command is given below:
Syntax
Example
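The general shape is as follows (view names are illustrative):

```sql
-- Syntax: sp_rename 'old_view_name', 'new_view_name'
EXEC sp_rename 'Employee_View1', 'Employee_View1_Renamed'
```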
Getting Information about a view: We can retrieve all the information of a view using
the Sp_Helptext system Stored Procedure. Let us see an example.
Sp_Helptext Employee_View4
Output
Altering a View: We can alter the schema or structure of a view. In other words we can
add or remove some columns or change some conditions that are applied in a
predefined view. Let us see an example.
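A sketch, assuming the Employee_View1 view created earlier (the new condition is illustrative):

```sql
ALTER VIEW Employee_View1
AS
SELECT Emp_Id, Emp_Name
FROM Employee_Details
WHERE Emp_Salary > 10000   -- changed condition
```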
Refreshing a View: Let us consider the scenario now by adding a new column to the
table Employee_Details and examine the effect. We will first create a view.
Now retrieve the data from the table and view and you will receive the following output:
Output
We don't get the results we expected because the schema of the view is already
defined. So when we add a new column to the table, it does not change the schema of
the view; the view retains its previous schema. To remove this problem we
use the system-defined Stored Procedure sp_refreshview.
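For example (the view name is illustrative):

```sql
EXEC sp_refreshview 'Employee_View1'
```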
Output
SchemaBinding a VIEW
In the previous example we saw that if we add a new column into the table then we
must refresh the view.
In the same way, if we change the data type of any column in a table then we should
refresh the view. If we want to prevent any such change to a base table then we can use
SCHEMABINDING. It binds the view to the tables it refers to and
restricts any change that would alter the table schema (no ALTER command).
We can't specify "Select * from tablename" with the query. We need to specify all the
column names for reference.
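A sketch of a schema-bound view; note the explicit column list and the two-part table name, both of which SCHEMABINDING requires (the view name is illustrative):

```sql
CREATE VIEW Employee_View5
WITH SCHEMABINDING
AS
SELECT Emp_Id, Emp_Name, Emp_Salary
FROM dbo.Employee_Details   -- schema-qualified name is mandatory
```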
In the preceding example we create a view using Schemabinding. Now we try to change
the datatype of Emp_Salary from int to Decimal in the Base Table.
We find that we cannot change the data type because we used SCHEMABINDING,
which prevents this type of change to the base table.
Encrypt a view
The "WITH ENCRYPTION" option can encrypt any view, meaning its definition will not
be visible via SP_HELPTEXT. Users will not be able to see the definition of the view
after it is created. This is the main advantage of a view when we want to make it
secure.
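A sketch (the view name is illustrative):

```sql
CREATE VIEW Employee_View6
WITH ENCRYPTION
AS
SELECT Emp_Id, Emp_Name
FROM Employee_Details
-- after creation, Sp_Helptext Employee_View6 will not show this definition
```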
Output
Check Option: The use of the Check Option in a view is to ensure that all the Update
and Insert commands must satisfy the condition in the view definition.
GO

Create view [dbo].[Employee_Details7]
as
select * from Employee_Details
where Emp_Salary>30000

GO
In the preceding example we create a view that contains all the data for which
Emp_Salary > 30000, but we can still insert data for a salary less than 30000 as follows.
To prevent this problem we can use the Check Option property such as:
GO

Create view [dbo].[Employee_Details7]
as
select * from Employee_Details
where Emp_Salary>30000
with Check Option
GO
Now if we try to execute the preceding query then it will throw an error such as:
Output
In a view we can implement many types of DML queries, such as insert, update, and
delete. But for a DML query to succeed against a view, certain conditions must be
met:
If we use the preceding conditions then we can implement a DML Query in the view
without any problem. Let us see an example.
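A sketch of a DML statement issued through a view (assuming a view over a single base table, such as Employee_View1 above; the values are illustrative):

```sql
-- the update is applied to the underlying Employee_Details table
UPDATE Employee_View1
SET Emp_Name = 'Rahul'
WHERE Emp_Id = 1
```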
Output
Output
System-Defined Views: SQL Server also contains various predefined databases, such
as tempdb, master, and model. Each database has its own properties and
responsibilities. The master database contains many predefined views that work as
templates for other databases and tables; it holds nearly 230 predefined views.
These predefined views are very useful for us. Mainly we divide system views into the
following two parts.
1. Information Schema
2. Catalog View
Information schema: There are nearly 21 information schema views in the system.
These display much of the physical information of a database, such as tables and
columns. An information schema view is referenced as INFORMATION_SCHEMA.[ViewName].
Let us see an example.
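A sketch matching the description that follows, using the INFORMATION_SCHEMA.VIEW_TABLE_USAGE view:

```sql
SELECT VIEW_NAME, TABLE_NAME
FROM INFORMATION_SCHEMA.VIEW_TABLE_USAGE
WHERE TABLE_NAME = 'Employee_Details'
```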
Output
This information schema view returns the details of all the views that reference the
table Employee_Details.
Output
Catalog View: Catalog views are also categorized into various groups. They are used
to show the self-describing information of a database, and their names start with "sys".
The first query below provides information about all the views in a database.
The second provides information about all the databases on the server,
including user-defined and system-defined databases.
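As a sketch of the two queries just described:

```sql
-- views defined in the current database
SELECT name, type_desc FROM sys.views

-- every database on the instance, user-defined and system-defined
SELECT name, database_id FROM sys.databases
```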
CREATE TABLE TableName
(
Id INT,
Name Nvarchar(500),
Age INT
)
4. Insert values into SQL Server table:
In this SQL Server query we will learn how to insert values into a SQL Server table.
INSERT INTO TableName (Name,Age) VALUES ('Max',30);
INSERT INTO TableName (Name,Age)
VALUES ('Max',30),('John',28),('Jack',31);
6. Update query:
Update single record:
In this SQL query we will update a single record in a table.
UPDATE TableName SET NAME='Max Payne' WHERE Id=1
Update all records:
In this SQL query we will update all records in a table.
UPDATE TableName SET AGE=31
7. Delete query:
Delete single record:
In this SQL query we will delete a single record from a table.
DELETE FROM TableName WHERE Id=1
Delete all records:
In this SQL query we will delete all records from a table.
DELETE FROM TableName
8. Select:
Select all columns from a table:
In this SQL query we will select all columns from a table.
SELECT * FROM TableName
9. Create View:
A view is a virtual table created based on the result generated by a SQL statement.
Fields in a view are directly related to one or more tables from the database.
CREATE VIEW view_name AS SELECT Id,Name,Age
FROM TableName
Usage:
select * From view_name
SELECT so.name, MAX(si.rows) AS [RowCount]
FROM sysobjects so, sysindexes si
WHERE
so.xtype = 'U'
AND
si.id = OBJECT_ID(so.name)
GROUP BY
so.name
ORDER BY
2 DESC
17. Get Comma Separated List of all columns in table:
In this query we will learn how to get a comma-separated list of all columns in a table.
Select TABLE_SCHEMA, TABLE_NAME
, Stuff(
(
Select ',' + C.COLUMN_NAME
From INFORMATION_SCHEMA.COLUMNS As C
Where C.TABLE_SCHEMA = T.TABLE_SCHEMA
And C.TABLE_NAME = T.TABLE_NAME
Order By C.ORDINAL_POSITION
For Xml Path('')
), 1, 1, '') As Columns
From INFORMATION_SCHEMA.TABLES As T
OR
EXEC sp_databases
Syntax
ALTER TABLE {TABLENAME}
ADD {COLUMNNAME} {TYPE}
Usage
ALTER TABLE TableName
ADD ColumnName INT
-- print the columns of TableName that contain only NULL values
DECLARE @col NVARCHAR(128), @cmd NVARCHAR(MAX)

DECLARE getinfo CURSOR FOR
SELECT c.name FROM sys.columns c
WHERE c.object_id = OBJECT_ID('TableName')

OPEN getinfo

FETCH NEXT FROM getinfo into @col

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @cmd = 'IF NOT EXISTS (SELECT top 1 * FROM TableName WHERE [' + @col + '] IS NOT NULL) BEGIN print ''' + @col + ''' end'
    EXEC(@cmd)
    FETCH NEXT FROM getinfo into @col
END

CLOSE getinfo
DEALLOCATE getinfo
SELECT name FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID,'TableHasPrimaryKey') = 0

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE
Table_NAME NOT IN
(
SELECT DISTINCT c.TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS c
INNER JOIN sys.identity_columns ic
on
(c.COLUMN_NAME=ic.NAME))
AND
TABLE_TYPE = 'BASE TABLE'
;WITH months AS
(
SELECT 1 AS MonthNumber
UNION ALL
SELECT MonthNumber+1
FROM months
WHERE MonthNumber < 12
)
SELECT DATENAME(MONTH,DATEADD(MONTH,-MonthNumber,GETDATE())) AS [MonthName],
Datepart(MONTH,DATEADD(MONTH,-MonthNumber,GETDATE())) AS MonthNumber
FROM months
ORDER BY Datepart(MONTH,DATEADD(MONTH,-MonthNumber,GETDATE()));
Note: Here Employee is my table name and CreatedOn is a name of date column used for filtering.
SELECT @years = DATEDIFF(yy, @tmpdate, GETDATE())
    - CASE WHEN (MONTH(@date) > MONTH(GETDATE()))
        OR (MONTH(@date) = MONTH(GETDATE()) AND DAY(@date) > DAY(GETDATE()))
      THEN 1 ELSE 0 END
SELECT @tmpdate = DATEADD(yy, @years, @tmpdate)
SELECT @months = DATEDIFF(m, @tmpdate, GETDATE())
    - CASE WHEN DAY(@date) > DAY(GETDATE()) THEN 1 ELSE 0 END
Note: Please take proper backup before running this query. This query will remove all data from all
tables in selected database.
41. Delete all records from all tables having foreign keys:
In this query we will learn how to delete all rows from all tables in SQL Server which
have foreign keys.
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
EXEC sp_MSForEachTable 'DELETE FROM ?'
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
Format:
UPDATE TableName SET Column1=Column2, Column2=Column1
Usage:
UPDATE TableName SET Name=Email,Email=Name
OR
ALTER DATABASE oldName MODIFY NAME = newName
SELECT SCOPE_IDENTITY()
DELETE FROM TableName
WHERE ID NOT IN
(
SELECT MAX(ID)
FROM TableName
GROUP BY DuplicateColumn)
SELECT Column_Name AS [Column],
Table_Schema AS [Schema],
Table_Name AS [Table]
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE INFORMATION_SCHEMA.KEY_COLUMN_USAGE.TABLE_NAME='TableName'
Note: Make sure to select any other database while using this query.
59. Backup Database :
In this query we will learn about How to take backup of database using script in SQL Server.
BACKUP DATABASE DataBaseName TO DISK='d:\NameOfBackupFile.bak'
-- Assumes earlier (missing) lines declared @SearchStr nvarchar(100) and created
-- #Results (ColumnName nvarchar(370), ColumnValue nvarchar(3630)).
SET NOCOUNT ON

DECLARE @TableName nvarchar(256), @ColumnName nvarchar(128), @SearchStr2 nvarchar(110)
SET @TableName = ''
SET @SearchStr2 = QUOTENAME('%' + @SearchStr + '%','''')

WHILE @TableName IS NOT NULL
BEGIN
    SET @ColumnName = ''
    SET @TableName =
    (
        SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_TYPE = 'BASE TABLE'
            AND QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) > @TableName
            AND OBJECTPROPERTY(OBJECT_ID(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)), 'IsMSShipped') = 0
    )

    WHILE (@TableName IS NOT NULL) AND (@ColumnName IS NOT NULL)
    BEGIN
        SET @ColumnName =
        (
            SELECT MIN(QUOTENAME(COLUMN_NAME))
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_SCHEMA = PARSENAME(@TableName, 2)
                AND TABLE_NAME = PARSENAME(@TableName, 1)
                AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'int', 'decimal')
                AND QUOTENAME(COLUMN_NAME) > @ColumnName
        )

        IF @ColumnName IS NOT NULL
        BEGIN
            INSERT INTO #Results
            EXEC
            (
                'SELECT ''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ', 3630)
                 FROM ' + @TableName + ' (NOLOCK)
                 WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2
            )
        END
    END
END

SELECT ColumnName, ColumnValue FROM #Results
on t1.PrimaryColumn=t2.ForeignKeyColumn
69. IF ELSE :
In this query we will learn how to use IF ELSE statements in SQL Server.
DECLARE @ValueToCheck INT;
SET @ValueToCheck = 14;

IF @ValueToCheck=15
SELECT 'Value is 15' As Result
ELSE IF @ValueToCheck<15
SELECT 'Value is less than 15' As Result
ELSE IF @ValueToCheck>15
SELECT 'Value is greater than 15' As Result
END AS Result
FROM TableName
-- table variable declaration inferred from the inserted rows below
DECLARE @TempTable TABLE (Id INT, Name NVARCHAR(100), Email NVARCHAR(100))

INSERT INTO @TempTable (Id, Name, Email) VALUES
(2,'Tony','Tony@gmail.com'),
(3,'Jack','Jack@Indivar.com')

SELECT * FROM @TempTable
74. Delete duplicate rows :
In this query we will learn how to delete duplicate rows in SQL Server.
DELETE FROM TableName
WHERE ID NOT IN
(
SELECT MAX(ID)
FROM TableName
GROUP BY DuplicateColumn)
Note: Here 0 is one less than the value at which you want the auto increment to start.
Suppose you want your primary-key values to start at 50; then you need to pass 49 to
the above method.
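The reseed method referred to is presumably an identity reseed; as a hedged sketch:

```sql
-- after this, the next identity value inserted into TableName will be 50
DBCC CHECKIDENT ('TableName', RESEED, 49)
```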
UNION combines the result set of two or more queries into a single result set. This
result set includes all the rows that belong to all queries in the UNION.
The following points need to be considered when using the UNION operator:
The number of columns and sequence of columns must be the same in all
queries
The data types must be compatible.
Syntax
Example
Consider the following example. I have two tables that have Id, name and phone
number columns.
Output
UNION ALL
UNION ALL is very similar to UNION, but it also includes duplicate rows in the result
set, whereas UNION always returns distinct rows. In other words, UNION eliminates
duplicate rows from the result set.
Example
Consider the following example. I have two tables and both have two rows. One row is
common to both tables. When we UNION these two tables it returns three rows
(removing the duplicate row from the result set).
Output
A UNION operator performs a DISTINCT on the result set; SQL Server automatically
does a distinct sort operation on the result set. Consider the following execution plan:
we can see that the distinct sort is taking 63% of the actual
execution time.
“UNION ALL” always returns all the rows of the result set. It does not remove duplicate
rows. Consider the preceding example with the UNION ALL operator.
Output
When we look into the execution plan of a UNION ALL query, it does not include a
distinct sort. Because UNION must perform a distinct sort operation to remove
duplicate values from the result set, UNION ALL is faster than UNION.
When we use a UNION for the TEXT type columns, SQL Server throws a runtime error.
This error is not generated when we use the “UNION ALL” operator.
Example
Consider the following example. I have two tables and both tables have columns with
the data type TEXT. Now I am trying to UNION the result set.
With the “UNION ALL” operator, it returns all the rows of both tables.
Output
Summary
Each SELECT statement within the UNION / UNION ALL must have the same number
of columns and the columns must have similar or compatible data types. They must be
in the same order.
If the column sizes of the two tables are different then the result set has a column type
that is the larger of two columns. For example, if SELECT ... UNION has CHAR (5) and
CHAR (10) columns then it displays the output data of both of the columns as CHAR
(10).
If the columns across the table have different column names then in general, the
column names of the first query are used in a final result set.
Quite often, you’re faced with the task of comparing two or more tables, or query
results, to determine which information is the same and which isn’t. One of the most
common approaches to doing such a comparison is to use the UNION or UNION ALL
operator to combine the relevant columns of the results that you want to compare. As
long as you adhere to the restrictions placed on either of those operators, you can
combine data sets whether they come from different databases or even different
servers. With the UNION operator, you end up with a result containing every distinct row
from the two results combined. However, it is more difficult to use UNION to
return only the common data held in both results, or the data that exists
in one table but not the other(s). To get those results, you must use UNION ALL
with a GROUP BY clause, and the logic for doing so isn't always obvious. Nor is it any
easier to use a JOIN operator to get the result you want.
Enter the INTERSECT and EXCEPT operators. Beginning with SQL Server 2005, you
can use these operators to combine queries and get the results you need. For instance,
you can use the INTERSECT operator to return only values that match within both data
sets, as shown in the following illustration.
The illustration shows how the INTERSECT operator returns data that is common to
both result sets; the common data is represented by the area where the two circles
intersect. The illustration also shows how the EXCEPT operator works; only data that
exists in one of the data sets outside the intersecting area is returned. For instance, if
Set A is specified to the left of the EXCEPT operator, only values that are not in Set B
are returned. In the illustration above, that would be the data in the left circle, outside
the section where the two data sets intersect. The following bullets sum up which
operator to use to return different combinations of data:
To return the data in Set A that doesn’t overlap with B, use A EXCEPT B.
To return only the data that overlaps in the two sets, use A INTERSECT B.
To return the data in Set B that doesn’t overlap with A, use B EXCEPT A.
To return the data in all three areas without duplicates, use A UNION B.
To return the data in all three areas, including duplicates, use A UNION ALL B.
To return the data in the non-overlapping areas of both sets, use (A UNION B) EXCEPT
(A INTERSECT B), or alternatively (A EXCEPT B) UNION (B EXCEPT A).
The differences between the INTERSECT and EXCEPT operators and how to use each
of them will become clearer as we work through the examples in the article. Just to give
you a basic idea of how they work, we’ll start with a rather unrealistic example. To
demonstrate those, however, we must first create two test views (using SQL Server
2005-compatible syntax). The first view contains a single column that describes what
you might have had for lunch:
CREATE VIEW Lunch (item)
AS
SELECT 'Beer' UNION ALL
SELECT 'Bread' UNION ALL
SELECT 'Calamari' UNION ALL
SELECT 'Coffee' UNION ALL
SELECT 'Olives' UNION ALL
SELECT 'Salami'
GO
The second view also contains a single column and describes what you might have had for
dinner:
CREATE VIEW Dinner (item)
AS
SELECT 'Apple' UNION ALL
SELECT 'Aubergines' UNION ALL
SELECT 'Bread' UNION ALL
SELECT 'Coffee' UNION ALL
SELECT 'Olives' UNION ALL
SELECT 'Salad' UNION ALL
SELECT 'Steak' UNION ALL
SELECT 'Wine'
GO
Now we can use these two views to demonstrate how to use the UNION, INTERSECT, and
EXCEPT operators. I’ve also included a couple examples that use the JOIN operator to
demonstrate the differences.
The first example uses the UNION operator to join the two views in order to return everything
you’ve eaten today:
SELECT item FROM Lunch
UNION
SELECT item FROM Dinner;

SELECT COALESCE(Lunch.item, Dinner.item) AS item
FROM Lunch
FULL OUTER JOIN Dinner
ON Dinner.item = Lunch.item;
Notice that the join requires more complex syntax; however, both statements return the same
results, as shown in the following table:
item
Apple
Aubergines
Beer
Bread
Calamari
Coffee
Olives
Salad
Salami
Steak
Wine
Now let’s look at how you would return only the food you ate (or drank) for lunch, but did not
have for dinner:
SELECT item FROM Lunch
EXCEPT
SELECT item FROM Dinner;
In this case, I used the EXCEPT operator to return the lunch-only items. I could have achieved
the same results using the following left outer join:
SELECT Lunch.item
FROM Lunch
LEFT OUTER JOIN Dinner
ON Dinner.item = Lunch.item
WHERE Dinner.item IS NULL;
Once again, you can see that the join is more complex, though the results are the same, as
shown in the following table:
Item
Beer
Calamari
Salami
If you wanted to return those items you had for dinner but not lunch, you can again use the
EXCEPT operator, but you must reverse the order of the queries, as shown in the following
example:
SELECT item FROM Dinner
EXCEPT
SELECT item FROM Lunch;
Notice that I first retrieve the data from the Dinner view. To use the left outer join, you would
again have to reverse the order of the tables:
SELECT Dinner.item
FROM Dinner
LEFT OUTER JOIN Lunch
ON Dinner.item = Lunch.item
WHERE Lunch.item IS NULL;
As expected, the results are the same for both SELECT statements:
item
Apple
Aubergines
Salad
Steak
Wine
In the next example, I use the INTERSECT operator to return only the food that was eaten at
both meals:
SELECT item
FROM Lunch
INTERSECT
SELECT item
FROM Dinner;
As you can see, I simply connect the two queries with the INTERSECT operator, as I did with
the EXCEPT operator. You can achieve the same results by using an inner join:
SELECT Dinner.item
FROM Dinner
INNER JOIN Lunch
ON Dinner.item = Lunch.item;
As the following results show, the only items you had at both meals were olives, bread, and
coffee:
item
Bread
Coffee
Olives
Now let’s look at how you would return a list of food that you ate at one of the meals, but not
both; in other words, the food you ate other than bread, olives, and coffee. In the
following statement, I use a UNION operator to join two SELECT statements:
SELECT item
FROM
  (SELECT item
   FROM Lunch
   EXCEPT
   SELECT item
   FROM Dinner) AS Only_Lunch
UNION
SELECT item
FROM
  (SELECT item
   FROM Dinner
   EXCEPT
   SELECT item
   FROM Lunch) AS Only_Dinner;
Notice that the first statement retrieves only the food you ate for lunch, and the second statement
retrieves only the food you ate for dinner. I achieve this in the same way I did in previous examples,
by using the EXCEPT operator. I then use the UNION operator to join the two result sets. You
can achieve the same results by using a full outer join:
SELECT COALESCE(Dinner.item, Lunch.item) AS item
FROM Dinner
FULL OUTER JOIN Lunch
ON Dinner.item = Lunch.item
WHERE Dinner.item IS NULL
  OR Lunch.item IS NULL;
item
Apple
Aubergines
Beer
Calamari
Salad
Salami
Steak
Wine
From this point on, I developed the examples on a local instance of SQL Server 2008
and the AdventureWorks2008 database. Each example uses either the INTERSECT or
EXCEPT operator to compare data between the Employee and JobCandidate tables,
both part of the HumanResources schema. The comparison is based on the
BusinessEntityID column in each table. The BusinessEntityID column in the Employee
table is the primary key. In the JobCandidate table, the BusinessEntityID column is a
foreign key that references the BusinessEntityID column in the Employee table. The
column in the JobCandidate table also permits null values.
NOTE:
You can run these examples against the AdventureWorks database on an instance of
SQL Server 2005. However, you must first change the BusinessEntityID column name
to EmployeeID, and you must change the JobTitle column name to Title.
In the following example, I create two queries that retrieve data from the Employee and
JobCandidate tables and use the INTERSECT operator to combine those queries:
SELECT BusinessEntityID
FROM HumanResources.Employee
INTERSECT
SELECT BusinessEntityID
FROM HumanResources.JobCandidate;
The first SELECT statement, as you can see, retrieves the BusinessEntityID column
from the Employee table, and the second SELECT statement retrieves the
BusinessEntityID column from the JobCandidate table. The INTERSECT operator
combines the two queries.
When you use the INTERSECT operator to combine queries (or EXCEPT, for that
matter), the number of columns must be the same in both queries and the columns
must be in the same order. In addition, the corresponding columns between the queries
must be configured with compatible data types. The example above meets these
conditions because each query returns only one column of the same data type (INT).
When the INTERSECT operator is used to combine these two queries, the
statement returns the following results:
BusinessEntityID
212
274
As it turns out, the Employee table and JobCandidate table have only two
BusinessEntityID values in common. If you were to examine the data in the
JobCandidate table, you would find that the query results here are consistent with the
data in that table. The table contains only 13 rows, and the BusinessEntityID column is
NULL for all but two of the rows. The BusinessEntityID values in those rows are 212
and 274. And, as you would expect, the Employee table also contains a row with a
BusinessEntityID value of 212 and a row with a value of 274.
Certainly, as the above example indicates, using the INTERSECT operator to combine
the results of two queries is a straightforward process. The key, as I’ve stated,
is to make sure the SELECT lists in the two queries are in sync with each other.
However, that also points out one of the limitations of using the INTERSECT operator to
combine queries: you cannot include columns in one query that are not
included in the second query. If you do include multiple matching columns in each
query, all the column values must match in order for a row to be returned. For example,
suppose you’re retrieving data from two tables that each include columns for employee
IDs, first names, and last names. If you want to match the two tables based on those
three columns, the three values in the first table must match the three values in the
second table for a row to be returned. (At this point, you might be asking yourself what
you’re doing with all that redundant data in your database.)
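A multi-column comparison like the one described above might look something like the following sketch; the table and column names here are hypothetical, not from AdventureWorks:

```sql
-- A row is returned only when all three column values match in both tables
-- (dbo.CurrentStaff and dbo.ArchivedStaff are hypothetical tables).
SELECT EmployeeID, FirstName, LastName
FROM dbo.CurrentStaff
INTERSECT
SELECT EmployeeID, FirstName, LastName
FROM dbo.ArchivedStaff;
```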
Instead of taking this approach, you may decide to compare the IDs in both tables, but
pull the first and last name from only one of the tables. Or you might decide that you
want to pull information from one table that is not stored in the other table. However,
because columns must correspond between the two queries when using an
INTERSECT operator to combine them, you have to find a way to work around this
limitation. One of the easiest ways to do that is to put your INTERSECT construction
into a common table expression (CTE) and then join the expression to one of the tables
to pull the additional data. For instance, the following example includes a CTE that
contains the same INTERSECT construction you saw in the example above:
WITH
cteCandidates (BusinessEntityID)
AS
(
  SELECT BusinessEntityID
  FROM HumanResources.Employee
  INTERSECT
  SELECT BusinessEntityID
  FROM HumanResources.JobCandidate
)
SELECT
  c.BusinessEntityID,
  e.LoginID,
  e.JobTitle
FROM
  cteCandidates AS c
  INNER JOIN HumanResources.Employee AS e
  ON e.BusinessEntityID = c.BusinessEntityID
ORDER BY
  c.BusinessEntityID;
Notice that I’ve created a CTE named cteCandidates. As you would expect, the CTE
returns the BusinessEntityID values that are contained in both the Employee and
JobCandidate tables. In the primary SELECT statement, I then join the Employee table
to the CTE in order to also retrieve the LoginID and JobTitle values from the Employee
table. Because I put the INTERSECT construction in the CTE, the statement can now return the
following results:
BusinessEntityID  LoginID                    JobTitle
212               adventure-works\peng0      Quality Assurance Supervisor
274               adventure-works\stephen0   North American Sales Manager
As you can see, I’ve gotten around the limitations of the INTERSECT operator and am
now returning additional information from one of the tables. I could have also joined the
CTE to a different table in order to include additional information. For example, I might
have joined what I have here to the Person table to retrieve the employee’s first and last
names. The point is, the CTE lets you be quite flexible when working with the
INTERSECT operator; you can still determine which rows match but also return all the
data you need, regardless of the source table.
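For instance, joining the CTE to the Person table, as suggested above, might look something like the following sketch (assuming the standard AdventureWorks2008 Person.Person table, which shares BusinessEntityID values with the Employee table):

```sql
-- A sketch: join the INTERSECT results to Person.Person to retrieve
-- each matching employee's first and last names.
WITH
cteCandidates (BusinessEntityID)
AS
(
  SELECT BusinessEntityID
  FROM HumanResources.Employee
  INTERSECT
  SELECT BusinessEntityID
  FROM HumanResources.JobCandidate
)
SELECT
  c.BusinessEntityID,
  p.FirstName,
  p.LastName
FROM
  cteCandidates AS c
  INNER JOIN Person.Person AS p
  ON p.BusinessEntityID = c.BusinessEntityID
ORDER BY
  c.BusinessEntityID;
```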
In the following statement, I again combine two queries, one that retrieves data from the
Employee table and one that retrieves data from the JobCandidate table:
SELECT BusinessEntityID
FROM HumanResources.Employee
EXCEPT
SELECT BusinessEntityID
FROM HumanResources.JobCandidate;
This statement is nearly identical to the INTERSECT construction you saw in the first
two examples, except, of course, for the use of the EXCEPT operator. However,
because the query to the left of the operator is retrieving data from the Employee table,
the final result set will include data only from that table, and not the JobCandidate table.
The Employee table, as it turns out, contains 290 rows. As you’ll recall from the
previous examples, the two rows in the table with the BusinessEntityID values of 212
and 274 match the two rows in the JobCandidate table that also have BusinessEntityID
values of 212 and 274. That means these two rows should be excluded from the result
set of the query above, which is exactly what happens. The query returns 288 rows with
BusinessEntityID values ranging from 1 through 290; IDs 212 and 274 are not among them.
Now let’s look at what happens when you reverse the order of the queries, as I’ve done
in the following example:
SELECT BusinessEntityID
FROM HumanResources.JobCandidate
EXCEPT
SELECT BusinessEntityID
FROM HumanResources.Employee;
Notice that the query that retrieves data from the JobCandidate table now comes first,
that is, sits to the left of the EXCEPT operator. The results from this statement, as you
would expect, are quite different from the previous example. All that is returned is a
single NULL value. In other words, according to the results, the JobCandidate table
contains no BusinessEntityID values that are not contained in the Employee table. This
is, of course, exactly the case.
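One detail worth noting about this result: unlike a join predicate, the EXCEPT and INTERSECT operators treat two NULL values as equal when comparing rows, which is why the NULL BusinessEntityID values in the JobCandidate table collapse into a single NULL row rather than disappearing. A minimal sketch of this behavior:

```sql
-- EXCEPT returns distinct rows and compares NULLs as equal, so the two
-- NULL rows on the left collapse into one NULL row in the result, while
-- the value 1 is removed by the right-hand query.
SELECT NULL AS id
UNION ALL SELECT NULL
UNION ALL SELECT 1
EXCEPT
SELECT 1;
```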
As with the CTE example above, which uses the INTERSECT operator, you can also
use CTEs with EXCEPT operators. But as the last example points out, if your CTE
returns no data, your main query will also return no data (at least if you’re using an inner
join), but that’s true with either operator. Overall, in fact, you’ll find there’s little
difference between the INTERSECT and EXCEPT operators, in terms of how you use
them. The difference, of course, is in the results: INTERSECT returns rows common to
both queries, and EXCEPT returns only those rows from the left query that do not appear in the right query. Both operators, however,
are useful companions to the combining capabilities that the UNION and UNION ALL operators
provide. You can find more details about the INTERSECT and EXCEPT operators by
referring to the topic “EXCEPT and INTERSECT (Transact-SQL)” in SQL Server Books
Online. There you’ll find additional information about each operator and additional
examples of how to use them.
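As a footnote to that comparison, the Lunch and Dinner views from the beginning of this article also illustrate the difference between UNION and UNION ALL: UNION removes duplicate rows, while UNION ALL keeps every row:

```sql
-- With UNION, bread, coffee, and olives each appear once in the results;
-- with UNION ALL, each appears twice because both views contain them.
SELECT item FROM Lunch
UNION ALL
SELECT item FROM Dinner;
```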
If you care about SQL Server performance, you need to take SQL Server statistics into account.
Statistics are lightweight objects that describe how data in SQL Server tables is
distributed. The query optimizer uses them to create query plans that improve query
performance.
If you have the AUTO_UPDATE_STATISTICS option turned on for the database, the query
optimizer will automatically determine when statistics might be out of date and then update them
when they are used by a query. But you need to update your statistics manually when the
automatic update does not occur frequently enough to provide a proper set of statistics,
or when the sampled nature of the automatic updates causes your statistics to be inaccurate.
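The option mentioned above can be checked and enabled with a statement like the following sketch; the database name here is just an example:

```sql
-- Turn on automatic statistics updates for a database
-- (substitute your own database name).
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;

-- Verify whether the option is enabled.
SELECT name, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'AdventureWorks';
```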
Note, though, that updating statistics causes queries to recompile, and experienced users
recommend not updating statistics too often. You need to find the middle ground
between the time it takes to recompile queries and the benefit of improved query plans.
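To help find that middle ground, you can check when the statistics on a table were last updated; the table name below is just an example:

```sql
-- List each statistics object on a table and when it was last updated
-- (substitute your own table name).
SELECT s.name AS statistics_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail');
```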
Use the following T-SQL command to update the statistics for an index (the index name here is an example):

USE AdventureWorks;
GO
UPDATE STATISTICS Sales.SalesOrderDetail AK_SalesOrderDetail_rowguid;
GO
With the help of the following T-SQL command, you can update statistics for a table (the table name here is an example):

USE AdventureWorks;
GO
UPDATE STATISTICS Sales.SalesOrderDetail;
GO
Update all statistics
To update all statistics for internal and user-defined tables in the database, use the
sp_updatestats system stored procedure:

EXEC sp_updatestats;
All you need to do is set up a SqlBak maintenance job. You can do it in the following way:
Go to your “Dashboard” page, click “Add new job”, then select “Add maintenance job”.
Select the computer that you need to work with and check the SQL Server connection.
Add Maintenance Scripts
Also, you can set the schedule for your SqlBak maintenance job. To do it, click “Schedule
maintenance” and specify the date of the first start and the interval at which the operation will
be repeated.
Add E-mail Notifications
Besides all this, you can turn on e-mail notifications and receive a message when the maintenance
job succeeds or fails.
As you can see, the whole process takes only a few minutes, and your SqlBak maintenance job will
run according to the selected schedule.