
A transaction log grows unexpectedly or becomes full on a computer that is running SQL Server

This article was previously published under Q317375.


SUMMARY
In SQL Server 7.0, SQL Server 2000, and SQL Server 2005, transaction log files can expand automatically through the autogrow setting. Typically, the size of the transaction log file stabilizes when it can hold the maximum number of transactions that can occur between the transaction log truncations that checkpoints or transaction log backups trigger. However, in some situations the transaction log may become very large and run out of space or become full.

Typically, you receive the following error message when a transaction log file takes up the available disk space and cannot expand any more:

Error: 9002, Severity: 17, State: 2
The log file for database '%.*ls' is full.

If you are using SQL Server 2005, you receive an error message that is similar to the following:

Error: 9002, Severity: 17, State: 2
The transaction log for database '%.*ls' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

In addition to this error message, SQL Server may mark databases suspect because of a lack of space for transaction log expansion. For additional information about how to recover from this situation, see the "Insufficient Disk Space" topic in SQL Server Books Online. Additionally, transaction log expansion may result in the following situations:

A very large transaction log file.
Transactions may fail and may start to roll back.
Transactions may take a long time to complete.
Performance issues may occur.
Blocking may occur.


Causes
Transaction log expansion may occur because of the following reasons or scenarios:

Uncommitted transactions
Extremely large transactions
Operations: DBCC DBREINDEX and CREATE INDEX

While restoring from transaction log backups
Client applications do not process all results
Queries time out before a transaction log completes the expansion, and you receive false 'Log full' error messages
Unreplicated transactions

Note In SQL Server 2005, you can review the log_reuse_wait and log_reuse_wait_desc columns of the sys.databases catalog view to determine the following things:

Why the transaction log space is not reused
Why the transaction log cannot be truncated
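As a quick check, both columns can be read directly from the catalog view. A minimal sketch, assuming SQL Server 2005 or later; 'YourDatabase' is a placeholder name:

```sql
-- Show why log space cannot be reused, per database.
-- NOTHING means the log is truncating normally; values such as LOG_BACKUP,
-- ACTIVE_TRANSACTION, or REPLICATION point at the causes discussed below.
SELECT name,
       log_reuse_wait,
       log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';  -- omit the WHERE clause to see all databases
```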

Uncommitted transactions
Explicit transactions remain uncommitted if you do not issue an explicit COMMIT or ROLLBACK command. This most frequently occurs when an application issues a CANCEL or a Transact-SQL KILL command without a corresponding ROLLBACK command. The transaction cancellation occurs, but it does not roll back; therefore, SQL Server cannot truncate any transaction that occurs after this point because the aborted transaction is still open. You can use the DBCC OPENTRAN Transact-SQL statement to verify whether there is an active transaction in a database at a particular time. For more information about this particular scenario, click the following article numbers to view the articles in the Microsoft Knowledge Base:
295108 Incomplete transaction may hold large number of locks and cause blocking
171224 Understanding how the Transact-SQL KILL command works
Additionally, see the "DBCC OPENTRAN" topic in SQL Server Books Online. Scenarios that may result in uncommitted transactions include:

An application design that assumes that all errors cause rollbacks.
An application design that does not completely take into account SQL Server behavior when it rolls back to named or nested named transactions. If you try to roll back to an inner named transaction, you receive the following error message:
Server: Msg 6401, Level 16, State 1, Line 13
Cannot roll back InnerTran. No transaction or savepoint of that name was found.
After SQL Server generates the error message, it continues to the next statement. This is by design. For more information, see the "Nested Transactions" topic in SQL Server Books Online or in "Inside SQL Server". Microsoft recommends the following when you design your application:

Only open one transaction unit (consider the possibility that another process may call yours).
Check @@TRANCOUNT before you issue a COMMIT, a ROLLBACK, a RETURN, or a similar command or statement.
Write your code with the assumption that another transaction might "nest" yours, and plan for the outer transaction to be rolled back when an error occurs.
Review savepoint and mark options for transactions. (These do not release locks!)
Perform complete testing.

An application that permits user interaction inside transactions. This causes the transaction to remain open for a long time, which causes blocking and transaction log growth, because the open transaction cannot be truncated and new transactions are added to the log after the open transaction.
An application that does not check @@TRANCOUNT to verify that there are no open transactions.
Network or other errors that close the client application connection to SQL Server without informing it.
Connection pooling. After worker threads are created, SQL Server reuses them if they are not servicing a connection. If a user connection starts a transaction and disconnects before committing or rolling back the transaction, and a connection thereafter reuses the same thread, the previous transaction remains open. This situation results in locks that stay held from the previous transaction and prevents the truncation of the committed transactions in the log, which results in large log file sizes. For more information about connection pooling, click the following article number to view the article in the Microsoft Knowledge Base:
164221 How to enable connection pooling in an ODBC application
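The @@TRANCOUNT recommendations above can be sketched as a defensive stored-procedure pattern. This is an illustrative sketch only: the procedure, table, and column names are hypothetical, and TRY/CATCH requires SQL Server 2005 or later.

```sql
-- Hypothetical procedure illustrating the @@TRANCOUNT checks recommended above.
CREATE PROCEDURE dbo.usp_SafeUpdate
AS
BEGIN
    DECLARE @StartTranCount INT;
    -- Remember whether a caller already opened a transaction.
    SET @StartTranCount = @@TRANCOUNT;

    BEGIN TRY
        -- Only open our own transaction if the caller has not opened one.
        IF @StartTranCount = 0
            BEGIN TRANSACTION;

        -- Hypothetical unit of work.
        UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE ID = 42;

        -- Only commit what we ourselves began.
        IF @StartTranCount = 0 AND @@TRANCOUNT > 0
            COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Roll back only if we own the transaction;
        -- otherwise let the outer caller decide.
        IF @StartTranCount = 0 AND @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
    END CATCH
END
```

The point of the pattern is that the procedure never commits or rolls back a transaction it did not itself begin, so it behaves correctly whether it is called standalone or nested inside a caller's transaction.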

Extremely large transactions
Log records in the transaction log files are truncated on a transaction-by-transaction basis. If the transaction scope is large, that transaction and any transactions started after it are not removed from the transaction log until it completes. This can result in large log files. If the transaction is large enough, the log file might use up the available disk space and cause a "transaction log full" error message such as error 9002. Additional information about what to do when you receive this type of error message is provided in the "More Information" section of this article. Additionally, it takes a lot of time and SQL Server overhead to roll back large transactions.

Operations: DBCC DBREINDEX and CREATE INDEX
Because of changes to the recovery model in SQL Server 2000, when you use the Full recovery model and you run DBCC DBREINDEX, the transaction log may expand significantly more than it does in SQL Server 7.0 under an equivalent recovery mode (SELECT INTO or BULK COPY allowed and "Trunc. Log on chkpt." off). Although the size of the transaction log after the DBREINDEX operation might be an issue, this approach provides better log restore performance.

While restoring from transaction log backups
This behavior is described in the following Microsoft Knowledge Base article:
232196 Log space used appears to grow after restoring from backup
If you set SQL Server 2000 to use the Bulk-Logged recovery model and you issue a BULK COPY or SELECT INTO statement, every changed extent is marked and then backed up when you back up the transaction log. Although this permits you to back up transaction logs and recover from failures even after you perform bulk operations, it adds to the size of the transaction logs. SQL Server 7.0 does not include this feature: it records only which extents are changed, not the actual extents. Therefore, logging takes up significantly more space in SQL Server 2000 than in SQL Server 7.0 in Bulk-Logged mode, although not as much as it does in Full mode.

Client applications do not process all results
If you issue a query to SQL Server and you do not handle the results immediately, you may be holding locks and reducing concurrency on your server. For example, suppose you issue a query that requires rows from two pages to populate your result set. SQL Server parses, compiles, and runs the query. This means that shared locks are placed on the two pages that contain the rows that are required to satisfy your query. Additionally, suppose that not all the rows fit into one SQL Server TDS packet (the mechanism by which the server communicates with the client). TDS packets are filled and sent to the client. If all the rows from the first page fit in the TDS packet, SQL Server releases the shared lock on that page but leaves a shared lock on the second page. SQL Server then waits for the client to request more data (you can do this by using DBNEXTROW/DBRESULTS, SQLNextRow/SQLResults, or FetchLast/FetchFirst, for example). This means that the shared lock is held until the client requests the rest of the data. Other processes that request data from the second page may be blocked.

Queries time out before a transaction log completes the expansion and you receive false 'Log full' error messages
In this situation, although there is enough disk space, you still receive an "out of space" error message. This situation varies between SQL Server 7.0 and SQL Server 2000. A query can cause the transaction log to expand automatically if the transaction log is almost full. This may take additional time, and a query may be stopped or may exceed its time-out period because of it. SQL Server 7.0 returns error 9002 in this situation. This particular issue does not apply to SQL Server 2000. In SQL Server 2000, if you have the auto-shrink option turned on for a database, there is an extremely small window of time during which a transaction log tries to expand automatically but cannot, because the auto-shrink function is running simultaneously. This may also cause false instances of error 9002. Typically, the automatic expansion of transaction log files occurs quickly. However, in the following situations, it may take longer than usual:

Growth increments are too small.
The server is slow for various reasons.
The disk drives are not fast enough.

Unreplicated transactions

The transaction log of the publisher database can expand if you are using replication. Transactions that affect the objects that are replicated are marked "For Replication." Like uncommitted transactions, these transactions are not deleted after a checkpoint or after you back up the transaction log until the log reader task copies the transactions to the distribution database and unmarks them. If an issue with the log reader task prevents it from reading these transactions in the publisher database, the size of the transaction log may continue to expand as the number of non-replicated transactions increases. You can use the DBCC OPENTRAN Transact-SQL statement to identify the oldest non-replicated transaction. For more information about troubleshooting unreplicated transactions, see the "sp_replcounters" and "sp_repldone" topics in SQL Server Books Online. For more information, click the following article numbers to view the articles in the Microsoft Knowledge Base:
306769 FIX: Transaction log of snapshot published database cannot be truncated
240039 FIX: DBCC OPENTRAN does not report replication information
198514 FIX: Restore to new server causes transactions to remain in log

MORE INFORMATION
The transaction log for any database is managed as a set of virtual log files (VLFs) whose size SQL Server determines internally, based on the total size of the log file and the growth increment in use when the log expands. A log always expands in units of whole VLFs, and it can shrink only to a VLF boundary. A VLF can exist in one of three states: ACTIVE, RECOVERABLE, and REUSABLE.

ACTIVE: The active portion of the log begins at the minimum log sequence number (LSN) that represents an active (uncommitted) transaction. The active portion of the log ends at the last-written LSN. Any VLFs that contain any part of the active log are considered active VLFs. (Unused space in the physical log is not part of any VLF.)
RECOVERABLE: The portion of the log that precedes the oldest active transaction is necessary only to maintain a sequence of log backups for recovery purposes.
REUSABLE: If you are not maintaining transaction log backups, or if you already backed up the log, SQL Server can reuse the VLFs before the oldest active transaction.

When SQL Server reaches the end of the physical log file, it starts reusing the space at the beginning of the file by circling back, in effect recycling the space in the log file that is no longer necessary for recovery or backup purposes. If a log backup sequence is being maintained, the part of the log before the minimum LSN cannot be overwritten until you back up or truncate those log records. After you perform the log backup, SQL Server can circle back to the beginning of the file. After SQL Server circles back to start writing log records earlier in the log file, the reusable portion of the log is then between the end of the logical log and the active portion of the log. For additional information, see the "Transaction Log Physical Architecture" topic in SQL Server Books Online. Additionally, you can see an excellent diagram and discussion of this on page 190 of "Inside SQL Server 7.0" (Soukup, Ron. Inside Microsoft SQL Server 7.0, Microsoft Press, 1999), and on pages 182 through 186 of "Inside SQL Server 2000" (Delaney, Kalen. Inside Microsoft SQL Server 2000, Microsoft Press, 2000). SQL Server 7.0 and SQL Server 2000 databases have autogrow and autoshrink options. You can use these options to help you compress or expand your transaction log.
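You can inspect the VLF layout described above with DBCC LOGINFO. This command is undocumented but widely used for diagnostics; treat the sketch below as such:

```sql
-- List the virtual log files of the current database.
-- Each row is one VLF. Status = 2 means the VLF is active (in use);
-- Status = 0 means it can be reused. The row count shows how many VLFs
-- the physical log file has been divided into, which is also a useful
-- indicator of log fragmentation caused by many small growth increments.
DBCC LOGINFO;
```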

For more information about how these options can affect your server, click the following article number to view the article in the Microsoft Knowledge Base:
315512 Considerations for Autogrow and Autoshrink configuration in SQL Server
There is a difference between truncating and shrinking the transaction log file. When SQL Server truncates a transaction log file, the contents of that file (for example, the committed transactions) are deleted. However, when you view the size of the file from a disk space perspective (for example, in Windows Explorer or by using the dir command), the size remains unchanged; the space inside the .ldf file can now be reused by new transactions. Only when SQL Server shrinks the transaction log file do you actually see a change in the physical size of the log file. For more information about how to shrink transaction logs, click the following article numbers to view the articles in the Microsoft Knowledge Base:
256650 How to shrink the SQL Server 7.0 transaction log
272318 Shrinking the transaction log in SQL Server 2000 with DBCC SHRINKFILE
For more information about SQL Server 6.5 transaction log usage, click the following article number to view the article in the Microsoft Knowledge Base:
110139 Causes of SQL transaction log filling up
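After the log has been truncated by a backup, the physical file can be shrunk with DBCC SHRINKFILE, as the articles above describe. A minimal sketch, assuming a database named 'YourDatabase' whose log file has the default logical name 'YourDatabase_log' (both names are placeholders), on SQL Server 2005 or later:

```sql
USE YourDatabase;
GO
-- Look up the logical name of the log file first; it is not always
-- the same as the physical file name.
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
GO
-- Shrink the log file down to approximately 100 MB.
-- The file cannot shrink past the last active VLF, so the result
-- may be larger than requested.
DBCC SHRINKFILE (N'YourDatabase_log', 100);
GO
```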

How to locate queries that consume a large amount of log space in SQL Server 2005
In SQL Server 2005, you can use the sys.dm_tran_database_transactions dynamic management view (DMV) to locate queries that consume large amounts of log space. The following columns in the sys.dm_tran_database_transactions DMV can be useful:

database_transaction_log_bytes_used
database_transaction_log_bytes_used_system
database_transaction_log_bytes_reserved
database_transaction_log_bytes_reserved_system
database_transaction_log_record_count

You can use the sql_handle column of the sys.dm_exec_requests DMV to obtain the actual statement text that consumes large amounts of log space. You can do this by joining the sys.dm_tran_database_transactions DMV and the sys.dm_tran_session_transactions DMV on the transaction_id column, and then adding an additional join with sys.dm_exec_requests on the session_id column. For more information about the sys.dm_tran_database_transactions DMV, visit the following Microsoft Developer Network (MSDN) Web site:
http://msdn2.microsoft.com/en-us/library/ms186957.aspx
For more information about the sys.dm_tran_session_transactions DMV, visit the following MSDN Web site:
http://msdn2.microsoft.com/en-us/library/ms188739.aspx
For more information about the sys.dm_exec_requests DMV, visit the following MSDN Web site:
http://msdn2.microsoft.com/en-us/library/ms177648.aspx
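The join described above can be written as follows. This is one possible formulation, not the only one; the column choice and ordering are illustrative:

```sql
-- Find the open transactions that currently hold the most log space,
-- together with the statement text of any request still executing.
SELECT st.session_id,
       dt.database_transaction_log_bytes_used,
       dt.database_transaction_log_bytes_reserved,
       dt.database_transaction_log_record_count,
       t.text AS statement_text
FROM sys.dm_tran_database_transactions dt
INNER JOIN sys.dm_tran_session_transactions st
    ON dt.transaction_id = st.transaction_id
LEFT JOIN sys.dm_exec_requests r
    ON st.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t   -- NULL text if no active request
ORDER BY dt.database_transaction_log_bytes_used DESC;
```

The LEFT JOIN / OUTER APPLY combination keeps transactions whose session is idle (no active request), which is exactly the uncommitted-transaction case discussed earlier; those rows simply show NULL statement text.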

SQL SERVER How to Stop Growing Log File Too Big


September 20, 2010 by pinaldave

I was recently engaged in a Performance Tuning Engagement in Singapore. The organization had a huge database and more than a million transactions every hour. During the assignment, I noticed that they were truncating the transaction log. This really alarmed me, so I informed them that this should not be continued anymore, because there's really no need to truncate or shorten the database log. The reason why they were truncating the database log was that it was growing too big and they wanted to manage its large size. I provided two different solutions for them. Now let's venture more into these solutions. If you are jumping over this post to leave a comment, please first read the two options as follows:

1) Convert the Recovery Model to Simple Recovery

If you are truncating the transaction logs, this means you are breaking the T-Log LSN (Log Sequence Number) chain. It follows that if disaster comes, you would not be able to restore your T-Logs and there would be no option for you to do point-in-time recovery. If you are fine with this situation and there is nothing to worry about, I suggest that you change your recovery model to the Simple Recovery Model. This way, you will not have extraordinary growth of your log file.

2) Start Taking Transaction Log Backups

If your business does not support loss of data or requires point-in-time recovery, you cannot afford anything less than the Full Recovery Model. In the Full Recovery Model, your transaction log will grow until you take a backup of it. You need to take T-Log backups at a regular interval. This way, your log will not grow beyond some limit. If you are taking an hourly T-Log backup, your T-Log will grow for up to one hour, but after that the T-Log backup will truncate all the committed transactions once you take it. Doing this does not make the size of the T-Log go down much; rather, the space is marked as empty for the next hour's T-Log records to populate.
With this method, you can restore your database to a point in time if a disaster ever happens on your server. Let us run an example to demonstrate this. In this case, I have done the following steps:

1. Create a sample database in FULL RECOVERY model
2. Take a full backup (a full backup is a must for taking subsequent log backups)
3. Repeat the following operations:
   1. Take a log backup
   2. Insert some rows
   3. Check the size of the log file
4. Clean up

After a short while, you will notice that the log file (.ldf) will stop increasing but the size of the backup will increase. If you have an issue with your log file growth, I suggest that you follow either of the above solutions instead of truncating it.
/* FULL Recovery and Log File Growth */
USE [master]
GO
-- Create Database SimpleTran
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SimpleTran')
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END
GO
CREATE DATABASE [SimpleTran]
GO
-- Set database recovery model to FULL
ALTER DATABASE [SimpleTran] SET RECOVERY FULL
GO
BACKUP DATABASE [SimpleTran] TO DISK = N'D:\SimpleTran.bak'
WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Check database log file size
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name,
(size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Create table in database and fill it inside a transaction
USE SimpleTran
GO
IF EXISTS (SELECT * FROM sys.objects
WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[RealTempTable]') AND TYPE IN (N'U'))
DROP TABLE [dbo].[RealTempTable]
GO
CREATE TABLE RealTempTable (ID INT)
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the database
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name,
(size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Take Log Backup
BACKUP LOG [SimpleTran] TO DISK = N'D:\SimpleTran.bak'
WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run the following transaction multiple times and check the size of the T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the database
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name,
(size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
/*
Now run the following code multiple times. You will notice that it will not
increase the size of the .ldf file, but it will for sure increase the size of
the log backup.
*/
-- Second time
-- START
BACKUP LOG [SimpleTran] TO DISK = N'D:\SimpleTran.log'
WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run the following transaction and check the size of the T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the database
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name,
(size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- END
-- Clean up
USE [master]
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SimpleTran')
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END

If you run the code that is listed between START and END, you would get the following results almost every time:

This validates our earlier discussion. After seeing this article, the Singapore team implemented Log Backup instead of Log Truncate right away. Let me know what you think about this article.

SQL Server Log File Growth


Issue 3, Number 2, February 2011

Uncontrolled growth in the database log file is one of the more common problems we are called on to fix. It's caused by records not being removed from the log when they should be. This usually results from an incomplete understanding of database recovery modes. In this issue we bring you an overview of the issues involved and suggestions for fixing or avoiding log file growth.

Recovery Mode
The recovery mode of the database determines how and when closed transactions are removed from the log file to make room for new records. A SQL Server database can be set to one of three recovery modes: SIMPLE, FULL, and BULK LOGGED. We can consider BULK LOGGED to be the same as FULL for the purposes of this article.

Full Recovery Mode


Most enterprise installations of SQL Server run in FULL recovery mode because of its superior recoverability of data. In FULL recovery mode, transaction records are not removed from the log file until the transaction log is backed up. Therefore, if your database is set to FULL mode, you must back up the log periodically in order to create free space in the file. Failing to do so is the most common reason for log file growth problems. You may find that the log file is still huge even though you are backing up the logs regularly. Keep in mind that a log file can grow automatically, but it will not shrink automatically. Some extraordinary event in the past may have caused the file to grow out of proportion to its normal needs, and it has remained at that size although very little of the space in the file is currently being used.

In normal operation, a log file in FULL recovery mode will grow to the size it needs to be to contain the maximum number of transaction records it accumulates between backups. The file size will be relatively large if there is a high transaction load on the database or if the interval between backups is long. It will be smaller if the transaction load is lower or if the backups are more frequent. In either case, the file should stabilize at the size it needs to be.

Simple Recovery Mode


SIMPLE recovery mode is, well, simpler. Log records are deleted as soon as their transaction completes. This usually prevents log file growth, because transactions normally are small and complete quickly. However, if you are loading a million new records every night and that is being done in a single transaction, your log file will be pretty large, because the log file expands to a size that will accommodate all the records from your biggest transactions. You don't need to back up your log file in SIMPLE mode; in fact, you will get an error if you try. The drawback to SIMPLE recovery mode is that your only option for recovering a database is to restore the last full backup you made. That means you will lose everything that happened since the last backup. This is why SIMPLE mode must be used only in cases where that degree of data loss is an acceptable risk.

Shrinking the Log File


Shrinking the log file is useful when an extraordinary event expands the log much larger than normal. However, regularly shrinking the log below its normal size is a waste of time and causes the file to have to go through many expensive growth cycles to get back to its natural size. This can cause a heavier load on the disks and delays in processing. Shrinking in this case does not give you any additional usable disk space, because the log will quickly grow back to its necessary size.

Other Causes of Log File Growth


In some cases you can be backing up your transaction logs regularly and still find uncontrolled, continuous file growth. A number of conditions can cause this. Here is one I have run across: the database is being replicated, but the distribution database is unavailable. Records involving replicated transactions cannot be removed from the log until they are sent to the distributor. There are several things that can cause this:

Network issues are preventing communication with the distribution database.
Someone attempted to disable replication manually but did not do a complete job.
Due to a communication breakdown, someone turned off or decommissioned the distribution server.

I have run into this last one a few times, and it is not as bone-headed as it sounds. Many sites have configured replication that became unnecessary at some point, and everyone has forgotten about it. At some point, it is decided to remove an old and apparently unused server, which in fact is where the distribution database is parked. Since no one is accessing the subscriber database any more, no one notices that anything is wrong until the log file fills up the disk.
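When replication is the culprit, the symptom shows up in the same catalog view mentioned earlier in this document. If the distribution database is truly gone and replication has to be cleared by hand, the documented sp_repldone procedure can mark the pending transactions as distributed so the log can finally truncate. This is a last-resort sketch, assuming SQL Server 2005 or later and that replication is being abandoned, because it breaks the replication stream:

```sql
-- Confirm that replication is what is holding the log.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc = 'REPLICATION';

-- LAST RESORT: run in the publisher database to mark all pending
-- replicated transactions as distributed, releasing them for truncation.
-- Only do this if replication to this database is being removed.
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1;
```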

Conclusion

This article is far from complete coverage of log backups, recovery modes, and so on, but I hope it has helped your understanding of the issues behind the log file growth problem. Another critical issue with log files and recovery modes is data safety. We haven't addressed that subject in this article, but we will undoubtedly discuss it in a later newsletter. As always, if you have questions about the subject, give me a call or email me.

Help! My SQL Server Log File is too big!!!


Takeaway: Overgrown transaction log files can turn into real problems if they are not handled properly. Today SQL Server consultant Tim Chapman discusses the perils of not handling SQL Server log growth properly, and what can be done to correct the problems.

Over the years, I have assisted so many different clients whose transaction log file had become too large that I thought it would be helpful to write about it. The issue can be a system-crippling problem, but it can be easily avoided. Today I'll look at what causes your transaction logs to grow too large, and what you can do to curb the problem. Note: For the purposes of today's article, I will assume that you're using SQL Server 2005 or later.

Every SQL Server database has at least two files: a data file and a transaction log file. The data file stores user and system data, while the transaction log file stores all transactions and the database modifications made by those transactions. As time passes, more and more database transactions occur, and the transaction log needs to be maintained. If your database is in the Simple recovery mode, then the transaction log is truncated of inactive transactions after the Checkpoint process occurs. The Checkpoint process writes all modified data pages from memory to disk. When the Checkpoint is performed, the inactive portion of the transaction log is marked as reusable.

Transaction Log Backups


If your database recovery model is set to Full or Bulk-Logged, then it is absolutely VITAL that you make transaction log backups to go along with your full backups. SQL Server 2005 databases are set to the Full recovery model by default, so you may need to start creating log backups even if you haven't run into problems yet. The following query can be used to determine the recovery model of the databases on your SQL Server instance:

SELECT name, recovery_model_desc
FROM sys.databases

Before going into the importance of transaction log backups, I must stress the importance of creating full database backups. If you are not currently creating full database backups and your database contains data that you cannot afford to lose, you absolutely need to start. Full backups are the starting point for any type of recovery process, and they are critical to have in case you run into trouble. In fact, you cannot create transaction log backups without first having created a full backup at some point.

The Full or Bulk-logged Recovery Mode

With the Full or Bulk-Logged recovery mode, inactive transactions remain in the transaction log file until after a Checkpoint is processed and a transaction log backup is made. Note that a full backup does not remove inactive transactions from the transaction log. The transaction log backup performs a truncation of the inactive portion of the transaction log, allowing it to be reused for future transactions. This truncation does not shrink the file; it only allows the space in the file to be reused (more on file shrinking a bit later). It is these transaction log backups that keep your transaction log file from growing too large. An easy way to make consistent transaction log backups is to include them as part of your database maintenance plan.

If your database recovery model is set to Full, and you're not creating transaction log backups and never have, you may want to consider switching your recovery mode to Simple. The Simple recovery mode should take care of most of your transaction log growth problems, because log truncation occurs after the Checkpoint process. You'll not be able to recover your database to a point in time using Simple, but if you weren't creating transaction log backups to begin with, restoring to a point in time wouldn't have been possible anyway. To switch your recovery model to Simple mode, issue the following statement in your database:

ALTER DATABASE YourDatabaseName SET RECOVERY SIMPLE

Not performing transaction log backups is probably the main cause of your transaction log growing too large. However, there are other situations that prevent inactive transactions from being removed even if you're creating regular log backups. The following query can be used to get an idea of what might be preventing your transaction log from being truncated:

SELECT name, log_reuse_wait_desc
FROM sys.databases

Long-Running Active Transactions


A long-running transaction can prevent transaction log truncation. These types of transactions can range from transactions being blocked from completing to open transactions waiting for user input. In any case, the transaction ensures that the log remains active from the start of the transaction. The longer the transaction remains open, the larger the transaction log can grow. To see the longest-running transaction on your SQL Server instance, run the following statement:

DBCC OPENTRAN

If there are open transactions, DBCC OPENTRAN will provide the session_id (SPID) of the connection that has the transaction open. You can pass this session_id to sp_who2 to determine which user has the connection open:

EXECUTE sp_who2 spid -- spid from DBCC OPENTRAN

Alternatively, you can run the following query to determine the user:

SELECT * FROM sys.dm_exec_sessions
WHERE session_id = spid -- spid from DBCC OPENTRAN

You can determine the SQL statement being executed inside the transaction in a couple of different ways. First, you can use the DBCC INPUTBUFFER statement to return the first part of the SQL statement:

DBCC INPUTBUFFER(spid) -- spid from DBCC OPENTRAN

Alternatively, you can use a dynamic management view included in SQL Server 2005 to return the SQL statement:

SELECT r.session_id, r.blocking_session_id, s.program_name, s.host_name, t.text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1
AND r.session_id = spid -- spid from DBCC OPENTRAN
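Once the offending session has been identified, one last-resort option (my suggestion, not something the text above prescribes; confirm with the session's owner first, since this rolls back their work) is to terminate the session:

```sql
-- Replace 52 with the actual session_id reported by DBCC OPENTRAN.
-- KILL rolls the open transaction back; for a large transaction the
-- rollback itself can take a long time, and the log portion it needs
-- stays active until the rollback finishes.
KILL 52;
```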

Backups
Log truncation cannot occur during a backup or restore operation. In SQL Server 2005 and later, you can create a transaction log backup while a full or differential backup is occurring, but the log backup will not truncate the log, because the entire transaction log needs to remain available to the backup operation. If a database backup is keeping your log from being truncated, you might consider cancelling the backup to relieve the immediate problem.

Transactional Replication
With transactional replication, the inactive portion of the transaction log is not truncated until transactions have been replicated to the distributor. This may be because the distributor is overloaded and having problems accepting these transactions, or because the Log Reader agent should be run more often. If DBCC OPENTRAN indicates that your oldest active transaction is a replicated one and it has been open for a significant amount of time, this may be your problem.

Database Mirroring
Database mirroring is somewhat similar to transactional replication in that it requires that transactions remain in the log until the records have been written to disk on the mirror server. If the mirror server instance falls behind the principal server instance, the amount of active log space will grow. In this case, you may need to stop database mirroring, take a log backup that truncates the log, apply that log backup to the mirror database, and restart mirroring.
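The remedy described above could look roughly like the following sequence. This is only a sketch: the database name and path are assumptions, and breaking and re-establishing a mirroring session has prerequisites (endpoints, restoring backups on the mirror) that are not shown here.

```sql
-- Run on the principal: break the mirroring session.
ALTER DATABASE YourDatabaseName SET PARTNER OFF;

-- Back up the log so its inactive portion can be truncated.
BACKUP LOG YourDatabaseName
TO DISK = 'D:\Backups\YourDatabaseName_Log.trn';

-- To re-establish mirroring, the log backup (and any others taken since
-- the mirror diverged) must be restored WITH NORECOVERY on the mirror
-- before running ALTER DATABASE ... SET PARTNER again.
```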

Disk Space
It is possible that you're simply running out of disk space, causing your transaction log to error. You might be able to free disk space on the drive that contains the transaction log file by deleting or moving other files; the freed space will allow the log file to enlarge. If you cannot free enough disk space on the drive that currently contains the log file, you may need to move the file to a drive with enough space to handle the log. If your log file is not set to grow automatically, you'll want to consider changing that or adding additional space to the file. Another option is to create a new log file for the database on a different disk that has enough space by using the ALTER DATABASE YourDatabaseName ADD LOG FILE syntax.
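The ADD LOG FILE option mentioned above looks roughly like the following. The database name, logical file name, path, and sizes are all placeholder assumptions to adapt to your environment.

```sql
-- Adds a second log file on a drive with free space; SQL Server will
-- begin using it when the first log file cannot grow further.
ALTER DATABASE YourDatabaseName
ADD LOG FILE
(
    NAME = YourDatabaseName_log2,                      -- logical name (assumed)
    FILENAME = 'E:\SQLLogs\YourDatabaseName_log2.ldf', -- assumed path
    SIZE = 1GB,
    FILEGROWTH = 256MB
);
```

This relieves the immediate space pressure but does not address why the log grew; the truncation causes discussed above still apply.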

Shrinking the File


Once you have identified your problem and have been able to truncate your log file, you may need to shrink the file back to a manageable size. You should avoid shrinking your files on a consistent basis, as it can lead to fragmentation issues. However, if you've performed a log truncation and need your log file to be smaller, you're going to need to shrink your log file. You can do it through Management Studio by right-clicking the database and selecting All Tasks, Shrink, then choosing Database or Files. If I am using the Management Studio interface, I generally select Files and shrink only the log file. This can also be done using T-SQL. The following query will find the name of my log file; I'll need this to pass to the DBCC SHRINKFILE command:

SELECT name FROM sys.database_files WHERE type_desc = 'LOG'

Once I have my log file name, I can use the DBCC command to shrink the file. In the following command I try to shrink my log file down to 1 GB:

DBCC SHRINKFILE (SalesHistory_Log, 1000)

Also, make sure that your databases are NOT set to auto-shrink. Databases that are shrunk at continuous intervals can encounter real performance problems.

TRUNCATE_ONLY and NOLOG


If you're a DBA and have run into one of the problems listed in this article before, you might be asking yourself why I haven't mentioned just using TRUNCATE_ONLY to truncate the log directly without creating the log backup. The reason is that in almost all circumstances you should avoid doing it. Doing so breaks the transaction log chain, which makes recovering to a point in time impossible: you lose not only the transactions that have occurred since the last transaction log backup, but also the ability to recover any future transactions that occur until a differential or full database backup has been created. This method is so discouraged that Microsoft is not including it in SQL Server 2008 and future versions of the product. I'll include the syntax here to be thorough, but you should try to avoid using it at all costs:

BACKUP LOG SalesHistory WITH TRUNCATE_ONLY

It is just as easy to perform the following BACKUP LOG statement to actually create the log backup to disk:

BACKUP LOG SalesHistory TO DISK = 'C:\SalesHistoryLog.bak'

Moving forward

Today I took a look at several different things that can cause your transaction log file to become too large, and some ideas on how to overcome these problems. These solutions range from correcting your code so that transactions do not remain open so long, to creating more frequent log backups. In addition to these solutions, you should also consider adding notifications to your system to let you know when your database files are reaching a certain threshold. The more proactive you are in terms of alerts for these types of events, the better chance you'll have to correct the issue before it turns into a real problem.

Stop SQL Server transaction log (.LDF) files from growing indefinitely
Symptoms

You notice that in your SQL databases directory the .LDF files are growing permanently.

Solution

Set the recovery mode of your SQL Server databases to 'simple'.

Step-by-step instructions

1. Perform a full backup of your SQL Server databases. Note: This is very important, since switching from the full or bulk-logged recovery model to the simple recovery model breaks the backup log chain. Therefore, it is strongly recommended to back up the log immediately before switching, which allows you to recover the database up to that point. After switching, you need to take periodic data backups to protect your data and to truncate the inactive portion of the transaction log. [Source]

2. Switch the recovery mode of the SQL databases to SIMPLE. (See also: What is simple recovery mode?) Important note: "The Simple recovery model lets you restore the database to the point from which it was last backed up. However, this recovery model does not enable you to restore the database to the point of failure or to a particular time." [Source]

3. Shrink the transaction log (.LDF) files.

4. Perform a full-backup of your SQL Server databases.
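The four steps above can be sketched in T-SQL as follows. The database name, logical log file name, target size, and paths are assumptions for illustration; substitute your own values.

```sql
-- Step 1: full backup before switching (path is an assumed example).
BACKUP DATABASE YourDatabaseName
TO DISK = 'D:\Backups\YourDatabaseName_Full.bak';

-- Step 2: switch to the simple recovery model.
ALTER DATABASE YourDatabaseName SET RECOVERY SIMPLE;

-- Step 3: shrink the log file (logical log file name is assumed;
-- find it with: SELECT name FROM sys.database_files WHERE type_desc = 'LOG').
DBCC SHRINKFILE (YourDatabaseName_log, 100);  -- target size in MB

-- Step 4: full backup again to start a fresh recovery baseline.
BACKUP DATABASE YourDatabaseName
TO DISK = 'D:\Backups\YourDatabaseName_Full_AfterSwitch.bak';
```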

Optionally you can use a script for the steps described above:

Download: SQLScript_SetRecoveryModeSimple.zip

How to stop my SQL transaction log from growing too much?

When you are working with a Microsoft Dynamics NAV application and a SQL Server database, it's important to keep the following information in mind:

A SQL Server database consists of data file(s) and transaction log file(s). The data file(s) contain your data, and the transaction log stores the details of all the modifications that are performed on the database and the details of the transaction(s) that performed the modification(s). This logging of transaction details cannot be turned off in SQL Server, which implies that your transaction log file(s) will keep growing while you are using your database. The way that these transaction log file(s) grow and the type of data stored can be configured, and you can configure the transaction log file(s) to expand as needed. When a transaction log file grows until it uses all available disk space and cannot expand any more, you can no longer perform any data modification operations on your database. To prevent the transaction log file(s) from growing unexpectedly, consider one of the following methods:

Set the size of the log files to a large value to avoid the automatic expansion of the log files.
Configure the automatic expansion of log files by using memory units instead of a percentage.
Change the recovery model. Based upon how critical the data in your database is, you can use one of the following recovery models to determine how your data is backed up: Simple recovery model | Full recovery model | Bulk-logged recovery model

By using Simple, you can recover your database to the most recent backup of your database. By using Full or Bulk-Logged, you can recover your database to the point of failure by restoring your database with the transaction log file backup(s). By default in SQL Server the recovery model is set to Full; you will then need to regularly back up your transaction log files to keep them from becoming too big. You can change the recovery model to Simple if you do not want to use the transaction log files during a disaster recovery operation.

Back up the transaction log file(s) regularly to delete the inactive transactions in the transaction log.
Schedule the Update Statistics option to occur daily.
When defragmenting indexes, use DBCC INDEXDEFRAG instead of DBCC DBREINDEX. With DBCC DBREINDEX, the transaction log file might expand drastically when your database is in the Full recovery model; additionally, the DBCC INDEXDEFRAG statement does not hold locks for a long time, unlike the DBCC DBREINDEX statement.
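As a sketch of the DBCC INDEXDEFRAG call recommended above (the database, table, and index names here are placeholders):

```sql
-- Defragments a single index online, working in many small transactions,
-- so it generates far less log at once than rebuilding the index with
-- DBCC DBREINDEX under the Full recovery model.
DBCC INDEXDEFRAG ('YourDatabaseName', 'dbo.YourTable', 'IX_YourIndex');
```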

Issue with Log file growth in Sql server 2008


We are encountering an issue with log file growth. We are loading data into a table from a different source. After the data load, triggers fire and do the following activities:

1. Load the data into a different database (on a different server, using a linked server).
2. Load the data into a different table in the same database (only key columns are loaded).

But during this process the log file is growing enormously. The log file grows to 100 GB, whereas the data file growth is very minimal. We have tried to implement the following solutions:

1. Setting the recovery model to bulk-logged.
2. Scheduling the t-log backup to run every 30 minutes.

But neither solution worked; the log file is growing as usual. Observations: whenever I check the status of the log_reuse_wait_desc field in the sys.databases table, I see the status "ACTIVE_TRANSACTION". Is there a way to restrict the log growth in this scenario?

There are a couple of things to consider: what type of backups you do, and what type of backups you need. Once you have the answer to this question, you can either switch the recovery mode to Simple, or leave it Full, but then you need to make incremental backups on a daily basis (or whatever makes you happy with the log size). To set your database logging to Simple (but only if you do full backups of your database!):

1. Right-click on your database
2. Choose Properties
3. Choose Options
4. Set Recovery mode to Simple

This will work and is best if your backup schedule is a full backup every day, because in such a scenario (with the Full recovery mode and no log backups) your log won't be trimmed and it will skyrocket, just like in your case. If you use the Grandfather-Father-Son backup technique, which means a monthly full backup, a weekly full backup, and then an incremental backup every day, then you need the Full recovery mode. If 1 GB of log per day is still too much, you can take incremental backups every hour or every 15 minutes. This should fix the problem of the log growing more and more. If you run a full backup every day, you can switch to the Simple recovery mode and you should be fine without risking your data (if you can live with possibly losing 1 day of data). If you plan to use incremental backups, leave it at the Full recovery mode.

Stop the transaction log of a SQL Server from growing unexpectedly


5 12 2008

In SQL Server 2000 and in SQL Server 2005, each database contains at least one data file and one transaction log file. SQL Server stores the data physically in the data file. The transaction log file stores the details of all the modifications that you perform on your SQL Server database and the details of the transactions that performed each modification. Because the transactional integrity is considered a fundamental and intrinsic characteristic of SQL Server, logging the details of the transactions cannot be turned off in SQL Server. The transaction log file is logically divided into smaller segments that are referred to as virtual log files. In SQL Server 2000, you can configure the transaction log file to expand as needed. The transaction log expansion can be governed by the user or can be configured to use all the available disk space. Any modifications that SQL Server makes to the size of the transaction log file, such as truncating the transaction log files or growing the transaction log files, are performed in units of virtual log files.

If the transaction log file that corresponds to a SQL Server database is filled and if you have set the option for the transaction log files to grow automatically, the transaction log file grows in units of virtual log files. Sometimes, the transaction log file may become very large and you may run out of disk space. When a transaction log file grows until the log file uses all the available disk space and cannot expand any more, you can no longer perform any data modification operations on your database. Additionally, SQL Server may mark your database as suspect because of the lack of space for the transaction log expansion.

Reduce the size of the transaction logs

To recover from a situation where the transaction logs grow to an unacceptable limit, you must reduce the size of the transaction logs. To do this, you must truncate the inactive transactions in your transaction log and shrink the transaction log file.

Note: The transaction logs are very important to maintain the transactional integrity of the database. Therefore, you must not delete the transaction log files even after you make a backup of your database and the transaction logs.

Truncate the inactive transactions in your transaction log

When the transaction logs grow to an unacceptable limit, you must immediately back up your transaction log file. While the backup of your transaction log files is created, SQL Server automatically truncates the inactive part of the transaction log. The inactive part of the transaction log file contains the completed transactions, and therefore, the transaction log file is no longer used by SQL Server during the recovery process. SQL Server reuses this truncated, inactive space in the transaction log instead of permitting the transaction log to continue to grow and to use more space. You can also delete the inactive transactions from a transaction log file by using the Truncate method.

Important: After you manually truncate the transaction log files, you must create a full database backup before you create a transaction log backup.

Shrink the transaction log file

The backup operation or the Truncate method does not reduce the log file size. To reduce the size of the transaction log file, you must shrink the transaction log file. To shrink a transaction log file to the requested size and to remove the unused pages, you must use the DBCC SHRINKFILE operation. The DBCC SHRINKFILE Transact-SQL statement can only shrink the inactive part inside the log file.

Note: The DBCC SHRINKFILE Transact-SQL statement cannot truncate the log and shrink the used space inside the log file on its own.

Prevent the transaction log files from growing unexpectedly

To prevent the transaction log files from growing unexpectedly, consider using one of the following methods:

Set the size of the transaction log files to a large value to avoid the automatic expansion of the transaction log files.
Configure the automatic expansion of transaction log files by using memory units instead of a percentage after you thoroughly evaluate the optimum memory size.
Change the recovery model. If a disaster or data corruption occurs, you must recover your database so that the data consistency and the transactional integrity of the database are maintained. Based on how critical the data in your database is, you can use one of the following recovery models to determine how your data is backed up and what your exposure to the data loss is:
o Simple recovery model
o Full recovery model
o Bulk-logged recovery model
By using the simple recovery model, you can recover your database to the most recent backup of your database. By using the full recovery model or the bulk-logged recovery model, you can recover your database to the point when the failure occurred by restoring your database with the transaction log file backups. By default, in SQL Server 2000 and in SQL Server 2005, the recovery model for a SQL Server database is set to the Full recovery model. With the full recovery model, regular backups of the transaction log are used to prevent the transaction log file size from growing out of proportion to the database size. However, if the regular backups of the transaction log are not performed, the transaction log file grows to fill the disk, and you may not be able to perform any data modification operations on the SQL Server database. You can change the recovery model from full to simple if you do not want to use the transaction log files during a disaster recovery operation.

Back up the transaction log files regularly to delete the inactive transactions in your transaction log.
Design the transactions to be small.
Make sure that no uncommitted transactions continue to run for an indefinite time.
Schedule the Update Statistics option to occur daily.
To defragment the indexes to benefit the workload performance in your production environment, use the DBCC INDEXDEFRAG Transact-SQL statement instead of the DBCC DBREINDEX Transact-SQL statement. If you run the DBCC DBREINDEX statement, the transaction log may expand significantly when your SQL Server database is in Full recovery mode. Additionally, the DBCC INDEXDEFRAG statement does not hold the locks for a long time, unlike the DBCC DBREINDEX statement.
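For the daily Update Statistics suggestion above, a minimal job step could be the following. It uses the built-in sp_updatestats procedure; wiring it into a SQL Server Agent job or maintenance plan is left to your environment.

```sql
-- Refreshes out-of-date statistics for all user-defined and internal
-- tables in the current database.
EXEC sp_updatestats;
```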
