

SAP on SQL Server ... | SCN


SAP on SQL Server

9 Posts

SCN Gamification
Posted by Eduardo Rezende Apr 29, 2013

What do you think about the new SCN Gamification? If this is a completely strange topic for you, check Chip's blog: We're LIVE with #SCN Gamification! #SCNGameOn

Personally, I think this new platform will help the community. Do you have suggestions about any particular badge (or mission) we should have here at the SAP on SQL Server space? I already got some badges, like: I Was Here, First Steps, I Shared Some Knowledge! Which badges did you get? Any particular badge you are looking for?

49 Views


Tags: scngameon

How to access an external Microsoft SQL Server database

Posted by Beate Groetschnig Jan 19, 2013

Quite often someone asks me how an external SQL Server database can be accessed by an SAP system, e.g. to:

- access data in an external SQL Server database with the SAP system
- report against data in an external SQL Server database with Business Intelligence / Business Warehouse
- use DBACockpit to monitor an external SQL Server instance

Depending on:

- which operating system your SAP application servers run on
- which purpose you want to use the connection for
- which type of SAP application servers (ABAP, Java, dual-stack) are available in the SAP system

there are different connection types, technical requirements and restrictions. This blog post clarifies the possibilities and restrictions and covers frequently asked questions:

1. Options and technical requirements to access an external SQL Server database
2. How to set up a connection with UDConnect
3. How to set up a connection with DBCon / Multiconnect
4. How to monitor an external SQL Server database using DBACockpit
5. Troubleshooting

1. Options and technical requirements to access an external SQL Server Database

The SAP standard ways to connect an external SQL Server instance with an SAP system are:

- Multiconnect (DBCON)
- UDConnect (Universal Data Connect)


Regardless of the way you choose, you can only connect to remote databases which are reachable via network from your SAP application server.

DBCON / Multiconnect

DBCON / Multiconnect uses the Microsoft SQL Server Native Client software (SNAC) to establish a connection to the remote SQL Server instance. The Microsoft SQL Server client software for Windows consists of several *.dll files. For a long time it was available for Windows platforms only; recently, Microsoft ported its native ODBC driver to Linux, so heterogeneous Linux/Windows scenarios are now possible. DBCON utilizes the SAP ABAP stack to access the external databases, so your system requires at least one ABAP-stack-based SAP application server running on Windows or Linux x86_64.

UDConnect

UDConnect uses a JDBC (Java Database Connectivity) driver to establish a connection to the remote SQL Server instance. The JDBC driver consists of one or more *.jar files and can be used on Windows, Unix and Linux operating systems. As UDConnect utilizes the J2EE engine of the SAP application server to access the external databases, you need at least one Java-stack-based SAP application server in your SAP system in order to use UDConnect.

Connectivity Matrix

                 Java Stack   ABAP Stack   Dual Stack
  Windows        UDConnect    DBCon       UDConnect, DBCon
  Linux x86_64   UDConnect    DBCon       UDConnect, DBCon
  Unix           UDConnect    none        UDConnect
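To make the matrix above easier to apply to a concrete landscape, here is a small illustrative Python sketch (the function and dictionary are purely hypothetical helpers, not SAP tooling) that looks up which connection options a set of application servers offers:

```python
# Hypothetical lookup mirroring the connectivity matrix above.
# Keys are (operating system, stack type); values are the supported options.
MATRIX = {
    ("windows", "java"): {"UDConnect"},
    ("windows", "abap"): {"DBCon"},
    ("windows", "dual"): {"UDConnect", "DBCon"},
    ("linux_x86_64", "java"): {"UDConnect"},
    ("linux_x86_64", "abap"): {"DBCon"},
    ("linux_x86_64", "dual"): {"UDConnect", "DBCon"},
    ("unix", "java"): {"UDConnect"},
    ("unix", "abap"): set(),          # neither option is available here
    ("unix", "dual"): {"UDConnect"},
}

def connection_options(app_servers):
    """Union of connection options over all (os, stack) application servers."""
    opts = set()
    for os_name, stack in app_servers:
        opts |= MATRIX[(os_name, stack)]
    return opts
```

For example, a system consisting only of Unix ABAP servers yields an empty set, which is exactly the restriction described in the remarks below: adding a single Windows ABAP server makes DBCon available.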

Remarks: If your system consists solely of ABAP-stack-based servers running on Unix platforms, you can use neither UDConnect nor DBCON. Why? Because UDConnect requires at least one Java-stack-based SAP application server (regardless of the operating system), and DBCON requires at least one Windows- or Linux x86_64-based SAP application server. DBCon on a Linux x86_64-based application server can only connect to SQL Server versions 2005 and higher; predecessor releases are not supported by the Microsoft driver. Furthermore, the driver is only supported for Red Hat Enterprise Linux 5.x and higher and for SUSE SLES 11 SP2 and higher.

2. How to set up a connection with UDConnect

UDConnect cannot be used for remote monitoring of a SQL Server-based system. However, you can use it to access data in an external SQL Server database. Setting up UDConnect in order to access data in an external SQL Server database with BW/BI requires four steps:

1. Adding an RFC server on the Java-stack side
2. Defining an RFC destination on the BW/BI side
3. Installing and configuring the JDBC driver on the Java-stack side
4. Configuring the connection URL for the external database on the Java-stack side

For step-by-step instructions please see the configuration guide available under:

- SAP NetWeaver '04: How to configure UD Connect on the J2EE Server for JDBC Access to External Databases
- SAP NetWeaver 7.1: see attached guide (UDConnect_for_710.pdf)

3. How to set up a connection with DBCON / Multiconnect

To access data in an external SQL Server database with DBCON / Multiconnect, three steps are required:

1. Installing the SAP DBSL for SQL Server (dbmssslib.dll / dbmssslib.so)
2. On a Windows-based server: installing the Microsoft SQL Server Native Client (SNAC), or on a Linux x86_64-based server: installing the Microsoft ODBC driver for Linux
3. Creating a DBCON entry for the external database

SAP note 1774329 explains the steps required to prepare your SAP instance to connect to a remote SQL Server instance.

SAP DBSL for Windows

DBCON utilizes the ABAP stack to connect to an external database. The ABAP stack itself requires the Database Shared Library (DBSL) to communicate with a database. For each Relational Database Management System (RDBMS) supported by the ABAP stack there is a separate DBSL provided by SAP. To install the DBSL:





1. Determine which kernel your SAP system is using (32-bit / 64-bit, Unicode / non-Unicode, kernel release, operating system):
   - kernel release: go to transaction SM51, place the cursor on the SAP instance and click "Release Info"
   - bit version, Unicode / non-Unicode, operating system: go to "System" -> "Status"
2. Download the archive containing the most recent SAP DBSL for SQL Server matching your kernel: go to SAP Software Download Center -> Browse our Online Catalog -> Additional Components -> SAP Kernel -> SAP KERNEL <bitversion> <Unicode / Non-Unicode> -> SAP KERNEL <kernel_release> <bitversion> <Operating System> -> MS SQL Server -> lib_dbsl_<patchlevel>-<number>.sar
3. Extract the downloaded archive using the command: sapcar -xvf lib_dbsl_<patchlevel>-<number>.sar
4. Copy the unpacked dbmssslib.dll file into the kernel directory of all SAP application servers which you want to use to establish the connection.
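After copying the library, a quick scripted check can confirm that no application server was missed. The following Python sketch is purely illustrative (the function name and directory list are hypothetical, not part of any SAP tooling); the library name differs by platform (dbmssslib.dll on Windows, dbmssslib.so on Linux):

```python
import os

def missing_dbsl(kernel_dirs, libname="dbmssslib.dll"):
    """Return the kernel directories that still lack the DBSL file."""
    return [d for d in kernel_dirs
            if not os.path.isfile(os.path.join(d, libname))]
```

Running it over the kernel directories of all relevant application servers immediately shows where the copy step still has to be repeated.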

SAP DBSL for Linux x86_64

Please see SAP note 1644499 if you need to download and install the SAP DBSL for Linux x86_64-based servers. The note describes how to request the DBSL and also explains in detail which steps are required to properly set it up.

DBCON entry

The DBCON entry informs the ABAP stack where to find the external SQL Server database and how to authenticate. Please see SAP note 178949 to learn how to create a DBCON entry for an external SQL Server database.

Microsoft SQL Server Client for Windows

The SQL Server Native Client is used to establish the connection to the external SQL Server instance. To install it, run the sqlncli.msi installation package, which is available from the SQL Server installation DVD / CD or from the Microsoft software download website.

Microsoft ODBC Driver for Linux x86_64

SAP note 1644499 explains in detail where to download the Linux x86_64-based ODBC driver and how to install it.

4. How to monitor an external SQL Server instance using DBACockpit

To monitor an SQL Server database with DBACockpit you first need to configure a DBCON connection to the external database. Please refer to section 3 for details. If your local system is running on SQL Server as well you can skip installing the Microsoft SQL Server Native Client (SNAC) and SAP DBSL for SQL Server as both will already be in place. Then, proceed with the DBACockpit-related configuration steps. You can find detailed guides attached to SAP note 1027512 (sqldba_cockpit.pdf) and in SAP note 1316740. UDConnect cannot be used for remote monitoring - the only way you can monitor a remote system is by using DBCon.

5. Troubleshooting
Error: "No shared library found for the database with ID <DBCON_entry_name>", or "Unable to find library '<kernel_directory>/dbmssslib.sl'. -> DLENOACCESS (0,Error 0)", or "ERROR => DlLoadLib()==DLENOACCESS - dlopen ("/usr/sap/<SID>/DVEBMGS00/exe/dbmssslib.so") FAILED", or "could not load library for database connection <DBCON_entry_name>", or "cannot open shared object"

This error indicates that the ABAP stack could not find the SAP DBSL for SQL Server (dbmssslib.dll / dbmssslib.so) in the kernel directory. If you encounter this error on a Unix-based server, the root cause is clear: the DBSL does not exist for platforms other than Windows or Linux x86_64. In this case use a Windows-based or a Linux x86_64-based SAP application server to establish the connection. If your system does not contain a Windows-based or Linux x86_64-based application server, you need to set up a small one as a workaround. If you encounter this error on a Windows or Linux x86_64-based application server, make sure that the DBSL is properly installed in the kernel directory as explained in point 3.

Error:
B Wed Jan <timestamp>
B create_con (con_name=<dbcon_name>)
B Loading DB library '<kernel_directory>\dbmssslib.dll' ...
M *** ERROR => DlLoadLib: LoadLibrary(<kernel_directory>\dbmssslib.dll) Error 14001
M Error 14001 = "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem."
B *** ERROR => Couldn't load library '<kernel_directory>\dbmssslib.dll'
B ***LOG BYG=> could not load library for database connection <dbcon_name>

The DBSL was found in the kernel directory, but there was a problem while loading it. This can have various reasons. To ensure that the file itself is not corrupt, please download and install the file from scratch as explained in point 3. If the error remains afterwards, please check the OS log for further errors at the time of the error.


Error: "Generate Activation Context failed for <kernel_directory>\dbmssslib.dll. Reference error message: The referenced assembly is not installed on your system. Dependent Assembly Microsoft.VC80.CRT could not be found and Last Error was The referenced assembly is not installed on your system."

The Microsoft runtime DLLs which are required by the DBSL are missing on your server. Please install them as explained in SAP Note 684106.

Error: "Could not find stored procedure 'SAPSolMan<version>.sap_tf_version'"

DBACockpit uses stored procedures to collect monitoring information from a database. These stored procedures need to exist in the database that is being monitored. If you are using the connection for a purpose other than remote monitoring with DBACockpit, you can ignore this error. If you want to remotely monitor the SQL Server database, please make sure that you have configured the connection exactly as described in the configuration guide referenced in point 4. Then you need to create the missing stored procedures in the remote database. To do so:

1. Open transaction DBACockpit in the monitoring system.
2. Use the "System" dropdown field to select the remote SQL Server system which you want to monitor.
3. Go to Configuration -> SQL Script Execution.
4. If the monitoring schema is missing in the remote database, you will be offered a button called "create/repair schema". Use it to create the schema.
5. You will then be offered a button called "Execute script(s)". Click it to create all required monitoring stored procedures in the remote database.

You want to update the JDBC driver used by your UDConnect connection:

Follow the instructions in SAP Note 1009497.

1692 Views


Tags: sqlserver, dbcon, udconnect, remote_monitoring

Tune your SQL Server SAP Database

Posted by Beate Groetschnig Sep 28, 2012

Hi again,

In my last blog post I already discussed a major topic for SQL Server databases: the common misconceptions. Now I want to elaborate on another topic which I come across very frequently... PERFORMANCE.

Performance tuning is a very complex domain - good and deep knowledge and understanding of how SQL Server works is required to tune. For this reason it's simply impossible to quickly cover all the facts and details you need to thoroughly look into every single corner of your database that could be tuned. Why am I still writing a blog post about it then? Because I very often see SQL Server-based SAP systems where little effort could improve performance a lot, and many of the tasks which I'll talk about can even be carried out without a downtime. For this reason I always find it a pity when I look at a system and see that these basic tasks were not carried out. I have the impression that some SAP recommendations for SQL Server databases which were communicated via SAP Notes within the last couple of years are still not so well-known for some reason, so I want to seize the opportunity and broadcast them. These are general recommendations - they are not only meant for special cases, but should be followed in any case.

As for my last blog post, I have again written a KBA which contains everything I want to share, and I again post the initial version of it here for those of you who don't have access to SAP Notes and KBAs: SAP KBA 1744217 - Basic requirements to improve the performance of a SQL Server Database.

Points 2, 3, 4, 5, 8 and 9 don't even require a downtime, so you can go ahead and apply them right away. Point 3 will cause some load for large objects and should therefore be carried out when the overall system load is low and you're able to monitor it. Small tables can be compressed quite quickly and won't cause considerable load. It's a good idea to simply test it on a handful of tables with different sizes so you can see how long it takes in your system. You'll be astonished how much space (and thereby, indirectly, I/O accesses) page compression will save you.

(1) Kernel and Database Shared Library (DBSL) patchlevel

We frequently fix bugs or problems in the kernel and DBSL executables. Some of these are related to error messages, but many also relate to performance issues. For this reason it is important to make sure that your kernel and DBSL executables are regularly updated to the most recent patch levels provided by SAP. To do so, please follow SAP Note 19466.

(2) Statistics



If you follow point 7, SQL Server itself will take care of automatically updating statistics. Please do not schedule any additional statistics updates unless SAP explicitly recommends you to. Besides the automatic statistics update, please implement SAP Note 1558087.

(3) Database Compression

As of SQL Server 2008 you can page- or row-compress database objects. We've seen many cases where compressing database objects could significantly decrease the amount of space occupied by the database. This in turn means that fewer I/O accesses are required to read and write data. For this reason SAP decided to use page compression by default in all newly installed systems as of May 2011. If you are using SQL Server 2008 or higher and you fulfill all requirements from SAP Notes 1488135 and 1459005, we strongly recommend implementing compression. To check whether your database objects are already compressed:

1. Go to transaction SE38 or SA38.
2. Start report MSSCOMPRESS.
3. Set the Data Compression Type and Index Compression Type filter options to "Not compressed".
4. Wait for the table list to be refreshed.
5. If uncompressed objects are found, follow SAP Note 1488135 to page-compress them.

Note that you can choose between:

- Always ONLINE
- ONLINE, retry OFFLINE
- Always OFFLINE

Please be aware that compressing an object implicitly requires locking the object being compressed at certain times. If you use the online option, SQL Server will use as few locks as possible. If you use the offline option, the object will be locked and will not be available for access until the compression has finished. For large objects compression can take a while, so make sure to use the first option if you want to avoid this. For tables which contain columns with data type image, text, ntext, varchar(max), nvarchar(max), varbinary(max) or xml, an online compression is not possible with SQL Server releases lower than SQL Server 2012. Please consider this when planning the compression of your database.
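The LOB restriction above can be captured in a small decision helper. This Python sketch is illustrative only (the function name and version encoding are assumptions made for this example, not SAP or Microsoft API):

```python
# Column types that block ONLINE compression before SQL Server 2012,
# as listed in the text above.
LOB_TYPES = {"image", "text", "ntext", "varchar(max)",
             "nvarchar(max)", "varbinary(max)", "xml"}

def can_compress_online(column_types, sql_server_version):
    """sql_server_version given as a release year, e.g. 2008 or 2012."""
    has_lob = any(t.lower() in LOB_TYPES for t in column_types)
    return not has_lob or sql_server_version >= 2012
```

Tables without LOB columns can always be compressed online; tables with LOB columns need either SQL Server 2012 or the offline option.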

(4) Tempdb size

Especially in BW/BI systems, the tempdb is heavily used for certain tasks. Please make sure that it is correctly sized as described in SAP Note 1174635.

(5) Datafiles
To ensure that the data can be distributed over all existing data files, it is important that all data files provide free space at all times. Please follow SAP Note 1238993 to ensure that your data files are configured correctly. It is also recommended to have roughly 0.5 - 1 data files per CPU core (e.g. if your SQL Server can use 4 CPU cores, 2 - 4 data files make sense). If you are using a BW system, it makes sense to have the same number of data files for the tempdb.
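The rule of thumb above can be expressed in a couple of lines. This is only a sketch of the guidance given here (the function name is hypothetical), not a hard sizing formula:

```python
def recommended_datafiles(cpu_cores):
    """Return the (low, high) range of data files for a given core count,
    following the ~0.5 - 1 files-per-core rule of thumb above."""
    low = max(1, cpu_cores // 2)   # 0.5 files per core, at least one file
    high = cpu_cores               # 1 file per core
    return (low, high)
```

So a 4-core instance lands at 2 - 4 data files, and a BW system would use the same count for the tempdb.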

(6) Lock Pages in Memory Feature




As of SQL Server 2005 it is possible to prevent the operating system from paging out memory allocated by SQL Server to the page file. As a major part of the main memory allocated by SQL Server is the data cache, it is important that it is not paged out. Otherwise it would in the end be read from disk (the page file) instead of from main memory, which decreases performance. Please follow SAP Note 1134345 to make sure that you are using the Lock Pages in Memory feature.

(7) Parameters
Please make sure that the database parameters are set as recommended in the SAP Note for your release:

- SAP Note 327494 - SQL Server 2000
- SAP Note 879941 - SQL Server 2005
- SAP Note 1237682 - SQL Server 2008
- SAP Note 1702408 - SQL Server 2012

(8) sp_autostats
We recommend switching on sp_autostats for all objects in the database in order to leave the task of updating statistics to SQL Server. For some tables we have experienced better performance if the automatic statistics update is switched off. To correctly configure this for your database release, please follow SAP KBA 1649078.

(9) Disallow Page Level Locks

For several tables you need to disallow page level locks. Please follow SAP KBA 1648817 to properly configure this.

(10) Service Packs and Cumulative Updates

Make sure that you apply the most recent service pack and cumulative update for your SQL Server release, both on the server side and on all client sides. Please see SAP Note 62988 and SAP KBA 1733195 for details.

1149 Views


Have you ever ...

Posted by Beate Groetschnig Sep 6, 2012

... wondered why SQL Server behaves so weirdly? Have you ever asked yourself questions like:

- Why is my transaction log running full if I'm already using recovery model simple?
- How often should I update the statistics of the database objects?
- How often should I reorganize or rebuild tables and indexes?
- Why is the timestamp of the optimizer statistics for some objects not new if my Update_Tabstats job runs frequently?
- Why is so much data missing in some tables after I used repair_allow_data_loss to repair database inconsistencies?
- Why does DBCC CHECKDB or DBCC CHECKTABLE still find inconsistencies when I've already used repair_allow_data_loss?
- Why are my datafiles not growing the way I expect them to, even though I've configured the files to autogrow?
- Why is my database not occupying less space after I've archived so much data?
- Why is my table not occupying less space after I've deleted so many rows from it?
- Why is the result of my query not ordered anymore, even if it always used to be?

Bad news first. My experience says: NO, to 99.99999% what you see is NOT a bug. Instead, there's simply a gap between how you think it's supposed to work and how Microsoft designed it to work. And now the good news: YES, there IS a comprehensive explanation for your "but why?!" questions, and here you finally get all the answers at once.

To clarify all these frequent misconceptions I released: SAP KBA 1660220 - Microsoft SQL Server: Common misconceptions. For those of you who don't have access to SAP Notes and KBAs, I paste the current content of the note here.





If you come across similar topics, let me know and I'll try my best to cover them as well. Regards, Beate
---------------------------------------------------------------------------------------------------------------------------------------------------------------------

1660220 - Microsoft SQL Server: Common misconceptions

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Some widely accepted information about the Microsoft SQL Server database turns out not to be entirely correct upon closer examination. This Knowledge Base Article lists widespread incorrect assumptions about Microsoft SQL Server and explains why they are wrong:

1. The Microsoft SQL Server Agent job Update_Tabstats updates the database statistics which are used by the database optimizer to calculate execution plans, and therefore it is critical for performance if the job fails.
2. When using recovery model simple the transaction log cannot run full.
3. The result set of database accesses is always ordered by the primary key even if no ORDER BY clause is used explicitly.
4. If database accesses hang for a long time, the problem is caused by a deadlock.
5. Updating Microsoft SQL Server database statistics manually (with a SQL Server Agent job, with a SQL Server Maintenance Plan or by other means) is part of maintenance and therefore required on a regular basis.
6. Reorganizing some or all database objects is a required maintenance task and should therefore be carried out on a regular basis.
7. DBCC CHECKDB and DBCC CHECKTABLE with option repair_allow_data_loss allow you to repair database inconsistencies and will not cause any data loss.
8. After archiving or deleting data from a table, the table and its indexes will occupy less space in the database and the database itself will also occupy less space.
9. If the autogrow option is configured for all datafiles, Microsoft SQL Server will grow all files in a balanced way.

1. The SQL Server Agent job Update_Tabstats updates the database statistics which are used by the database optimizer to calculate execution plans and therefore it is critical for performance if the job fails.
Job SAP CCMS_<sid>_<SID>_Update_Tabstats does not touch the optimizer statistics at all and has no influence on execution plans. Instead, it is part of the database monitoring framework implemented and provided by SAP. It is not natively included in a Microsoft SQL Server installation, but is developed and delivered by SAP and was introduced with Basis Release 7.00 SP12. The job executes the stored procedure sap_update_tabstats. It collects meta information about database objects (e.g. key figures like the number of table rows, the reserved size of an object, the row modification counter, and many more) and stores them persistently in the database. As SQL Server does not keep any history for such key figures, SAP collects and stores them with this job in order to make historical information about database objects available. This allows analyzing how certain properties of database objects change over time and serves as a source of information for SAP DB monitoring transactions (e.g. DBACockpit, fastest growing tables, ...). Also see SAP Note 1178916 for more information.

2. When using recovery model simple the transaction log cannot run full.
The transaction log of a database consists of one or more files. For each file you can decide whether you allow SQL Server to autogrow the file if required (autogrow on/off). With recovery model simple you ensure that SQL Server will truncate the log at each checkpoint - still, this doesn't mean that the transaction log cannot run full. Imagine you have a very long-running transaction and all transaction log space is consumed before the transaction reaches the point in time where it commits. In such a case the transaction log can run full even if you are using recovery model simple. To resolve this you need to take a closer look at the transactions: is there a long-running transaction which keeps SQL Server from truncating the log? Is it normal that this transaction takes so much time, or is it caused by a wrong execution plan, bad I/O or any other performance-degrading issue? To understand this in detail you need to make yourself familiar with how transaction log truncation works, meaning: which parts of the transaction log are considered active and which are considered inactive when the truncation is carried out. Parts cannot be truncated as long as they are still active. For a detailed example please see SAP Note 421644.
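The active/inactive distinction can be illustrated with a toy model (purely hypothetical code, far simpler than SQL Server's real log management): a checkpoint may only discard log records written before the start of the oldest still-open transaction, so one long-running transaction pins everything written after it began.

```python
def truncatable_prefix(log_records, open_txns):
    """log_records: transaction ids in write order.
    open_txns: set of ids of transactions that have not committed yet.
    Returns how many records from the front a checkpoint may discard."""
    first_active = None
    for i, txn in enumerate(log_records):
        if txn in open_txns:
            first_active = i    # everything from here on is pinned
            break
    return len(log_records) if first_active is None else first_active
```

If transaction 3 started early and is still open, almost nothing is truncatable even though every other transaction committed long ago - which is exactly how the log runs full under recovery model simple.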

3. The result set of database accesses is always ordered by the primary key even if no ORDER BY clause is used explicitly.




SAP on SQL Server ... | SCN

The result set is only ordered by the key if SQL Server uses the primary index for the access. Since the database optimizer chooses the access path dynamically based on existing statistics, you cannot assume that SQL Server ALWAYS uses the primary index for certain accesses and therefore cannot rely on an ordered result. If you require the result of a query to be ordered you must use an ORDER BY clause.

4. If database accesses hang for a long time, the problem is caused by a deadlock.
This assertion is incorrect. Genuine deadlocks (in other words, the mutual blocking of several transactions) are quickly recognized by Microsoft SQL Server and eliminated within seconds by canceling one of the blocking transactions with an SQL error 1205. Database accesses that hang for a long time may have a wide variety of causes (blocking locks or suboptimal execution plans, for example), but are not deadlocks. A good starting point for analyzing why certain actions hang is creating snapshots of the current situation with hangman. Refer to SAP Notes 948633, 541256 and 806342. If you really encounter a deadlock (which becomes evident by the occurrence of an SQL error 1205), you can analyze it in more detail by following SAP Notes 111291 and 32129.

5. Updating Microsoft SQL Server database statistics manually, with a SQL Server Agent job or a SQL Server Maintenance Plan is required on a regular basis.
Updating statistics on a regular basis is important, but with SQL Server there is no need to schedule this task explicitly! As long as the autostats feature is properly enabled, SQL Server automatically detects if statistics need to be updated and carries out this task for you. The decision whether statistics are considered out of date depends on several factors, like the number of rows in the table and the number of rows modified since the last statistics update. For details on the algorithm please see the Microsoft whitepapers "Statistics Used by the Query Optimizer" for SQL Server 2000, 2005 and 2008. To ensure that the automatic statistics update is enabled correctly, please refer to the configuration note for your SQL Server release:

- SQL Server 2000: SAP Note 327494
- SQL Server 2005: SAP Note 879941
- SQL Server 2008: SAP Note 1237682
- SQL Server 2012: SAP Note 1702408

Bottom line: don't update optimizer statistics manually for any object (or the whole database) unless SAP explicitly asks you to do so. It will produce I/O load and will not have any benefit. The only exception to this rule are tables which contain date information. To ensure proper statistics for those tables at all times, you need to follow SAP Note 1558087 and schedule an update job for such tables.
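For an intuition of when autostats fires, the commonly documented threshold for these SQL Server releases is roughly 500 modifications plus 20% of the table's rows (small tables use simpler fixed thresholds). The sketch below is a rough approximation of that rule, not SQL Server's exact implementation, and the function name is made up for this example:

```python
def stats_stale(row_count, rows_modified):
    """Approximate the classic autostats staleness rule:
    small tables: ~500 modifications; larger tables: 500 + 20% of rows."""
    if row_count <= 500:
        return rows_modified >= 500
    return rows_modified >= 500 + 0.20 * row_count
```

For a 10,000-row table this means statistics are refreshed only after roughly 2,500 modifications - which also explains why the timestamp on object statistics often looks "old" even though autostats is working correctly.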

6. Reorganizing some or all database objects is a required maintenance task and should therefore be carried out on a regular basis.
Bad overall performance or bad performance of single database operations is often believed to be caused by the fragmentation of tables and indexes. As a solution, reorganization or rebuild appears to be the cure. This might apply to other relational database management systems, but for SQL Server, in most situations both the assumption that the bad performance is caused by fragmentation and the attempt to solve the problem by reorganizing or rebuilding database objects are false. For this reason, SAP explicitly recommends not to reorganize or rebuild any database objects on a regular basis. You should not even reorganize or rebuild objects as an attempt to solve a performance problem as long as it is not evident that fragmentation is the root cause of the problem (which it hardly ever is). Please see SAP Note 159316, which explains this topic in more detail.

7. DBCC CHECKDB and DBCC CHECKTABLE with option repair_allow_data_loss allow you to repair database inconsistencies and will not cause any data loss.
The repair_allow_data_loss option is not a tool which can perform magic to recover data from pages which are physically damaged or contain logically incorrect information. Instead, it does more or less exactly what its name says: it tries to retrieve as much data as possible and will discard as much data as required to return to a consistent version of the affected object(s).

It is important to understand that database inconsistencies are in almost all cases caused by malfunctions on lower layers (typically hardware or driver malfunctions). This means that due to a malfunction on these lower layers, one or more database pages are damaged - their content is no longer fully correct to a certain extent. There are various types of database inconsistencies: pages might not be linked properly anymore, links between pages might be missing completely, pages from the allocation maps (GAM, IAM, SGAM) might contain incorrect data, or pages might be damaged to an extent that they do not even have the physical structure of a SQL Server page anymore. If you are very, very lucky, this affects a page which was cached for faster access in your main memory and the inconsistent page hasn't yet been written back to disk. This is what we call a transient inconsistency, but unfortunately an inconsistency is hardly ever a transient one. In most cases the inconsistent pages are in the database files or in the log files. This means the incorrect information is on disk and there is no proper version of the affected page(s) anymore.

This should make it clear why you cannot simply "recover" from an inconsistency. An inconsistency is a damaged page - there is no way to make the database guess what the correct content of an inconsistent page would have been and simply revert the page content back to the correct version. In most cases you will have more than one inconsistency. In order to judge how bad the situation is, you need very exhaustive knowledge of SQL Server to understand which kinds of pages (e.g. index pages, data pages, leaf pages, allocation map pages) are affected and which impact this has.

Using repair_allow_data_loss in order to let the database discard everything that cannot be interpreted or read properly anymore is no solution. In most cases it will even make things worse, and there is no guarantee that this option will be able to recreate a consistent version of the affected object(s) despite accepting data loss. You might still have inconsistencies left afterwards, as depending on how bad the situation is, it might not even be possible anymore to return to a physically consistent state. On the other hand, and much more important: this leads to completely uncontrollable, unpredictable data loss, and there is no way to log or trace what is thrown away. You will have data loss, and this will cause inconsistencies on SAP application level (usually SEVERE inconsistencies).

For these reasons, SAP does not support the usage of repair_allow_data_loss. See SAP KBA 1704851 and SAP Note 142731 for further details.

8. After archiving or deleting data from a table the table and its indexes will occupy less space in the database and the database itself will also occupy less space.
There are different key figures that inform you about the space consumption of an object (reserved size, data size, index size, unused size). If you have a large table and delete a large amount of data from it (e.g. by reorganizing the entries at application level or by archiving), SQL Server will not release the freed space back to the data files. Instead, it keeps the freed space reserved for the object. If you really need to release the space back to the data files and then back to the filesystem, please refer to SAP KBA 1721843 for more details. If you do not urgently need to reclaim the space at filesystem level, SAP recommends simply leaving the object as it is: SQL Server will reuse the freed space as soon as new entries are inserted into the table.
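The relationship between those key figures is easy to check yourself. A minimal sketch (the sizes are invented, but the breakdown follows how SQL Server's sp_spaceused reports an object's allocation, where reserved = data + index size + unused):

```python
# Interpreting the sp_spaceused key figures for one table (sizes in KB).
# Assumption: reserved = data + index_size + unused, which is how
# sp_spaceused breaks down an object's allocation. Sample sizes invented.
def unused_space(reserved_kb, data_kb, index_kb):
    """Return the space still reserved for the object but currently empty."""
    return reserved_kb - data_kb - index_kb

# A table that held ~10 GB before an archiving run may still reserve most
# of that space afterwards -- only the 'unused' figure grows:
print(unused_space(reserved_kb=10_000_000, data_kb=1_500_000, index_kb=500_000))
```

A large "unused" value after archiving is therefore expected, not a problem.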

9. If the autogrow option is configured for all datafiles Microsoft SQL Server will grow all files in a balanced way.
The assumption that MS SQL Server grows files in a balanced way if autogrow is switched on for all files is a common misconception. SQL Server uses a proportional filling algorithm to distribute new data over all existing data files. This is described in more detail in SAP Note 1238993; even though the note explicitly mentions SQL Server 2008, it works the same way for all releases.

Briefly explained: when new data needs to be added to the database, SQL Server distributes it over all data files that still provide free space. SQL Server will not grow any file as long as at least one data file still has free space. Even if autogrow is configured for all data files, SQL Server waits until ALL files are completely full; only then does it autogrow. If you are using SQL Server 2008 or later and have set trace flag 1117, SQL Server grows all existing data files with autogrow=on at an autogrow event. In any other case, SQL Server grows a single file only! All new data then goes into this single grown file until it is full again. As soon as new data needs to be added again, SQL Server repeats the procedure and again grows a single file only.

To allow MS SQL Server to use all files at any time, we strongly recommend making sure that you always have free space left in all existing data files. If some of your data files are currently full, extend them if your disk layout allows it. For SQL Server 2008 and later, Microsoft provides trace flag 1117, which makes SQL Server grow all files instead of a single file at an autogrow event. This reduces the monitoring effort needed to ensure proper data distribution. Please see SAP Note 1238993 for more details and to learn how to set the trace flag.
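The behavior described above can be sketched in a toy simulation. This is not SQL Server's actual algorithm (real proportional fill distributes extents in proportion to each file's free space; the sketch approximates it greedily), but it shows the key point: no file grows until every file is full, and then only one file grows:

```python
# Toy model of proportional fill + autogrow. files maps a file name to
# [used_mb, size_mb]. Placing an extent targets the file with the most
# free space (a simplification of proportional fill). Once ALL files are
# full, autogrow fires -- and without trace flag 1117 only ONE file grows.
def place_extents(files, n_extents, growth_mb=100):
    for _ in range(n_extents):
        free = {f: v[1] - v[0] for f, v in files.items() if v[1] > v[0]}
        if not free:
            # all files full: grow a single file only (no trace flag 1117)
            victim = next(iter(files))
            files[victim][1] += growth_mb
            free = {victim: growth_mb}
        target = max(free, key=free.get)   # greedy stand-in for prop. fill
        files[target][0] += 1              # place one 1-MB extent
    return files
```

Running it with two files of 10 MB and 5 MB fills both before any growth happens; the next extent then grows only the first file, which subsequently receives all new data.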

800 Views 1 Comment Tags: sqlserver, database, mss, microsoft, db, nw os, db_administration, db_development, db_maintenance

Benefits of using SQL Server 2008 (R2) for SAP

Posted by Nick Loy May 9, 2012

Features added in SQL Server 2008 (R2)



Merge Command
Backup Compression (optional)
Transparent Data Encryption (optional)
Changed Data Capture (optional)
Star Join Optimization (BW)
Grouping Sets (BW)
Parallelism for partitions (BW)
Row and Page compression


Increased speed of partition drop: 15,000 partitions (SQL Server 2008 SP2 - feature not yet available in SQL Server 2008 R2)
Unicode compression (SQL Server 2008 R2)

Backup compression:

In earlier versions of SQL Server you could only make an uncompressed backup to disk. The size of this backup is almost the same as the size of the database. To decrease the size of the backup file, you could use compression software like WinZip or WinRAR, which requires additional CPU power and disk space. As of SQL Server 2008, a new backup option called 'Compression' directly creates a compressed backup.
Advantages and Disadvantages:

Decrease in backup size of up to 65%; this depends, of course, on the content of the data.
CPU usage increases during the backup process; the more the data can be compressed, the higher the CPU usage.
Faster backup speed (about 25%), because less disk I/O is required.
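Why the saving "depends on the content of the data" is easy to demonstrate with any general-purpose compressor. A small illustration (using Python's zlib as a stand-in for the backup compression algorithm; the sample data is invented): repetitive, database-like row data compresses dramatically, while random bytes barely compress at all.

```python
import os
import zlib

# Repetitive row data, the kind a database page is full of:
page = b"CUSTOMER;WALLDORF;DE;" * 400        # ~8 KB
# Incompressible data for comparison (e.g. already-encrypted content):
random_page = os.urandom(len(page))

saved_repetitive = 1 - len(zlib.compress(page)) / len(page)
saved_random = 1 - len(zlib.compress(random_page)) / len(random_page)

print(f"repetitive data saved: {saved_repetitive:.0%}")
print(f"random data saved:     {saved_random:.0%}")
```

The repetitive sample saves well over 90%, the random sample essentially nothing. The same effect explains why a compressed backup of an already page-compressed or encrypted database yields a much smaller benefit.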

Transparent Data Encryption (optional)

Transparent data encryption (TDE) performs real-time I/O encryption and decryption of the data and log files. The encryption uses a database encryption key (DEK), which is stored in the database boot record for availability during recovery. The DEK is a symmetric key secured by using a certificate stored in the master database of the server or an asymmetric key protected by an EKM module. TDE protects data "at rest", meaning the data and log files. It provides the ability to comply with many laws, regulations, and guidelines established in various industries. This enables software developers to encrypt data by using AES and 3DES encryption algorithms without changing existing applications.

Changed Data Capture (optional)

Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational change tables rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
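The change-table idea can be sketched in a few lines. This is a plain-Python illustration of the concept, not the real CDC schema or mechanism (although the metadata column names are loosely modelled on CDC's __$seqval and __$operation columns; the tracked rows are invented):

```python
# 'changes' plays the role of the relational change table that CDC
# maintains alongside the tracked source table.
changes = []
seq = [0]   # monotonically increasing sequence, like CDC's __$seqval

def capture(operation, row):
    """Mirror one INSERT/UPDATE/DELETE into the change table, with metadata."""
    seq[0] += 1
    changes.append({"seqval": seq[0], "operation": operation, **row})

capture("INSERT", {"id": 1, "city": "Walldorf"})
capture("UPDATE", {"id": 1, "city": "Heidelberg"})
capture("DELETE", {"id": 1, "city": "Heidelberg"})
```

A consumer can then read `changes` in sequence order to reconstruct what changed, where, and when — which is exactly what makes CDC convenient for incremental data extraction.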

Star Join Optimization (BW)

SQL Server 2008 (codename Katmai) introduces a new automatic star join optimization feature to enhance the performance of complex BI queries. This feature needs intra-query parallelism to be activated; intra-query parallelism is disabled by default in BI systems. See SAP Note 1126568 - Enable star join optimization with SQL Server 2008.

Grouping Sets (BW)

The GROUPING SETS clause allows you to easily specify combinations of field groupings in your queries to see different levels of aggregated data. Let's look at how you can use the new SQL Server 2008 GROUPING SETS clause to aggregate your data.
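What GROUPING SETS computes can be reproduced in plain Python. The sketch below mimics a T-SQL query like `SELECT region, year, SUM(amount) ... GROUP BY GROUPING SETS ((region), (year), ())`: one pass over the data produces a per-region total, a per-year total, and a grand total. The sample sales rows are invented.

```python
rows = [
    {"region": "EMEA", "year": 2008, "amount": 100},
    {"region": "EMEA", "year": 2009, "amount": 150},
    {"region": "APJ",  "year": 2008, "amount": 200},
]

def grouping_sets(rows, sets):
    """sets: list of column tuples, e.g. [("region",), ("year",), ()].
    Returns {(columns, key_values): sum_of_amount} for every grouping set."""
    result = {}
    for cols in sets:
        for row in rows:
            key = (cols, tuple(row[c] for c in cols))
            result[key] = result.get(key, 0) + row["amount"]
    return result

totals = grouping_sets(rows, [("region",), ("year",), ()])
```

The empty tuple `()` corresponds to the grand-total grouping set, just as the empty `()` does in the SQL syntax.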

Parallelism for partitions (BW)

Partitions and parallel performance are most applicable to data warehouses, batch processing, and reporting. Not all data warehouse environments will want to enable parallelism for all queries. Specifically, parallelism is most effective if the system is running only a few queries at a time and you want as many resources as possible made available to those queries to minimize execution time. If the data warehouse is already a high-concurrency environment, parallelism will not improve throughput or response time, since the collection of single-threaded queries is likely already consuming the available resources. Likewise, for peak performance, you do not want parallelism in a high-concurrency OLTP workload.

Row and Page compression (Database compression)

Depending on the SAP and SQL Server release, there are different compression types that can be used to save disk space with SAP NetWeaver running on SQL Server. This note describes how to implement ROW and PAGE compression for SQL Server 2008 (and newer) for SAP products based on SAP Application Server ABAP. It does not cover older SQL Server releases or other SAP products such as SAP Application Server Java, SAP CRM Mobile Client, or SAP BusinessObjects products. The attached SDN article "Using SQL Server Database Compression with SAP NetWeaver" contains a more detailed description and also covers the compression types of older SQL Server releases. The SDN article is also available at http://www.sdn.sap.com/irj/sdn/sqlserver

Increased speed of partition drop: 15000 Partitions

As of SQL Server 2005 you can partition tables and indexes. The maximum number of partitions used to be 1,000 for SQL Server 2005, 2008, and 2008 R2. Some customers were reaching this limit when running SAP Business Warehouse (SAP BW) on Microsoft SQL Server, so it was decided to increase the limit to 15,000 for SQL Server 2008 and 2008 R2 in a Service Pack.

SAP supports partitioning only for specific tables in SAP BW. In BW 7.00 and newer releases, the F-fact table of an SAP BW cube is automatically partitioned by the packet dimension. Each time a new request is loaded into the cube, a new partition is created on the F-fact table. Typically customers load data once a day or less, so 1,000 partitions are sufficient for almost 3 years. Furthermore, you can reduce the number of partitions by performing SAP BW cube compression (not to be confused with SQL Server data compression). However, some customers loaded data several times a day and hit the 1,000-partition limit quickly.

The 1,000-partition limit was also a pain during migrations of SAP BW systems from Oracle to SQL Server. Oracle has supported far more than 1,000 partitions for years, so we often see SAP BW systems on Oracle that already have more than 1,000 partitions. We had this particular scenario in mind when we decided to set the new limit to 15,000. In practice, more than a few thousand partitions make no sense: tens of thousands of partitions will not increase overall system performance, and will very likely decrease it.
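The "almost 3 years" figure above is simple arithmetic, and the same calculation shows how both multiple daily loads and the new limit change the picture:

```python
# How long until the F-fact table hits the partition limit, given one new
# partition per BW data load (ignoring cube compression, which merges
# request partitions away and stretches these figures further).
def years_until_limit(loads_per_day, partition_limit):
    return partition_limit / (loads_per_day * 365)

print(round(years_until_limit(1, 1_000), 1))     # ~2.7 years at one load/day
print(round(years_until_limit(5, 1_000), 1))     # well under a year at 5 loads/day
print(round(years_until_limit(1, 15_000), 1))    # decades at the new 15,000 limit
```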

Unicode compression (SQL Server 2008 R2)

SAP encourages its customers to convert existing non-Unicode systems to Unicode. However, several SQL Server customers hesitated to perform a Unicode conversion because of the increased storage requirements of UCS-2. This issue has finally been solved with Unicode compression, a new feature of SQL Server 2008 R2. A Unicode conversion running on SQL Server 2008 R2 even results in a decreased database size, so hardware restrictions are no longer an excuse to stay on non-Unicode.
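The UCS-2 storage penalty is easy to see: every character takes 2 bytes, so plain Latin text doubles in size compared with a single-byte code page. A quick check (the sample string is invented; utf-16-le is used here as the byte-for-byte equivalent of UCS-2 for characters in the Basic Multilingual Plane):

```python
# Why a Unicode conversion grew the database before SQL Server 2008 R2:
# nvarchar stores UCS-2, i.e. 2 bytes per character.
text = "MATERIAL-0001"
single_byte = text.encode("latin-1")     # single-byte code page (varchar-like)
ucs2 = text.encode("utf-16-le")          # UCS-2 as used for nvarchar

print(len(single_byte), len(ucs2))       # the UCS-2 form is exactly twice as large
```

Unicode compression in SQL Server 2008 R2 brings such values back to roughly one byte per character for Latin-heavy data, which is why the conversion can even shrink the database.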

840 Views


SAP ECC 6.0 running with MS SQL Server 2012

Posted by Jairo Pedroza May 7, 2012

We had the privilege of delivering a SQL Server 2012 First Customer Shipment project, using the standard SAP migration tools. On May 1 the production system went live, and I'd like to share the benefits so far.
OS Source: Windows Server 2003

OS Target: Windows Server 2008 R2

DB Source: MS SQL Server 2005

DB Target: MS SQL Server 2012

>>> Storage savings: 75% reduction in used size (data compression)
>>> SAP ECC performance: response times much better than before
>>> I/O: huge I/O reduction, improving performance
>>> Backup: compressed and faster (backup compression)
>>> Monthly growth in GB was reduced (compression)
>>> BI solution process: reduced from 7 hours to 3 minutes

See a few results... 100% UPTIME!





Dialog Response Time after 9 months:

Microsoft case study:
http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?casestudyid=710000001454

Links for more information about SAP applications on MS SQL Server 2012, and further case studies:
http://www.microsoft.com/casestudies/Microsoft-SQL-Server-2012-Enterprise/Wei-Chuan-Foods-Corporation/FoodManufacturer-Keeps-Competitive-Edge-and-Improves-Support-for-Growing-Business/710000000330
http://blogs.msdn.com/b/saponsqlserver/archive/2011/11/17/microsoft-s-sap-deployment-and-sql-server-2012.aspx
http://www.microsoft.com/casestudies/Microsoft-SQL-Server-2012-Enterprise/Microsoft-Information-TechnologyGroup-MSIT/Microsoft-Uses-SQL-Server-2012-5.8-Terabyte-SAP-ERP-Database-to-Run-Its-GlobalBusiness/710000000346
http://blogs.msdn.com/b/saponsqlserver/archive/2012/03/29/sql-2012-is-released-amp-running-live-at-wei-chuanfoods-taiwan.aspx

SAP Notes:
1651862 - Release planning for Microsoft SQL Server 2012

SQL Server 2012 Technologies for SAP Solutions:
http://ecohub.sap.com/api/resource/4fabc95ad2a87c2a63d2b792

Jairo Pedroza ITST Consulting www.itst.com.br http://itst.com.br/noticias/sap-netweaver-7-0-ou-maior-com-microsoft-sql-server-2012/





1262 Views 2 Comments Tags: sqlserver, database, mss, ms_sql_server_2012, migration_to_ms_sql_server, sap_application_on_ms_sql_server_2012

DB Size decreased by 81%? Awesome

Posted by Huseyin Bilgen Mar 6, 2012

In my previous blog (How to Decrease Your SAP Database Size?) I gave all the known options to decrease the total DB size of SAP solutions. The fastest, and maybe the cheapest, method is using alternative database software. The dinosaurs in IT still don't believe in databases other than Oracle, but nowadays it is easy to say that they are wrong: with the evolution of 64-bit technology, all databases perform quite well.

This weekend, in Turkey, at one of the biggest retail customers, we did an export and import of an ERP production system which was already running Windows 2008 / SQL Server 2008, but whose database did not have ROW and PAGE compression. So we decided to export and import the database to lower the DB size.

Total DB size (prior to operation): 5200 GB
Total DB size (after SQL 2008 R2 conversion*): 1000 GB
Export/import duration (done in parallel): 60 hours
Space benefit: ~81%

* SQL 2008 R2 conversion: here, "conversion" means exporting the database and importing it again into a SQL Server 2008 R2 database with the latest SAP 7.0 kernel, to enable ROW and PAGE compression.
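The headline figure checks out against the reported sizes:

```python
# Verifying the "~81%" space benefit from the before/after sizes above.
before_gb, after_gb = 5200, 1000
saving = (before_gb - after_gb) / before_gb
print(f"{saving:.0%}")
```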

613 Views

Tags: sqlserver, sap_netweaver_platform, erp

Client side libraries used by SAP/ABAP for SQL Server

Posted by Jesus Garcia Castro Nov 2, 2011

1. Interface to the database

In order to store and retrieve data from a database, a program needs to use an interface that is dependent on that particular DBMS (Database Management System). As of R/3 kernel release 4.5A, the database-dependent part of the R/3 database interface is stored in a separate library, the DBSL, and so the ABAP implementation consists of:

The DBSL (Database Shared Library), a database-dependent part of the SAP kernel that is dynamically linked to the SAP kernel.
The database client tools, i.e. libraries that are usually provided by the database manufacturer. These are either statically or dynamically linked to the database library.

Note that a Java stack also needs an interface, but it uses a completely different technology (JDBC). This is beyond the scope of this article.

1.1. SAP's DBSL

The ABAP stack of an SAP system installed on a SQL Server database needs such an interface to access the <SAPSID> database. The same interface is necessary in order to access an external SQL Server through a DBCON (Multiconnect) connection, e.g. to use the external database as a data source in an SAP BW system, or to centrally monitor several databases of your corporate landscape from an SAP Solution Manager (transaction DBACOCKPIT).

The dynamic link library that SAP delivers so that an ABAP stack is able to connect to a MS SQL Server database is called dbmssslib.dll (check note 400818 for further details on this naming convention). It is distributed with the kernel, but you can also download it separately from the SAP Service Marketplace (LIB_DBSL.SAR, in the database-dependent part of the kernel). It must be installed in the ABAP kernel executable directory (DIR_EXECUTABLE) of the SAP application server that is to access the <SAPSID> database (or any other external SQL Server database).

Note that the fact that it is a DLL implies that it can only be installed on a Microsoft Windows platform. As the Microsoft Data Access technologies are not available on other platforms, SAP did not implement a dbmssslib for non-Windows platforms; this implies that you cannot directly access a SQL Server database from an SAP application server running e.g. on a Unix host.

1.2. Loading the database interface at startup

When an SAP system is started, the database-dependent library is loaded before the DBSL is called for the first time. The system searches for the library in the directory indicated by the environment variable DIR_LIBRARY (e.g. /usr/sap/<SAPSID>/SYS/exe/run). The environment variable dbms_type contains the name of the required database management system. At startup, an attempt is made to load the library belonging to the required database management system from the directory indicated by DIR_LIBRARY.
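The lookup the kernel performs at startup can be sketched as follows. This is an illustrative re-implementation, not the actual kernel logic; the db<type>slib naming pattern follows note 400818, and the example path mirrors the DIR_LIBRARY value mentioned above.

```python
# Sketch of DBSL resolution at startup: dbms_type selects the library
# name, DIR_LIBRARY the directory. db<type>slib is the note 400818
# convention (mss -> dbmssslib, ora -> dboraslib, ...).
def dbsl_library(dbms_type, ext=".dll"):
    """Build the DBSL file name for a given dbms_type."""
    return f"db{dbms_type}slib{ext}"

def dbsl_path(dir_library, dbms_type, ext=".dll"):
    """Full path the kernel would try to load."""
    return f"{dir_library}/{dbsl_library(dbms_type, ext)}"

print(dbsl_path("/usr/sap/PRD/SYS/exe/run", "mss"))
```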

2. Short notice on Microsoft Data Access technologies





Among a large list of possibilities, Microsoft delivers ODBC and OLEDB for general access to data sources, as well as the SQL Server Native Client, which can only be used for MS SQL Server databases and is therefore highly optimized.

ODBC (Open DataBase Connectivity): a call-level interface (i.e. API functions) for C/C++ applications to access varying data stores through ODBC drivers. ODBCconf.exe is a command-line utility for configuring drivers and data source names (DSNs).

OLEDB (Object Linking and Embedding, DataBase): an object-level interface (i.e. a set of COM-based interfaces) that exposes data from a variety of sources through OLEDB providers, to be accessed by C/C++ applications.

These are the technologies used by an ABAP stack. For more information on the available technologies, see http://msdn.microsoft.com/library/ee730344.aspx.

2.1. MDAC (Microsoft Data Access Components)[i]

In order to connect their applications to a relational database, developers can use a variety of providers and drivers shipped by Microsoft or by third parties. MDAC is one of these interfaces, and it is part of the operating system. It implements OLEDB (CLSID_MSDASQL) and ODBC drivers for SQL Server. It requires a separate connection for each active select. The active select referred to here is a select using a client, or "firehose", cursor. We learned quite early that the normal select via a server-side cursor is quite expensive, so whenever possible we use the client-side cursor method, which means that we just issue the select and process the rowset. The drawback of this really fast method, before MARS (see the next paragraph), was that it blocked the socket (a socket defines the database connection through the network, much as a file descriptor defines the access to a local file) for all other operations until the rowset was read. So we mainly exploited this method for uncommitted reads, using multiple additional database connections before MARS.

So an SAP connection consisted of N uncommitted-read connections and one committed-read connection (which handled the committed reads, the blocking reads, and the modifications). In the case of committed reads, we used the firehose/client-side cursor method only for single selects or certain special cases. Nowadays, MARS (Multiple Active Result Sets) allows the handling of multiple rowsets in one database connection. So, with MARS, the SAP connection consists of only two database connections: one for committed reads (where we still use server-side cursors) and one for uncommitted reads (where we handle parallelism by using MARS).

2.2. SNAC (SQL Server Native Client)

SQL Server Native Client is a stand-alone data access application programming interface (API) that includes OLEDB (CLSID_SQLNCLI) and ODBC drivers. It was first shipped with SQL Server 2005 (SNAC 9.0). SNAC supports MARS, which allows a single connection to simultaneously support multiple active selects. The SNAC software is distributed by Microsoft with SQL Server 2005 and later versions as the file sqlncli.msi. You should look for a version suitable for your hardware platform and install it on your application server. It is important that you install SNAC 2005 SP1 or later (check note 960985). You should also make sure that you install, on all SAP application servers, the SNAC version that matches your SQL Server version (or a later one, according to note 1082356). If this is not done, unexpected issues can occur.

Older releases of the ABAP DBSL interface use OLEDB. DBSL 7.00/7.01 implements both the older OLEDB and the newer ODBC version. DBSL 7.10 and later implement only the ODBC version. Exception: special 7.10 and later DBSL DLLs are available for use with SQL Server 2000 where supported (dbmssslib_oledb.dll). This is mainly because SQL Server 2000 does not support MARS, and the ODBC DBSL requires MARS.

The ODBC DBSL will always try to use the latest available SNAC ODBC driver. The SNAC ODBC driver is implemented by SQLNCLI*.DLL. Microsoft guarantees that newer versions are backward compatible with previous server versions.

3. Compatibility and limitations

The internal implementation of the DBSL needs to interface with the database-specific technology released by Microsoft to communicate with the various data stores. SAP developers used MDAC initially, but also implemented SNAC as soon as it was released by Microsoft, to take advantage of its benefits. Although it is possible to use MDAC 2.8 to access a SQL Server 2005 database, Microsoft released SNAC as a more powerful interface. Both provide native data access to SQL Server databases, but SQL Server Native Client has been specifically designed to expose the new features of SQL Server 2005 (such as MARS, user-defined data types, query notifications, snapshot isolation, and XML data type support), while at the same time maintaining backward compatibility with earlier versions. Nowadays both MDAC and SNAC are possible, depending on the SAP release and SQL Server version. Generally speaking, SQL Server 2000 did not allow MARS and was accessed through OLEDB, while later SAP releases require ODBC. It is not the intention of this article to describe the differences in detail, but in case you are interested in this topic, we recommend the following articles as a good starting point:
http://msdn.microsoft.com/en-us/library/ms810810.aspx
http://msdn.microsoft.com/en-us/library/ms131035.aspx
http://msdn.microsoft.com/library/ee730344.aspx
The point is that we are now facing a wide range of possible combinations, some due to technical improvements and limitations, and others developed for compatibility. In order to know which combinations are supported, we present the following table:





4. How to check which client library my SAP system is indeed using?

To check the SQL Server and driver version, use SM50, select a work process, and view the developer trace file. Look for the following (ODBC):

C Thank You for using the SLODBC-interface
C ODBC Driver chosen: SQL Server Native Client 10.0 native
C lpc:(local) connection used on <server>
C Driver: sqlncli10.dll Driver release: 10.50.1804
C GetDbRelease: 10.50.1702.00
C GetDbRelease: Got DB release numbers (10,50,1702,0)

Or (OLEDB):

C Thank You for using the SLOLEDB-interface
C Provider Release:9.00.4035.00
C Using Provider SQLNCLI
C Using MARS (on sql 9.0)
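If you need to check many trace files, extracting the driver name and release can be scripted. The sketch below is an illustrative convenience helper, not an SAP tool; the sample trace snippet is taken from the ODBC example above.

```python
import re

# A fragment of a dev_w* work process trace, as shown above:
trace = """\
C  Thank You for using the SLODBC-interface
C  ODBC Driver chosen: SQL Server Native Client 10.0 native
C  Driver: sqlncli10.dll Driver release: 10.50.1804
"""

# Pull out the chosen driver and its release number.
driver = re.search(r"ODBC Driver chosen: (.+?) native", trace).group(1)
release = re.search(r"Driver release: ([\d.]+)", trace).group(1)
print(driver, release)
```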

5. References
SAP Note 400818 - Information about the R/3 Database Library
SAP Note 323151 - Several DB connections with Native SQL
SAP Note 178949 - MSSQL: Database MultiConnect with EXEC SQL
SAP Note 734034 - Native OLEDB provider SQLNCLI
SAP Note 738371 - Creating DBCON multiconnect entries for SQL Server
SAP Note 960985 - existing Stored Procedure erroneously considered as missing
SAP Note 1082356 - Using the ODBC based DBSL for Microsoft SQL Server
SAP Note 1238905 - Connection is busy with results for another command
SAP Note 1248222 - ODBC DBSL profile parameters and connect options
SAP Note 1263367 - Accept MDAC driver for DBCON
SAP Note 1341097 - MSSQL: 720 DCK, 7.0* on SQL 2000, dbmssslib_oledb.dll
SAP Note 1506487 - Error 3997 when executing native SQL
SAP Note 1644499 - How to set up a connection to MS SQL Server from Linux
SAP KBA 1544360 - SQL Error 402 during DB compression with report MSSCOMPRESS
http://msdn.microsoft.com/library/ee730344.aspx
http://msdn.microsoft.com/en-us/library/ms810810.aspx
http://msdn.microsoft.com/en-us/library/ms131035.aspx
http://help.sap.com/saphelp_nw04/helpdata/en/f3/914f3445194d468f652d45494230b1/content.htm

[i] Starting with Windows Vista, the data access components are called Windows Data Access Components, or Windows DAC.

447 Views


The others
Posted by Lars Breddemann Oct 14, 2009





Ok, I admit, I don't have a very good idea of MS SQL Server. I do Oracle and MaxDB - that's pretty much it. Of course, as a database support guy you always need to peek over the fence to the other DBMSs (e.g. when working on priority Very High messages during weekends), but this has nothing to do with gaining a certain level of real experience with the 'other' DBMS. Although my MS SQL colleague usually sits just at the opposite end of the desk, everybody is usually busy enough working on his or her own stuff.

But this makes me even more lucky to have found the following blog about SAP on MS SQL Server: Running SAP Applications on SQL Server. So if you're into MS SQL Server, you really want to pay this one a visit (as long as you return to good old Oracle and MaxDB afterwards ;-)). As far as I'm informed, the blog is written by several authors, some of them working at SAP in Walldorf most of their time.

Have fun reading!
Lars

70 Views

Tags: support, database, mss
