EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED BY DELL "AS IS," WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. Trademarks used in this text: Dell, the Dell logo, PowerEdge, and PowerVault are trademarks of Dell Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in the trademarks and trade names other than its own. Copyright 2006 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Best Practices for Implementing SAP on Dell/EMC Version 1
Contents
Preface .............................................................................. ix

Chapter 1  Introduction .............................................................. 1-1
    Managing complexity in SAP environments ..................................... 1-2
    Sizing SAP .................................................................. 1-2

Chapter 2  SAP NetWeaver ............................................................ 2-1
    SAP architecture ............................................................ 2-2
    The advent of SAP NetWeaver ................................................. 2-4

Chapter 3  Dell/EMC Software Solutions for SAP ...................................... 3-1
    Software overview ........................................................... 3-2
    EMC SnapView ................................................................ 3-2
    EMC MirrorView .............................................................. 3-4
        EMC MirrorView/S ........................................................ 3-5
        EMC MirrorView/A ........................................................ 3-5
    EMC Replication Manager Family .............................................. 3-6
    EMC PowerPath ............................................................... 3-8
    EMC Navisphere .............................................................. 3-9
    EMC Visual Products ......................................................... 3-10
    SAP Expert Monitor for EMC (SEME) ........................................... 3-11

Chapter 4  Dell/EMC Storage Platform Considerations for SAP ......................... 4-1
    CX-Series storage ........................................................... 4-2
    RAID levels and performance ................................................. 4-2
        When to use RAID 5 ...................................................... 4-2
        When to use RAID 1/0 .................................................... 4-3
        When to use RAID 3 ...................................................... 4-3
        When to use RAID 1 ...................................................... 4-3
    Cache ....................................................................... 4-3
        Read cache .............................................................. 4-4
        Write cache ............................................................. 4-4
    Fibre Channel drives ........................................................ 4-7
    ATA drives .................................................................. 4-7
    ATA drives and RAID levels .................................................. 4-8
        RAID group partitioning and ATA drives .................................. 4-8
        ATA drives as mirror targets and BCVs ................................... 4-8
        Mixing drive types in an array .......................................... 4-9
        LUN distribution ........................................................ 4-9
        Minimizing disk contention .............................................. 4-11
        Stripes and the stripe element size ..................................... 4-11
        RAID 5 stripe optimizations ............................................. 4-11
        Number of drives per RAID group ......................................... 4-12
        Large spindle counts .................................................... 4-12
        How many disks to use in a storage system ............................... 4-13
    RAID-level considerations ................................................... 4-14
        RAID 5 .................................................................. 4-14
        RAID 1/0 ................................................................ 4-15
        RAID 3 .................................................................. 4-15
    Binding RAID groups across buses and DAEs ................................... 4-15
        Binding across DAEs ..................................................... 4-15
        Binding across back-end buses ........................................... 4-16
        Binding with DPE drives ................................................. 4-16

Chapter 5  Database Layout Considerations ........................................... 5-1
    Striped metaLUNs ............................................................ 5-2
    Host-based striping ......................................................... 5-2
    Log and BCV placement ....................................................... 5-2
    Logical volume managers and datafile sizes .................................. 5-3
    PowerPath and device queue depth ............................................ 5-3
    Snaps, snapshots, BCVs, and clones .......................................... 5-3
        Data access ............................................................. 5-3
        Resource requirements ................................................... 5-4
        Performance considerations .............................................. 5-5

Appendix A  References and Further Reading .......................................... A-1
Figures
Figure 2-1. Two-tier SAP R/3 system configuration ................................... 2-3
Figure 2-2. Three-tier SAP R/3 system configuration ................................. 2-3
Figure 2-3. SAP NetWeaver ........................................................... 2-4
Figure 3-1. MirrorView/S ............................................................ 3-5
Figure 3-2. MirrorView/A ............................................................ 3-6
Figure 3-3. Replication Manager user interface ...................................... 3-7
Figure 3-4. EMC Navisphere Analyzer ................................................. 3-10
Figure 3-5. SAP Expert Monitor for EMC array information ............................ 3-11
Figure 3-6. SAP Expert Monitor for EMC logical volume information ................... 3-12
Figure 4-1. Write cache auto-configuration .......................................... 4-7
Tables
Table 3-1. Comparing SnapView performance and economics ............................. 3-4
Table 4-1. Random access performance of 5400 rpm ATA drives relative to
           10K rpm Fibre Channel drives ............................................ 4-8
Table 4-2. Example of RAID group and LUN numbering .................................. 4-10
Table 4-3. System high-efficiency / high-performance drive counts ................... 4-14
Table 4-4. RAID types and relative performance in failure scenarios ................. 4-15
Preface
This document describes how to exploit Dell/EMC features and functionality in SAP environments. It is intended to be a guide for making decisions in deploying the Dell/EMC family of storage products (EMC CLARiiON storage platforms), and it covers the major topics in determining storage needs for an SAP rollout.

As part of an effort to improve and enhance the performance and capabilities of its product line, Dell and EMC from time to time release revisions of their hardware and software. Therefore, some functions described in this guide may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.
Audience
This solutions guide is intended for SAP administrators, database and system administrators, system integrators, storage management personnel, and members of EMC Technical Global Services responsible for configuring and managing SAP systems on Windows, Linux, and UNIX platforms. The information in this document is based on SAP Version 4.0 and later.
Chapter 1
Introduction
This chapter presents these topics:

Managing complexity in SAP environments ............................................. 1-2
Sizing SAP .......................................................................... 1-2
Sizing SAP
Sizing the architecture to support SAP solutions is critical to the success of an SAP project. The first step in implementing SAP with Dell is to correctly size the required platform of PowerEdge servers and Dell/EMC storage. To enable maximum performance, Dell selects the appropriate configurations for a cost-effective solution. Sizing results are generated by qualified Dell and EMC personnel who understand how the SAP architecture maps onto the technical architecture of PowerEdge, PowerVault, and Dell/EMC products. Whether you are starting a new project, upgrading to a new version of SAP, or expanding your enterprise, Dell and EMC can assist in architecting the hardware necessary to support your organization's IT requirements.
Chapter 2
SAP NetWeaver
This chapter presents these topics:

SAP architecture .................................................................... 2-2
The advent of SAP NetWeaver ......................................................... 2-4
SAP architecture
This document uses the SAP NetWeaver-based solution SAP ERP Central Component (ECC), the follow-on to R/3, as the foundation for the information provided. There are many options when deploying SAP solutions, and some could require additional storage. The general requirements are discussed in detail here; for implementing optional components, the Competence Centers can assist in architecting the full solution's storage requirements through the formal sizing process.

SAP NetWeaver-based solutions have a flexible two-tier or three-tier architecture consisting of:

- Central instance
- Database instance
- Dialog instances, if required
- Front-end GUI

SAP offers the following types of standard configurations:

- Central system, in which the central instance and the database instance are on the same host
- Standalone database system, in which the central instance and the database instance are on different hosts

The database server is the host on which the database is installed. In a two-tier configuration, this server can also accommodate the central instance (the SAP instance that includes the message server and enqueue server processes). If the central instance is installed on a separate application server, the configuration is three-tiered, and the database server is called a standalone database server. Dialog instances are SAP instances that include only dialog, batch, spool, or update processes; these run on hosts called application servers.

Each of these instance hosts (servers) requires internal storage to meet the needs of the operating system and associated swap area. The majority of the storage requirements typically come from whichever server or servers host the database functionality. Other servers can require external storage if serving as part of a cluster or using boot-from-SAN technology.

Figure 2-1 on page 2-3 shows a traditional two-tier SAP ECC configuration in which the SAP central instance resides on the database server (also called a central system).
This configuration is often used for sandbox, development, and small productive environments. A three-tier configuration should be considered to support a highly available solution.
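The two-tier/three-tier distinction above can be expressed as a simple placement check. The sketch below is illustrative only; the host and instance names are hypothetical, not taken from SAP documentation.

```python
# Sketch: classify an SAP landscape as two-tier (central system) or
# three-tier (standalone database server) from instance placement.
# Host and instance names are hypothetical illustrations.

def classify_landscape(placement):
    """placement maps instance name -> host name.

    Two-tier: the central instance and the database share a host.
    Three-tier: they run on different hosts.
    """
    db_host = placement["database"]
    ci_host = placement["central_instance"]
    return "two-tier" if db_host == ci_host else "three-tier"

central_system = {"database": "hostA", "central_instance": "hostA"}
distributed = {"database": "db01", "central_instance": "app01",
               "dialog_1": "app02"}

print(classify_landscape(central_system))  # two-tier
print(classify_landscape(distributed))     # three-tier
```

Dialog instances do not affect the classification; only where the central instance sits relative to the database does.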
Figure 2-1. Two-tier SAP R/3 system configuration (SAP GUI clients connected to a single database server hosting the database and the central instance)
Figure 2-2 shows a three-tier distribution of the instances for a large SAP System (one that spans several computers and has a standalone database server).
Figure 2-2. Three-tier SAP R/3 system configuration (SAP GUI clients connected to an application server with the central instance, backed by a standalone database server)
The configuration of the system is planned in advance of the installation together with the Dell|SAP Competence Center or other SAP-knowledgeable resources. The configuration is designed using both SAP's QuickSizer and Dell's SAP sizing tools, on the basis of sizing information that reflects the system workload. Details such as the set of applications to be deployed, how intensively they are used, the number and type of users, IT practices around backup/restore, disaster recovery strategies, and system availability requirements are all necessary to architect a solution that meets each customer's unique needs.
The advent of SAP NetWeaver
SAP NetWeaver is the technical foundation of mySAP Business Suite solutions, SAP Composite Applications, partner solutions, and customer custom-built applications. It also enables Enterprise Services Architecture, SAP's blueprint for service-oriented business solutions. More information on these topics can be found on SAP's Service Marketplace at http://service.sap.com.
Chapter 3

Dell/EMC Software Solutions for SAP
This chapter presents these topics:

Software overview ................................................................... 3-2
EMC SnapView ........................................................................ 3-2
EMC MirrorView ...................................................................... 3-4
EMC Replication Manager Family ...................................................... 3-6
EMC PowerPath ....................................................................... 3-8
EMC Navisphere ...................................................................... 3-9
EMC Visual Products ................................................................. 3-10
SAP Expert Monitor for EMC (SEME) ................................................... 3-11
Software overview
The family of Dell/EMC storage platforms (EMC CLARiiON) consists of high-performance, fully redundant, highly available storage platforms providing nondisruptive component replacements and code upgrades. Dell/EMC storage platforms offer high levels of performance, data integrity, reliability, and availability. In addition to the hardware array, the following software products support SAP environments:

- EMC SnapView: A business continuance solution that allows customers to use special devices to create mirror images or snapshots of source devices. These business continuance volumes (BCVs), clones, or snapshots can be attached to the same or different hosts when they are not established with their source devices. The source devices remain online for regular I/O operation while the mirrors are created and mounted. SnapView is used within a single storage array.
- EMC MirrorView: A business continuance solution that allows specific source volumes to be mirrored to like remote target storage platforms. MirrorView is used across two storage arrays.
- EMC Navisphere: A suite of integrated software tools that allows customers to manage, provision, monitor, and configure systems, as well as control all platform replication applications, from an easy-to-use, secure, web-based management console. Navisphere-managed array applications include Navisphere Analyzer, SnapView, MirrorView, and SAN Copy.
- EMC Visual Products: VisualSAN and VisualSRM software integrates with Dell/EMC storage platforms to provide network, configuration, and performance management for mid-tier SANs.
- EMC Replication Manager Family: EMC Replication Manager makes it easy to create point-in-time replicas of databases and/or file systems residing on your existing storage arrays. Replicas can be stored on clones or snaps.
- SAP Expert Monitor for EMC (SEME): An SAP-owned and supported add-on that allows monitoring of the storage array's performance statistics from within a Basis transaction.

The latest supported version of each of these solutions is listed in the latest EMC Support Matrix (ESM) at http://www.EMC.com/interoperability.
EMC SnapView
SnapView is a storage-system-based software application that allows you to create a copy of a LUN by using either clones or snapshots. A clone, also referred to as a business continuance volume (BCV), is an actual copy of a LUN and takes time to create, depending on the size of the source LUN. A snapshot is a virtual point-in-time copy of a LUN and takes only seconds to create.
SnapView has the following important benefits:

- It allows full access to production data with modest to no impact on performance and without the risk of damaging the original data.
- For decision support or revision testing, it provides a coherent, readable and writable copy of real production data.
- For backup, it practically eliminates the time that production data spends offline or in hot backup mode, and it offloads the backup overhead from the production host to another host.

A snapshot is a virtual LUN that allows a second host to view a point-in-time copy of a source LUN. You determine the point in time when you start a SnapView session. The session keeps track of how the source LUN looks at that particular point in time. SnapView also allows you to instantly restore a session's point-in-time data back to the source LUN, if the source LUN becomes corrupt or if a session's point-in-time data is desired as the source. You can do this by using SnapView's rollback feature.

The advantage of a snapshot is that it is pointer based and does not require the same capacity as the source data; it typically requires 20 percent of the source's capacity. The disadvantage is that it still places load on the production data, because it points back to the source. The advantage of a clone is that it is independent of the production area and has its own dedicated space. Therefore, production is not interrupted by a backup of this area; the only time production is interrupted is when a clone needs to be rebuilt. The disadvantage is that a clone requires the same disk capacity as the source.
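The capacity trade-off between snapshots and clones can be sketched in a few lines. The 20 percent snapshot reserve below comes from the text above; the actual reserve needed depends on the rate of change of the source data.

```python
# Sketch: rough capacity needed for a SnapView copy of a source LUN.
# The 20% snapshot reserve is the "typically requires 20 percent of the
# source" figure from the text; real reserves depend on change rate.

def copy_capacity_gb(source_gb, copy_type, snap_reserve=0.20):
    if copy_type == "clone":
        return source_gb                 # full copy: same capacity as source
    if copy_type == "snapshot":
        return source_gb * snap_reserve  # pointer based: fraction of source
    raise ValueError("unknown copy type: %s" % copy_type)

print(copy_capacity_gb(500, "clone"))     # 500 GB
print(copy_capacity_gb(500, "snapshot"))  # 100.0 GB
```

For a 500 GB source, a clone consumes another 500 GB of dedicated space, while a snapshot typically reserves about 100 GB, at the cost of continued load on the production LUN.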
In general, use the guidelines in Table 3-1 when choosing whether to use clones/BCVs or snapshots.
Table 3-1. Comparing SnapView performance and economics

              SnapView Snap                         SnapView Clone/BCV
Performance   Supports moderate I/O workloads       Supports high I/O workloads
              and functionality requirements        and availability needs
Economics     Space-saving virtual copy; requires   Full copy; requires the same
              a fraction of the capacity of the     capacity as the source volume
              source volume
In SAP environments, SnapView allows you to refresh test instances with production data in minutes rather than days. You also can use SnapView to perform a split mirror backup (with or without BRBACKUP) that minimizes impact on the production database. Chapter 4, Dell/EMC Storage Platform Considerations for SAP, provides more detailed information on clones and snapshots.
EMC MirrorView
EMC MirrorView is a software application that maintains a copy image of a logical unit (LUN) at a separate location in order to provide for disaster recovery; that is, to let one image continue if a serious accident or natural disaster disables the other. MirrorView is typically used to create a disaster recovery site for the SAP production environment. The production image (the one mirrored) is called the primary image; the copy is called the secondary image. MirrorView supports up to two remote images, but since you operate on one image at a time, the examples in this document show a single image. Each image resides on a separate storage system.

The primary image receives I/O from a host called the production host; the secondary image is maintained by a separate storage system that can be a standalone storage array or connected to its own computer system. The same management station, which can promote the secondary image if the primary image becomes inaccessible, manages both storage systems.

In SAP environments, MirrorView also allows you to refresh test instances with production data in minutes rather than days. You can use MirrorView to perform a split mirror backup (with or without BRBACKUP) that minimizes impact on the production database.

The two implementation options for MirrorView, which depend on distance and bandwidth requirements, are:

- MirrorView/Synchronous
- MirrorView/Asynchronous
EMC MirrorView/S
MirrorView/S is primarily used in campus environments. It maintains a real-time mirror image of the production data at a remote site in mirrored volumes. MirrorView/S provides a consistent real-time view of the production data at the target site at all times as illustrated in Figure 3-1. Data on both the source and target volumes is always fully synchronized at the completion of an I/O sequence via a first-in-first-out (FIFO) queue model. All data movement is at the block level, with synchronized mirroring.
Figure 3-1. MirrorView/S (synchronous mirroring between source and target arrays over a limited distance)
The sequence of operations follows:

1. An I/O write is received from the server into the source array.
2. The I/O is transmitted to the target array using FLARE Consistency Assist.
3. The target array sends a receipt acknowledgment back to the source array.
4. An acknowledgment is presented to the server.
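The four steps above explain why MirrorView/S is confined to campus distances: the host sees the write complete only after the full round trip to the target array. A back-of-envelope latency model makes this concrete; all the millisecond figures below are illustrative assumptions, not measurements.

```python
# Sketch: host-visible write latency under synchronous mirroring.
# Steps 1-4 above mean latency = local write + round trip + remote write.
# All latency values are illustrative assumptions.

def sync_write_latency_ms(local_write_ms, one_way_link_ms, remote_write_ms):
    # 2 * one_way_link_ms covers the transmit (step 2) and the
    # acknowledgment back from the target (step 3).
    return local_write_ms + 2 * one_way_link_ms + remote_write_ms

campus = sync_write_latency_ms(1.0, 0.25, 1.0)   # short campus link
long_haul = sync_write_latency_ms(1.0, 5.0, 1.0) # long-distance link

print(campus)     # 2.5
print(long_haul)  # 12.0
```

As the link grows, round-trip time dominates every host write, which is the motivation for the asynchronous option described next.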
EMC MirrorView/A
MirrorView/A is an asynchronous replication product based on delta set technology. It periodically captures the changes on the source LUN(s) in a delta set; the delta set is then applied to the target LUN(s) at the end of every period. MirrorView/A can replicate over extended distances. You can specify the duration of the update period, from minutes to hours or days, to meet your required recovery point objective (RPO). Because MirrorView/A runs on the Dell/EMC storage platform, it does not use any host CPU cycles for replication. Unlike host-based asynchronous remote-replication solutions, MirrorView/A is independent of host operating systems, applications, and file systems. Depending on recovery-point requirements and workload characteristics, MirrorView/A can absorb the peak-load bandwidth requirement for remote replication by buffering and replicating during times of inactivity, and can operate on much lower bandwidth than that of the peak load.
MirrorView/A provides a consistent disk-based replica for fast restart at the remote site by creating a point-in-time gold copy of the secondary LUNs on the target system at the beginning of each cycle before applying the changes as shown in Figure 3-2. This provides a consistent restartable copy of source data on the target (at most, two cycle times behind the data on the source under nominal conditions) at all times.
Figure 3-2. MirrorView/A (asynchronous mirroring over extended distance, using a delta set on the source and a gold copy on the target)
The sequence of operations is as follows:

1. An I/O write is received from the server into the source array.
2. The source array sends a receipt acknowledgment to the production server.
3. A delta set is created, and changes are tracked during the MirrorView/A replication cycle, using FLARE Consistency Assist.
4. A gold copy is created at the target site to ensure that a crash-recoverable copy is available at all times in case of link failure during delta set transport.
5. The delta set is transported and applied to the target disk, the gold copy is removed, and the delta set is cleared for the next cycle.
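The cycle above can be sketched as a toy model. Note the key bandwidth property of delta sets: repeated writes to the same block within a cycle cost only one transfer at cycle end. This is an illustrative sketch, not MirrorView's implementation; block addresses and payloads are hypothetical.

```python
# Sketch: delta-set replication cycle. Host writes are acknowledged
# immediately (step 2); changes accumulate in a delta set (step 3);
# at cycle end a gold copy protects the target while the delta is
# applied (steps 4-5). Illustrative only.

class DeltaSetMirror:
    def __init__(self):
        self.target = {}   # remote image: block -> data
        self.delta = {}    # changes accumulated during this cycle

    def write(self, block, data):
        self.delta[block] = data        # track the change; host ack is immediate

    def end_cycle(self):
        gold_copy = dict(self.target)   # crash-recoverable copy during apply
        self.target.update(self.delta)  # ship and apply the delta set
        self.delta = {}                 # clear for the next cycle
        return gold_copy

m = DeltaSetMirror()
m.write(7, "v1")
m.write(7, "v2")   # overwrites v1 in the delta set: one transfer, not two
m.write(9, "x")
m.end_cycle()
print(m.target)    # {7: 'v2', 9: 'x'}
```

Only the final contents of each changed block cross the link, which is how MirrorView/A smooths peak-load bandwidth over the cycle.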
EMC Replication Manager Family

Replication Manager includes a set of specific benefits for improving availability and data protection. Some of those benefits include:
- Automation of storage management tasks: EMC Replication Manager helps users automate tasks such as replication, mounting replicas to alternate hosts, storage management, and storage associations.
- Downtime reduction techniques: Traditional restore requires the data to be read from linear tape to disk. With EMC disk-based replication technologies, the recovery and rollback can begin immediately, without having to wait until the restore is complete. Also, when restoring from a disk copy, the data can be tested first, by mounting the replica on an alternate host before performing the restore, to ensure that you are restoring data that does not include the logical error. That way, you do not need to restore more than once because the restore was taken from an incorrect backup.
- Alternate uses for replicated data: Scheduled and on-demand replicas have other uses in addition to the most obvious one, data protection. Replicas can also assist with better management of onsite resources.
- Ease of use: Replication Manager provides many features that make the product easy to use for IT professionals with little to no storage knowledge. Users do not have to be storage wizards to understand how to use this product.
As well as providing business continuance to local sites, Replication Manager uses SnapView to create and refresh copies of production data. During the implementation phase, using SnapView can:

- Reduce the risks of downtime and data corruption.
- Leverage scarce resources.
- Decrease the amount of time it takes to get your systems up, running, and stabilized.
You can use second instances during SAP upgrades and continue to use them beyond the upgrade, such as when adding new modules and functionality, testing new applications that snap on to or interface with SAP, and when new sites go live on a new version or new modules of SAP. As instances grow, some operations that users and administrators once performed on separate instances impact the performance of a consolidated instance. With local copies of real data, and not just contrived test cases, Replication Manager allows trying out new products, functionality, and business processes in a controlled environment using automation. Testing is both iterative and destructive (that is, test the process until failure, and then repeat the process again and again). Replication Manager can greatly reduce the time needed to reestablish the test environment and refresh the entire test cycle.
EMC PowerPath
High availability and high performance are inherent requirements for mission-critical SAP applications. PowerPath provides consistent and improved service levels for large and mission-critical database environments by increasing the server's ability to access data on the storage array. PowerPath moves I/O workloads across multiple channels to ensure the fastest possible I/O speed through dynamic load balancing. If many I/O requests on one path cause an imbalance, PowerPath balances the load of requests across the paths to optimize performance. PowerPath understands the nature of I/O requests and automatically determines optimum ways of distributing them.

PowerPath also allows for prioritizing storage device access. Device Priority allows one device to have a higher priority than another. In this case, channels with short queues support the high-priority devices while channels with long queues support the low-priority devices.

PowerPath offers policy-based dynamic path management that accelerates information access and provides high availability. In the rare instance of a path failure, PowerPath reissues I/O to an alternate channel, maintaining data availability and ensuring optimization of information access. For instance, if a cable is mistakenly dislodged, PowerPath Auto Detect takes all existing I/O that was going down that particular path and reroutes it to another active path. Once the cable is reattached, PowerPath's Auto Restore feature automatically restores path access, permitting data flow down the path again with no application interruption. Since PowerPath provides multipathing and dynamic path management, at a minimum the database hosts in SAP implementations should run PowerPath.
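The load-balancing and failover behavior described above can be illustrated with a least-queue-depth path selector. This is not PowerPath's actual algorithm (PowerPath supports several policies); it is a hedged sketch of the general idea of routing each I/O to the least-busy live path, with failed paths excluded. Path names are hypothetical.

```python
# Sketch: pick the live path with the fewest outstanding I/Os, in the
# spirit of dynamic load balancing and path failover described above.
# Not PowerPath's actual algorithm; paths are hypothetical HBA channels.

def pick_path(queues, alive):
    """queues: path name -> outstanding I/O count; alive: set of usable paths."""
    candidates = [p for p in queues if p in alive]
    if not candidates:
        raise RuntimeError("no live path to device")
    return min(candidates, key=lambda p: queues[p])

queues = {"hba0": 12, "hba1": 3, "hba2": 7}
print(pick_path(queues, {"hba0", "hba1", "hba2"}))  # hba1

# Path failure (e.g., a dislodged cable on hba1): I/O is reissued
# down the least-busy remaining path.
print(pick_path(queues, {"hba0", "hba2"}))          # hba2
```

When the failed path returns to the alive set, it simply becomes a candidate again, mirroring the Auto Restore behavior.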
EMC Navisphere
The Navisphere Management Suite consists of three software offerings: Workgroup, Departmental, and Enterprise. Navisphere Management Suite discovers, monitors, and configures all Dell/EMC storage arrays via a single, easy-to-use management interface. It includes agent software for managing legacy arrays, centralized event monitoring, and transferring host information to the array for display in Navisphere Manager. The command line interface (CLI) can be used to script and automate common storage management tasks. LUN masking is also provided to connect hosts properly into a SAN. Navisphere is web-based and allows for secure management of Dell/EMC storage platforms from anywhere, anytime. Navisphere is complemented by other EMC ControlCenter storage management products that provide storage network, performance, and resource management.

Navisphere consists of the following products:

- Manager: Allows graphical user interface (GUI) management and configuration of single or multiple storage platforms, and is also the center for management and configuration of system-based access and protection software, including the Access Logix, SnapView, and MirrorView applications.
- Agent: Provides the management communication path to the system and enables CLI access.
- Analyzer: Provides performance analysis for Dell/EMC storage arrays and components.

Navisphere Analyzer provides extensive access to graphs and charts, enabling users to evaluate and fine-tune their storage performance. More than 60 different performance metrics are collected from disks, storage processors (SPs), LUNs, cache, and SnapView snapshot sessions. Navisphere Analyzer provides chart information at the summary and detail levels, so you can drill down into the collected data at the level you choose, as shown in Figure 3-4 on page 3-10.
Navisphere can be launched on its own or from the ControlCenter Console. Additionally, Navisphere manages all array-based applications, such as Access Logix, MirrorView, SnapView, and SAN Copy. Navisphere runs on the array, which ensures high availability. High availability means secure, fail-safe access to the storage array. For example, in the case of a storage-processor outage, failover maintains storage array uptime. Because the software is installed on the array, a workstation CPU failure does not affect storage access.
Figure 3-6. SAP Expert Monitor for EMC logical volume information
Chapter 4
This chapter presents these topics:
CX-Series storage .......... 4-2
RAID levels and performance .......... 4-2
Cache .......... 4-3
Fibre Channel drives .......... 4-7
ATA drives .......... 4-7
ATA drives and RAID levels .......... 4-8
RAID-level considerations .......... 4-14
Binding RAID groups across buses and DAEs .......... 4-15
CX-Series storage
The family of Dell/EMC storage platforms consists of three older members, the CX200, CX400, and CX600, and three newer members, the CX300, CX500, and CX700. The CX700 has a storage processor enclosure (SPE) design. The CX700 offers a faster chipset and memory subsystem than the CX500, as well as double the disk bandwidth (it has four redundant disk buses on the back end). Bandwidth and IOPS performance of the CX700 is greater, disk for disk, than that of any other Dell/EMC storage array. The CX700 represents the best choice for the highest performance and greatest scalability. The CX500 uses a small form-factor DPE that includes dual storage processors and 15 drives in 3U of rack space. The CX500 SP offers dual CPUs (versus the single-CPU CX300 SP) and a faster chipset than the CX300. In steady random I/O environments, the CX500 performs slightly below the CX700 up to its maximum complement of 120 drives. The CX500 has a smaller write cache than the CX700, and thus does not absorb as large a burst of host writes. The CX500 is a well-balanced performer. The CX500 provides much higher bandwidth than the CX300, offering near wire speed with large, sequential I/O and Fibre Channel drives. The CX300 shares similar hardware with the older CX400, but has half the number of disk ports. It performs as well in random environments up to its limit of 60 drives. However, due to its single back-end disk bus, its bandwidth performance is modest.
- Any RDBMS tablespace where the record size is larger than 64 KB and access is random (for example, personnel records with binary content, such as photographs)
- RDBMS log activity
- Messaging applications
- Video/media
Cache
In addition to choosing which RAID level to use, the array's cache must also be configured. The Dell/EMC storage array's cache is very flexible in how it can be configured.
Read cache
For systems with modest prefetch requirements (about 80 percent of installed systems), 50 MB to 100 MB of read cache per SP is sufficient. For heavy sequential read environments (requests greater than 64 KB and sequential reads from many LUNs expected over 300 MB/s), use up to 250 MB of read cache. For extremely heavy sequential read environments (120 or more drives reading in parallel), up to 1 GB of read cache can be effectively used by the CX600.
Write cache
Set the read cache as just explained, and then allocate the remaining memory to write cache.
Caches on or off
Most workloads benefit from both read and write cache; the default for both is on. To save a very small amount of service time (a fraction of a millisecond to check the caches when a read arrives), turn off read caching on LUNs that do not benefit from it. For example, LUNs with completely random read patterns (no sequential access) do not benefit from read cache. Use Navisphere CLI scripts to turn read cache back on for such LUNs when preparing to perform backups. Write caching is beneficial in all but the most extreme write environments. Deactivating the write cache is best done using the per-LUN write-aside setting discussed later in this section.
Page size
In cases where the I/O size is very stable, you gain some benefit by setting the cache page size to the request size seen by the storage system: the file system block size or, if raw partitions are used, the application block size. In environments with varying I/O sizes, the 8 KB page size is optimal. Be careful when applying a 2 KB cache page size: sequential writes to RAID groups with misaligned stripes, and to RAID 5 groups with more than eight drives, may be adversely affected.
The HA Cache Vault option and write cache behavior
The HA Cache Vault option, found on the Cache page of the storage-system properties dialog box, is on (selected) by default. The default provides classic cache vault behavior, as outlined in CLARiiON Fibre Channel Fundamentals (on EMC's Powerlink support site).
Several failures cause the write cache to be disabled and its contents dumped to the vault. One such failure is the failure of a vault drive. If the user clears the HA Cache Vault selection, a vault disk failure does not cause the write cache to be disabled. Since a disabled write cache significantly impacts host I/O, it is desirable to keep the write cache active as much as possible. Clearing this selection, however, exposes the user to the possibility of data loss in a triple-fault situation: if a drive fails, then power is lost, and then another drive fails during the dump, it is not possible to dump the cache to the vault. The user must weigh the relative merit against the risk.
Prefetch settings
The default setting for prefetch (Variable, with segment and multiplier set to 4) produces efficient cache behavior for most workloads. Consider increasing the prefetch multiplier when both of the following conditions apply:
- I/O request sizes are small (less than 32 KB).
- Heavy sequential reads are expected.
Decrease the prefetch multiplier when:
- Host sequentiality is broken up due to use of a striped volume on the host side.
- I/O sizes close to the maximum prefetch value are used.
- Navisphere Analyzer shows that prefetches are not being used.
High and low watermarks and flushing
The Dell/EMC storage platform design has two global settings called watermarks (high and low) that work together to manage flushing. For most workloads, the defaults afford optimal behavior:
- FC Series: high watermark of 60 percent and low watermark of 40 percent.
- CX Series: high watermark of 80 percent and low watermark of 60 percent.
Increase the high watermark only if Navisphere Analyzer data indicates an absence of forced flushes during a typical period of high utilization. Decrease the high watermark if write bursts are causing enough forced flushes to impact host write workloads such that applications are affected; this reserves more cache pages to absorb bursts. The low watermark should be 20 percent lower than the high watermark.
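The watermark interaction above can be sketched as a simple hysteresis rule. This is an illustration of the described behavior under the CX Series defaults (high = 80, low = 60), not the array's actual firmware logic.

```python
# Hedged sketch of high/low watermark flushing: flushing starts above the
# high watermark and continues until the cache drains to the low watermark.

def flush_decision(dirty_pct, flushing, high=80, low=60):
    """Return True if the write cache should be flushing dirty pages."""
    if dirty_pct >= high:
        return True       # above the high watermark: start (or keep) flushing
    if dirty_pct <= low:
        return False      # at or below the low watermark: stop flushing
    return flushing       # between watermarks: keep the current state
```

Between the watermarks the decision simply carries forward, which is why the low watermark sits well below the high one: the gap controls how much cache is drained per flushing episode.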
Write-aside size
The write-aside size is a per-LUN setting. This setting specifies the largest write request that is cached. Larger I/O automatically bypasses write cache.
Write-aside helps keep large I/O from consuming write cache mirroring bandwidth, and makes it possible for the system to exceed the write cache mirroring maximum bandwidth. The cost is that I/O that bypasses the cache has a longer host response time than cached I/O. To exceed the write cache mirroring bandwidth, there must be sufficient drives to absorb the load. Furthermore, if parity RAID (RAID 5 or RAID 3) is used, ensure that:
- I/O is equal to, or a multiple of, the LUN stripe size, and
- I/O is aligned to the stripe, and
- The LUN stripe element size is 128 blocks or less.
These conditions for parity RAID are crucial and cannot be stressed enough. Getting I/O to align for effective write-aside can be difficult; if in doubt, use the write cache. The tradeoff for doing write-aside is as follows: the data written this way is not available in cache for a subsequent read, and response times for writes are longer than for cached writes. For CX Series users, it is suggested to change the write-aside size to 2048 blocks unless there is a clear need to use write-aside. The Navisphere CLI getlun command displays the write-aside size for a LUN. To change the write-aside size, use the Navisphere CLI chglun command with the -w option. In the following example, the -l 22 flag indicates the action is on LUN 22, and the write-aside size is being set so that I/Os of up to 1 MB are cached:

navicli -h ip_address chglun -l 22 -w 2048

Note that if writes bypass the write cache, the host cannot get read hits from those requests. An interesting example is an RDBMS TEMP table. The TEMP data is written and then reread; if the writes bypass the cache, they take longer than if cached. Also, subsequent rereads have to go to disk (with no possibility of a cache hit). Using requests that are small enough to ensure caching is best: writes hit the write cache and thus return more quickly, and the reread can be serviced from data still in the write cache, which is much faster than going to disk.
Pay attention to host file system buffering, which might coalesce TEMP writes into large requests.
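The write-aside threshold above reduces to simple arithmetic, sketched below. The 512-byte block size is the conventional CLARiiON block size, and the helper name is invented for this example.

```python
# Hedged sketch of the per-LUN write-aside check: writes no larger than the
# write-aside size go through write cache; larger writes bypass it.

CLARIION_BLOCK_BYTES = 512  # conventional block size on these arrays

def write_is_cached(write_bytes, write_aside_blocks=2048):
    """2048 blocks * 512 bytes = 1 MB, matching the chglun -w 2048 example."""
    return write_bytes <= write_aside_blocks * CLARIION_BLOCK_BYTES
```

With the 2048-block setting, a 1 MB TEMP-table write is still cached (and can satisfy a subsequent reread from cache), while anything larger goes straight to disk.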
Balancing cache usage between SPs
Lastly, ensure that write cache usage is balanced between SPs. The amount of cache allocated to each SP is adjusted so that if more write I/O is coming through one SP, that SP gets more than half of the write cache, as illustrated in Figure 4-1 on page 4-7. This adjustment is made every 10 minutes.
Figure 4-1. SP cache layout: each SP holds its local write cache, a mirror of the peer SP's write cache, and a local read cache (not mirrored)
Balance the storage system by ensuring that each SP owns an equal number of LUNs using the write cache.
ATA drives
ATA drives are not recommended for busy random-access environments; the ATA specification was not designed for heavily random, multithreaded workloads. ATA drives have been used in random environments where the IOPS requirement was modest. In tests of raw speed with random I/O, the ATA drives have about one-third to one-fourth the ability of Fibre Channel drives to service I/O, with the greatest difference at smaller I/O sizes and higher thread counts, as shown in Table 4-1 on page 4-8.
The 7200 rpm drives perform incrementally better than the 5400 rpm drives, but the difference is not as great as between Fibre Channel drive speeds, because the ATA drives' lack of command queuing restricts their random performance.
Table 4-1. Random access performance of 5400 rpm ATA drives relative to 10K rpm Fibre Channel drives

Threads per RAID group   2 KB to 8 KB I/O size   32 KB I/O size
-                        50%                     50%
16                       25%                     35%
As mentioned in the section "RAID levels and performance" on page 4-2, in sequential operations using RAID 3 with large I/O sizes and modest thread counts (one to four threads per disk group), the ATA drives perform close to the Fibre Channel drives.
In a system that is already experiencing some forced flushes, the synchronization or establishment of a BCV implemented on ATA drives could cause the write cache to fill. This would cause forced flushes for other LUNs being written. Similarly, with ATA drives as a synchronous mirror target, if the cache is flushing more slowly on the target than at the source (due to the slower drives), the source cache can fill. The result is an increase in response time for mirrored writes.
LUN Distribution
For the purposes of this discussion:
- Back-end bus refers to the redundant pair of Fibre Channel loops (one from each SP) by which all Dell/EMC storage arrays access disk drives. (Some Dell/EMC storage arrays have dual back-end buses, for a total of four fiber loops; some have four back-end buses.)
- A RAID group partitioned into multiple LUNs, or a LUN from such a RAID group, is referred to as a partitioned RAID group or a partitioned LUN, respectively. A RAID group with only one LUN, and that LUN, are called a dedicated RAID group and a dedicated LUN, respectively.
For efficient distribution of I/O on Fibre Channel drives, distribute LUNs across RAID groups. When doing distribution planning, take the capacity of the LUN into account: calculate the total GB of high-use storage, and distribute that capacity appropriately among the RAID groups. Additionally, balance load across storage processors. To do this, assign SP ownership: the default owner property for each LUN specifies the SP through which that LUN is normally accessed.
When partitioning ATA-drive RAID groups, keep all LUNs from each RAID group owned by a single SP.
Regarding the previous note: to avoid ownership conflicts affecting performance, it is useful (though not critical) to assign all LUNs from each ATA RAID group to a single host. Otherwise, a path-induced trespass on one host causes the ownership of its LUNs to conflict with that of others on the same RAID group. When planning for metaLUNs, note that all LUNs used in a metaLUN are trespassed to the SP that owns the base LUN; their original default owner characteristics are overwritten. Thus, when planning for metaLUNs, designating pools of SP A and SP B LUNs helps keep the balance of LUNs across SPs even.
In CX Series systems, the first five drives in the base disk enclosure are used for several internal tasks. Drives 0 through 4 are used for the cache vault. The cache vault is accessed only when the system is disabling the write cache (or re-enabling it after a fault), so vault activity has no effect on host performance unless there is a fault. The first four drives are also used for operating system boot and system configuration; once the system has booted, there is very little activity from the FLARE operating system on these drives, and this does not affect host I/O. Navisphere uses the first three drives for caching NDU data. Heavy host I/O during an NDU can cause the NDU to time out, so it is recommended that the host load on these drives be reduced to 100 IOPS per drive before an NDU commences. Also, very heavy host I/O on these drives results in increased response times for Navisphere commands. Thus, for performance-planning purposes, consider these drives as already having a LUN assigned to them. Host I/O performance is not otherwise affected by system access. Be sure to distribute the load accordingly.
Using LUN and RAID group numbering
This suggestion does not help performance but does assist in the administration of a well-designed system. Use RAID group numbering and LUN numbering to your advantage. For example, number LUNs so that all LUNs owned by SP A are even numbered and LUNs owned by SP B are odd numbered. A scheme to extend this is to use predictable RAID group numbering, and extend the RAID group number into the LUN number. This facilitates selection of LUNs for metaLUNs. The RAID group number embedded in the LUN number allows you to select LUNs from multiple RAID groups as shown in Table 4-2.
Table 4-2. Example of RAID group and LUN numbering

RAID group   LUN   Default owner
10           100   SP A
10           101   SP B
20           200   SP A
20           201   SP B
30           300   SP A
30           301   SP B
For example, if selecting LUNs with which to extend FLARE LUN 101 into a metaLUN, choose LUNs 201 and 301. MetaLUN components are all trespassed to the same SP as the base LUN, so all three LUNs belong to the same SP. Also, the I/O for the new metaLUN 101 is now distributed across three RAID groups.
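The numbering convention above can be sketched in a few lines. The exact scheme (RAID group number * 10 + index, even LUNs defaulting to SP A and odd to SP B) is one possible realization of the text's advice, shown here purely for illustration.

```python
# Sketch of the RAID group / LUN numbering convention from Table 4-2.

def lun_number(raid_group, index):
    """Embed the RAID group number in the LUN number, e.g. group 10 -> 100, 101."""
    return raid_group * 10 + index

def default_owner(lun):
    """Even LUN numbers default to SP A, odd to SP B."""
    return "SP A" if lun % 2 == 0 else "SP B"

# Extending LUN 101 (RAID group 10, SP B) into a metaLUN: take the matching
# odd-numbered LUN from RAID groups 20 and 30 so all components default to SP B.
components = [lun_number(rg, 1) for rg in (20, 30)]
print(components)              # [201, 301]
print(default_owner(101))      # SP B
```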
Environments that require sequential reads (such as online backups) concurrent with production get very good results with RAID 1/0 groups, as the read load can be distributed across many spindles. RAID 5 can also deliver good read throughput while under moderate load, such as in messaging applications. Such arrangements should be tested before deployment. Keep write loads from saturating the write cache while backing up; the higher priority of cache flushes slows read access.
Snapshot save areas and BCV LUNs
It is not wise to place snapshot cache LUNs on the same drives as the source LUNs you snap: write operations result in very high seek times and disappointing performance. The same holds true for BCV LUNs: put them on disk groups separate from the LUNs they clone.
For example, with a 12+1 RAID 5 group and a 64 KB stripe element, the stripe size is 12 * 64 KB = 768 KB. For MR3, a cache page size of 8 KB or larger must be used, as a 4 KB page is too small. When the cache is not in use, a disk group of 2+1, 4+1, or 8+1 is more likely to align the stripe size to common host I/O sizes while still maintaining aligned stripe element sizes.
Uncached writes, parity RAID, and MR3
The write cache imposes a maximum write bandwidth that the system can sustain. Bypassing the write cache allows the system to achieve higher write loads, provided enough disks are available to deliver the required performance. Uncached writes can make use of MR3 processing on parity RAID types. I/O of up to 1 MB is buffered by the host-side storage array port. For MR3 to be effective, I/O must be aligned to the RAID stripe and must be a multiple of the RAID stripe size.
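The stripe arithmetic and MR3 alignment conditions above can be expressed as a small check. This is an illustrative model under the stated assumption of 64 KB stripe elements, not array firmware logic; the function names are invented here.

```python
# Illustrative check of the MR3 conditions: an uncached write engages a full
# stripe write only if it is stripe-aligned and a whole multiple of the stripe.

def stripe_size_kb(data_disks, element_kb=64):
    """Stripe size of a parity RAID group, e.g. 12+1 RAID 5 -> 12 * 64 = 768 KB."""
    return data_disks * element_kb

def mr3_eligible(offset_kb, size_kb, data_disks, element_kb=64):
    """True if a write at offset_kb of size_kb can use MR3 full-stripe writes."""
    stripe = stripe_size_kb(data_disks, element_kb)
    return size_kb > 0 and offset_kb % stripe == 0 and size_kb % stripe == 0
```

This also shows why the smaller 2+1, 4+1, or 8+1 groups are easier to hit: their 128 KB, 256 KB, and 512 KB stripe sizes line up with common host I/O sizes, whereas 768 KB rarely does.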
A large disk count allows concurrent requests to execute independently. For workloads that are random and bursty, striped metaLUNs are ideal. MetaLUNs that share RAID groups ideally have their peaks at different times. For example, if several RDBMS servers share RAID groups, activities that cause checkpoints should not be scheduled to overlap.
Table 4-3. System high-efficiency / high-performance drive counts

Model   Small I/O, random access   Large I/O, sequential access
        (drives per system)        (drives per system)
CX700   200                        80
CX600   160                        40
CX500   120                        40
CX400   60                         20
CX300   60                         20
CX200   30                         20
These considerations are for customers whose absolute top priority is performance. As drives are added to systems, performance increases; however, the increase may not be linear.
RAID-level considerations
Most storage is implemented with RAID 1/0 or RAID 5 groups, as the redundant striped RAID types deliver the best performance and redundancy. RAID 3 is as redundant as RAID 5 (single parity disk).
RAID 5
RAID 5 is best implemented in four- to nine-disk RAID groups. Smaller groups incur a high capacity cost for parity. The main drawback of larger groups is the amount of data affected during a rebuild; the time to complete a rebuild is also longer with a larger group, though binding large RAID 5 groups across two back-end buses can minimize the effect. Table 4-4 on page 4-15 provides detailed rebuild times. A smaller group also provides a higher level of availability, since two of five drives are less likely to fail than two of ten drives. For systems where slowdowns due to disk failure could be critical, or where data integrity is critical, use a modest number of spindles per RAID group. Better yet, use RAID 1/0.
RAID 1/0
Use RAID 1/0 when availability and redundancy are paramount, as is typical for SAP production systems. By nature, mirrored RAID is more redundant than parity schemes. Furthermore, a RAID 1/0 group needs only two DAEs (one from each back-end bus) to afford the highest possible level of data availability. The advantages of RAID 1/0 over RAID 5 during a rebuild are illustrated in Table 4-4.
Table 4-4. RAID types and relative performance in failure scenarios

RAID Type   Rebuild IOPS Loss    Rebuild Time                            Impact of Second Failure during Rebuild
RAID 5      50 percent           15-to-50 percent slower than RAID 1/0   Loss of data
RAID 1/0    20-to-25 percent     15-to-50 percent faster than RAID 5
RAID 1      20-to-25 percent
RAID 3
RAID 3 groups can be built of either five or nine drives. The redundancy is equivalent to RAID 5. However, rebuilds should be a bit faster with release 16 as the rebuild code takes advantage of the large back-end request size that RAID 3 uses.
Binding parity RAID groups such that each drive is in a separate DAE does not impact performance, but it does yield a small increase in data availability. Using a parity RAID type with the drives striped vertically increases availability to over 99.999 percent. However, this is very unwieldy; if very high availability is required, use RAID 1/0.
There is absolutely no advantage in binding a RAID 1/0 group in more than two DAEs, but it certainly is not harmful in any way.
Parity groups of 10 drives or more benefit from binding across two buses, as this helps reduce rebuild times. For example, bind a ten-drive RAID 5 with five drives in one DAE, and another five drives in the next DAE above it.
Mirrored groups (RAID 1, RAID 1/0)
Binding mirrored RAID groups across two buses increases availability to over 99.999 percent and keeps rebuild times lower. This technique ensures availability of data in two (rare) cases of double failure: loss of an entire DAE, or loss of a redundant back-end bus (dual-cable failure). Bind the drives so that the primary drives for each mirror group are on the first back-end bus and the secondary (mirror) drives are on the second back-end bus. Binding across buses also has a minimal but positive impact on performance. When creating the RAID group (or defining a dedicated LUN in the bind command), use Navisphere CLI to bind across buses. When designating the disks, Navisphere CLI uses the disk ordering given in the createrg or bind command to create Primary0, Mirror0, Primary1, Mirror1, and so on, in that order. Disks are designated in Bus_Enclosure_Disk notation. Here is an example of binding the first two drives from enclosure one of each bus:

navicli -h ip_address createrg 55 0_1_0 1_1_0 0_1_1 1_1_1
However, a LUN bound with some drives in the vault enclosure (the DPE or first DAE, depending on the model) and some drives outside the vault enclosure may require a rebuild, which is a disk-intensive process. This affects performance to some degree on reboot. To avoid a rebuild on boot, follow these guidelines:
- Do not split RAID 1 groups across the vault enclosure and another DAE.
- For parity RAID (RAID 5, RAID 3), make sure at least two drives are outside the vault enclosure.
- For RAID 1/0, make sure at least one mirror (both the primary and secondary drive in a pair) is outside the vault enclosure. You can use Navisphere CLI's disk ordering in createrg, as explained earlier, to ensure at least one pair is outside the vault enclosure. Example:

navicli -h ip_address createrg 45 0_1_0 1_1_0 0_0_1 1_0_1 0_0_2 1_0_2
Note that the pair 0_1_0 and 1_1_0 is outside the vault enclosure. Alternatively, simply ensure that more than half of the drives in a RAID 1/0 group are outside the vault enclosure.
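The Primary/Mirror ordering used in the createrg examples can be generated mechanically, as sketched below. The helper is invented for illustration; disks use the Bus_Enclosure_Disk notation described earlier.

```python
# Illustrative helper: interleave bus 0 (primary) and bus 1 (mirror) disks so
# every Primary/Mirror pair in a createrg disk list spans both back-end buses.

def cross_bus_disks(enclosure, slots):
    disks = []
    for slot in slots:
        disks.append(f"0_{enclosure}_{slot}")  # PrimaryN on bus 0
        disks.append(f"1_{enclosure}_{slot}")  # MirrorN on bus 1
    return disks

print(" ".join(cross_bus_disks(1, [0, 1])))
# prints: 0_1_0 1_1_0 0_1_1 1_1_1  (the disk list from the createrg 55 example)
```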
Chapter 5
This chapter presents these topics:
Striped metaLUNs .......... 5-2
Host-based striping .......... 5-2
Log and BCV placement .......... 5-2
Logical volume managers and datafile sizes .......... 5-3
PowerPath and device queue depth .......... 5-3
Snaps, snapshots, BCVs, and clones .......... 5-3

Customers are strongly advised to read the SAP Support Notes for their platform and database combinations.
Striped metaLUNs
SAP and database vendors recommend spreading data across many spindles and controllers for parallel I/O operations. Dell/EMC storage arrays support 36 GB, 73 GB, and 181 GB drives for delivering both performance and capacity. For random OLTP workloads such as SAP, the larger drives are as appropriate as the smaller drives, since both are high-performance 10,000 rpm disks and have demonstrated excellent performance both in the lab and at customer sites. To deliver more throughput than is possible from a single volume, Dell/EMC recommends using metaLUNs for volume sets up to 500 GB in size. MetaLUN performance is equal to or better than that of host volume stripe sets. MetaLUNs with PowerPath can scale linearly beyond the capacity of a single channel to service I/O requests. During database layout, consider whether to store the database on raw devices or cooked (file system) devices. Because raw devices do not use the host's file buffer cache, some implementations may see a slight improvement in I/O performance. In Oracle environments, the decision on whether to use raw or cooked devices is based on the expertise and preferences of the system and database administrators. In UDB DB2 environments, SAP recommends using DMS DEVICE containers for large, fast-growing tables. On both database platforms, the database management tools monitor and manage at the tablespace (logical) level. To ease management of fast-growing tables, SAP and EMC recommend isolating heavily accessed or fast-growing tables into their own tablespaces.
Host-based striping
Logical volume managers offer host-based striping and allow administrators to reduce the stripe width from the metaLUN default if desired. This in effect produces double striping and has no negative impact on performance. Customers have had success in R/3 environments with stripe widths of 128 KB and higher. If using host-based striping, ensure that the stripe width is a multiple of the database block size (8192 bytes by default).
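The rule above amounts to a divisibility check, sketched below; the function name is illustrative.

```python
# Minimal check implied by the text: the host stripe width should be a whole
# multiple of the database block size (8192 bytes by default).

def stripe_width_ok(stripe_bytes, db_block_bytes=8192):
    return stripe_bytes % db_block_bytes == 0

print(stripe_width_ok(128 * 1024))   # True: 128 KB stripe, 8 KB blocks
```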
Data access
Users can access BCVs or snapshots from a secondary server. BCVs can be accessed once they are fractured from their source, and snapshots can be accessed once they've been associated with (activated to) a Snap session. A distinction between BCVs and snapshots, however, is how soon after creation the replica can be accessed. Because snapshots function on a pointer-and-copy mechanism, the processing to start a Snap session is minimal, and the Snap session data is available virtually instantaneously. BCVs, on the other hand, being full-image copies, must be allowed sufficient time for the initial copy (referred to as synchronization by the software) to occur before they can be used for the first time. Once the initial synchronization has been performed, subsequent updates (resynchronizations) are incremental: only the data that changed on the source while the BCV was fractured needs to be transferred. Regarding data availability, note that creating the snapshot (more specifically, starting the session to which the snapshot is activated) or fracturing the BCV from the source is what defines the point-in-time characteristic of the replica. As such, the application writing to the source volume must have logical consistency. Typically this is achieved by using the application's online quiesce capability, if it has one. If not, the application must be stopped (and host buffers flushed to disk) while the session is started or the BCV fractured, and then restarted afterward. Finally, regarding access to replicas, the general Dell/EMC recommendation is that users avoid accessing replicas from the production server (and, likewise, avoid accessing replicas of the same source from the same server). This recommendation stems from the fact that unless drive signatures are modified correctly, data corruption may occur. The only exception to this restriction is if users are using EMC PowerVolume (available with PowerPath 4.x) and/or a member of the EMC Replication Manager family.
Resource requirements
Since BCVs are full-image copies, they require the same amount of usable disk space as the source volume. BCVs do not, however, have to be the same RAID type or drive type as their source. Consequently, users may opt to put production data on RAID 1/0 drives for maximum protection while putting BCVs on RAID 5 drives to decrease the relative drive cost for BCVs. Furthermore, users may opt to put BCVs on ATA drives for additional cost savings. In this way, users can leverage the flexibility of Dell/EMC storage platforms to provide tiered storage and protection in their environment. The pointer-and-copy-based design of snapshots minimizes the amount of space needed to support these replicas. Since only the data that changes during the snapshot session must be accommodated (that is, copied to the reserved LUN), the reserved LUN needs only to be large enough to hold the changed data. In other words, instead of requiring another LUN the same size as the source LUN (as BCVs do), snapshots require only a fraction of additional space. How much space is required varies between implementations and depends on variables such as the rate of change of the source LUN data, how many sessions are run, and for how long. A conservative size would be 20 to 30 percent of the source LUN for a single session, increasing that figure by 10 to 20 percent for each additional session. Although these estimates are only general guidelines, they demonstrate that snapshots typically require significantly less disk capacity than BCVs. Also, as with BCVs, users may opt to put the reserved LUNs on ATA drives for greater cost savings.
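The rule of thumb above can be sketched as a simple calculator. The percentages are the document's guideline figures (the conservative upper values are used as defaults), and the function itself is purely illustrative, not a sizing tool.

```python
# Hedged sizing sketch for SnapView reserved LUNs: 20-30% of the source LUN
# for the first session, plus 10-20% per additional session.

def reserved_lun_gb(source_gb, sessions, first=0.30, per_extra=0.20):
    """Estimate reserved LUN capacity using the conservative guideline values."""
    if sessions < 1:
        return 0.0
    return source_gb * (first + per_extra * (sessions - 1))
```

For a 100 GB source LUN this gives roughly 30 GB for one session and roughly 70 GB for three, still well below the 100 GB a BCV of the same source would need.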
Performance considerations
Users may find that the performance differences between BCVs and snapshots help clarify which replica type is right for their environment.
BCV synchronization effect on source I/O
One significant benefit that BCVs offer over snapshots is that they tend to have less impact on the performance of the source volume. This is because BCVs typically exist on different drives than the source volume (EMC recommends this), so reads and writes to the source hit different spindles than reads and writes to the BCV. The only time BCVs impact source performance is while the BCV is being synchronized. After initial creation, BCVs require only incremental resynchronization (that is, only the changes since the last synchronization). To offset this impact, users are advised to schedule resynchronizations strategically so as to minimize the impact on source volume performance. Additionally, users can select the synchronization rate that best suits their environment. The high synchronization rate devotes a high percentage of the storage array's CPU to completing the synchronization as fast as possible, whereas the low synchronization rate allows many other operations to occur during the synchronization; this slows down the synchronization but also minimizes its impact. (Medium is the default, and the rate can be changed while a synchronization is in progress.)
Snap session effect on source I/O
The pointer-and-copy design of snapshots, while conserving disk space, typically affects source volume performance. This is due in part to the fact that, for data that has not yet changed on the source volume, reads to the snapshot are hitting the same spindles as reads to the source volume. Additionally, when write requests to the source generate the copy of the original data to the reserved LUN, CPU processing must be directed toward handling the copy operation, thus decreasing the CPU processing cycles available to handle I/O to the source.
Use of ATA drives
The performance of ATA drives, in general, depends on the available write cache. As long as the write cache can absorb the writes to the ATA drives, there is no noticeable performance difference between writes to Fibre Channel drives and writes to ATA drives. Once the write cache becomes full, however, this difference becomes noticeable. When using ATA drives for BCVs, users should keep in mind that while synchronizing the BCV, if the write cache cannot keep up with the writes to the BCV, the synchronization operation takes longer when performed on ATA drives as opposed to Fibre Channel drives. Additionally, once the BCV is synchronized (and until it is fractured), all writes to the source LUN are simultaneously written to the BCV. In this case, if the write cache becomes full, the writes to the BCV are processed at the slower ATA processing rate and thus impact server response time. To minimize this server impact, and also to ensure a point-in-time replica is available for recovery, users are encouraged to fracture the BCV as soon as it has been synchronized.
Likewise, some users may opt to use ATA drives for SnapView reserved LUNs. When using ATA drives for reserved LUNs, performance testing showed that unless several sessions are running on a given source, the difference between using Fibre Channel and ATA drives for the reserved LUNs is minimal. Thus, using ATA drives for reserved LUNs may provide users an additional cost savings, without a significant performance impact, in allocating storage space for their SnapView replica.
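The write-cache behavior described above can be illustrated with a toy latency model. The function name and all latency and capacity numbers here are assumptions chosen purely for illustration; they are not measured EMC figures. The idea is simply that host writes complete at cache speed until the cache fills, after which the excess is throttled to the back-end drive's destage rate.

```python
# Toy model of write-cache absorption (all numbers are assumed, for
# illustration only): writes that fit in cache complete at cache speed;
# any overflow is processed at the slower back-end (ATA) drive rate.

def avg_write_latency_ms(n_writes, cache_slots, cache_ms=0.5, ata_ms=8.0):
    """Average per-write latency for a burst of n_writes."""
    cached = min(n_writes, cache_slots)   # absorbed at cache speed
    spilled = n_writes - cached           # throttled to the drive rate
    total = cached * cache_ms + spilled * ata_ms
    return total / n_writes

# While the burst fits in cache, ATA looks as fast as Fibre Channel...
assert avg_write_latency_ms(100, cache_slots=1000) == 0.5
# ...but once the cache is full, latency degrades toward the drive rate.
assert avg_write_latency_ms(2000, cache_slots=1000) == 4.25
```

This is why fracturing the BCV promptly matters: it removes the sustained mirrored-write stream that would otherwise keep pressure on the cache and expose the slower ATA destage rate to the server.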
You can find the following technical documents and white papers on the web:
- Oracle8i Designing and Tuning for Performance: http://technet.oracle.com
- SQL Server 2000 Operations Guide: http://www.microsoft.com/technet/sql
- SAP on DB2 UDB documents: http://www.redbooks.ibm.com/ and http://www4.ibm.com/software/data/pubs/
For EMC- and partner-specific papers, please visit:
- EMC Powerlink: http://Powerlink.EMC.com
- EMC's public Technical Publications Library: http://www.EMC.com/techlib
If you have access to the SAP Service Marketplace, the following SAP documents are related reading:
- Customer-Based Upgrade: Upgrading SAP R/3 With Minimal Downtime, Case Study of a Pilot Project: http://service.sap.com/upgrade
- SAP Split Mirror Disk Backup, June 1999: http://service.sap.com/media
- R/3 System: SAP Tools to Back Up the Oracle Database, 1998: http://service.sap.com/media
- Database Layout for R/3 Installations under ORACLE: Terabyte Project, 2000: http://service.sap.com/atg
- Database Layout for SAP Installations with Informix: Terabyte Project, February 2000: http://service.sap.com/atg
- Database Layout for SAP Installations with DB2 UDB for Unix and Windows, March 2001: http://service.sap.com/atg
The following are useful websites:
- http://www.EMC.com/sap
- http://service.sap.com/ (keywords: split-mirror, upgrade, systemmanagement, data-archiving, seme, atg)
- http://help.sap.com