Abstract
TABLE OF CONTENTS
1 Preface ................................................................................................................................................... 6
9.2 SnapMirror Options....................................................................................................................................... 42
LIST OF TABLES
Table 1) Application working set size recommendations.............................................................................................. 39
LIST OF FIGURES
Figure 1) SPM main window. ......................................................................................................................................... 8
Figure 2) New sizing step 1 window. .............................................................................................................................. 9
Figure 3) New sizing step 2 window. ............................................................................................................................ 10
Figure 4) Forward sizing workload selection window. .................................................................................................. 11
Figure 5) Forward sizing pre-filter hardware configuration window. ............................................................................. 12
Figure 6) Detailed disk selection. ................................................................................................................................. 13
Figure 7) Detailed flash acceleration options. .............................................................................................................. 13
Figure 8) Forward sizing advanced configuration options. ........................................................................................... 14
Figure 9) Forward sizing results window. ..................................................................................................................... 14
Figure 10) Sizing method selection. ............................................................................................................................. 15
Figure 11) Reverse sizing choose workflow. ................................................................................................................ 16
Figure 12) Enter the number of HA pairs in clustered Data ONTAP. ........................................................................... 17
Figure 13) Reverse sizing: Flash Cache options. ......................................................................................... 17
Figure 14) Reverse sizing: adding an aggregate.......................................................................................... 18
Figure 15) Reverse sizing: choosing workload type. .................................................................................... 19
Figure 16) Reverse sizing: estimating system utilizations and latency output. ............................................. 20
Figure 17) Reverse sizing: estimating maximum throughput output............................................................. 20
Figure 18) Flash options. ............................................................................................................................................. 21
Figure 19) Flash acceleration options: Auto_Suggest. ................................................................................. 21
Figure 20) Flash acceleration options: Manual............................................................................................. 22
Figure 21) Flash Pool enabled aggregate. ................................................................................................................... 23
Figure 22) View menu containing history, saved inputs, and templates. ...................................................................... 24
1 Preface
System Performance Modeler (SPM) is NetApp's next-generation performance sizing tool, available to both
NetApp employees and partners. It is designed to simplify the process of performance sizing for NetApp FAS
systems running NetApp Data ONTAP 7G software; Data ONTAP 8.0, 8.1, and 8.2 operating in 7-Mode;
and clustered Data ONTAP 8.1 and 8.2. SPM integrates the previous legacy sizers' functionality and new
features into an intuitive user interface and step-by-step process to support multiple workload requirements
and produce recommendations that meet customers' performance needs.
This document is for NetApp employees and partners (pre- and postsales) who are interested in learning
more about how to use SPM as well as the benefits of and the theory behind SPM's development.
2.1 SPM Capabilities
SPM provides the ability to size systems using a single unified process that supports the following features
and more:
Heterogeneous workloads
Prior to SPM, standalone application-specific sizers were used to size each application to be deployed on the
system. SPM is designed to be more intuitive than the previous sizers, and it supports multiple workloads
within a single sizing by combining the various independent workload sizers into modules within the
workflow. The following application modules are supported by SPM:
SMB 1.0 and 2.x/3.0 Common Internet File System (CIFS) protocol home directories
Database applications
Custom applications
2.2
The NetApp sizing architecture has many components, and SPM makes up only a portion of the larger
picture. SPM provides an interface to various underlying sizer models and application-specific logic. It is
implemented as a web application using the NetApp web framework. SPM collects the necessary system
configuration and workload parameters to send to the application logic and lower layers of the sizing
architecture. The common sizing infrastructure (CSI) is the heart of the sizing architecture. Various models of
the subsystems of Data ONTAP, disk types, and various controller models are contained within the CSI and
are used to generate sizing results. The CSI combines real empirical data with system models to produce
realistic results.
2.3
When using SPM, it is important to remember that the tool is a guide and that the accuracy of the
recommendations can vary greatly compared to reality, depending on the quality of information input, the
deployment of additional Data ONTAP features, and how workloads applied to the system might change over
time. NetApp storage systems have a plethora of features that cannot all be modeled when they interact
despite the fact that their interoperability is supported. Therefore, the accuracy of SPM recommendations is
variable, and that should be taken into consideration. Although SPM delivers a significant degree of
automation and dramatically simplifies the performance sizing process, it does not replace user experience
and the application of best practices.
4. Navigate to View > History to review your previous sizings and rehydrate them.
5. Navigate to View > User Templates and System Templates to use predefined templates to guide your
sizing efforts.
6. Navigate to Help > Feedback to provide feedback to the performance sizing team.
Depending on the sizing use case, one of the following use case sections can help you get started quickly.
3.1
If you are attempting to size a new storage environment, choose the SPM Forward Sizing workflow. Using
this workflow requires some knowledge of the customer's workloads and applications. Completing the
Forward Sizing workflow provides recommendations for the number of nodes and number of disks, as well
as estimated utilizations of the recommended system. Refer to section 4.2 for detailed instructions on how to
complete a forward sizing.
3.2
Often a customer would like to know what would happen if additional workloads were applied or hardware
changes were made to an existing storage system. The SPM Reverse Sizing workflow using the estimated
system utilization and latency can be very helpful in this situation. Because SPM supports perfstat import,
real data from a customer environment can be used in the workflow. After the desired system configuration
and workload settings are provided, SPM can provide the estimated system utilization and latencies that
should be expected after making the potential changes. More information on using the Reverse Sizing
workflow is provided in section 4.3.
3.3
As systems become more powerful, consolidating a few systems into a single system can be ideal. The
Forward Sizing workflow can provide recommendations for consolidation into a new system; if a system is
already deployed, Reverse Sizing workflow solving for system utilization and latency might be more useful. If
the older systems being considered for consolidation are NetApp storage systems, a perfstat can be
captured and put into SPM to provide the statistics for the NetApp systems. If systems other than NetApp
systems are being considered for consolidation, the workload modules within SPM can be used to enter
additional workload details. Completing either workflow should provide an idea of what the performance of
the single system could be with multiple workloads applied to it. Additional information about forward sizing is
available in section 4.2, and reverse sizing information is available in section 4.3.
3.4
It is important to have an idea of how much of a workload a system can handle. SPM can determine the
maximum system throughput using the Reverse Sizing workflow. In this workflow, it is possible to either
upload a perfstat to supply a controller configuration or specify one manually. After completing the workflow
for the reverse sizing in this mode, SPM provides an estimated maximum throughput for the controller
configuration and workloads. Additional information on using the Reverse Sizing workflow is available in
section 4.3.
4 SPM Workflows
This section describes various aspects of the SPM workflow.
4.1
Customer. Displays the customer's name as entered in step 1 of the New Sizing wizard.
Sizing Title. Displays the sizing title as entered in step 1 of the New Sizing wizard.
Workloads. Lists all the workloads added during the sizing workflow.
View. Opens previous sizing requests located in history or saved configurations. Sizing templates are
also available from this menu.
Help. Shows where additional information can be found, as well as a way to provide feedback to the
SPM team.
Save For Later. Saves the workload and options for later use. The saved data can be retrieved from the
Saved Inputs option in the View menu.
Perform Sizing. Allows the user to actually perform the sizing. In the case of the Forward Sizing
workflow, it opens the hardware pre-filter window. In the case of the Reverse Sizing workflow, it submits
the sizing to CSI.
4.2
As previously mentioned, SPM simplifies the sizing process by offering a step-by-step workflow to enter
sizing-related information and produce sizing results. This section describes the steps of a Forward Sizing
workflow. Reverse Sizing has a similar workflow.
You can select multiple platforms by highlighting the relevant platforms using the control or shift keys
while clicking. In clustered Data ONTAP sizings, clusters are assumed to be homogeneous in
platform type and configuration, and the systems must be deployed in HA pairs. SPM does not
currently support heterogeneous clusters or configurations that are not in HA pairs.
Also in this window is a checkbox for Degraded Failover Performance OK on HA takeover event. When this
checkbox is deselected, 50% additional utilization headroom is added to the controller so that the system is
capable of completely handling its own workload and its partner's workload in case of a failover. This might
mean that the recommended solution is doubled in size.
Note:
Degraded failover performance is additive with system headroom. For example, if degraded failover
performance is not checked and headroom is set to 30%, then only 35% (0.5 * 0.7 = 0.35) of the storage
controller will be utilized, resulting in much larger controllers being needed.
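The arithmetic in the note above can be expressed as a short calculation. This is only a sketch of the stated rule; the function name and signature are mine, not SPM internals.

```python
# Sketch of how degraded-failover headroom and system headroom combine,
# per the note above. usable_fraction() is an illustrative helper, not SPM code.

def usable_fraction(headroom_pct: float, degraded_failover_ok: bool) -> float:
    """Fraction of a controller that sizing is allowed to load."""
    usable = 1.0 - headroom_pct / 100.0        # e.g. 30% headroom -> 0.70
    if not degraded_failover_ok:
        usable *= 0.5                          # reserve half for the HA partner
    return usable

# Example from the note: 30% headroom, degraded failover not allowed.
print(usable_fraction(30, False))  # 0.5 * 0.7 = 0.35
```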
You should select your disk type and the required flash acceleration options. Flash can significantly reduce
the number of disks required to achieve the desired performance, depending on the characteristics of the
workload. A convenient feature of SPM is the ability to size both with and without flash acceleration. This
provides the results in a single report and illustrates the effect of the acceleration modules on the predicted
performance of the system. When sizing for Flash Pool, SPM will identify the number of SSD drives
necessary to achieve the performance requirements of the workloads created in the subsequent steps. This
step does not place any specific workload on a Flash Pool aggregate. All options that are selected will be
modeled.
Figure 5) Forward sizing pre-filter hardware configuration window.
At this point, you can either click Calculate Sizing or select the Detailed Inputs checkbox to use nondefault
options in your sizing. Whether you decide to use some of the detailed inputs or not, click Calculate Sizing
after you are satisfied with the options you have set.
System headroom (%). Amount of CPU and other system resources that should be reserved and
unused while sizing to allow for future growth. Increasing the headroom can increase the platform
count if the supplied workloads exceed the headroom threshold, even if the workload can be
serviced with fewer systems.
Map to full shelves. Select this option if SPM should round disk requirements up to whole
shelves. The final disk count is increased if the number of disks required
for performance, capacity, and spares does not fill complete shelves.
Capacity reserve (%). Amount of disk space that should be reserved for future growth. This can
increase the number of disks required to meet capacity requirements.
Spare disks per node. The number of disks that should be added as spares.
Note:
SPM automatically calculates the number of parity drives required.
System age. As system age increases, I/O operations can become less optimal, which ultimately
increases disk utilization. Adjusting the system age can increase the number of disks required to
support the workload. The Empty System setting represents a new storage environment's age.
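The options above (spares, automatic parity, and shelf rounding) interact when computing the final disk count. The sketch below illustrates one plausible combination; the 24-disk shelf size, the RAID-DP parity rule, and the helper name are assumptions for illustration, not SPM's actual model.

```python
# Hedged sketch of how spares, RAID-DP parity, and "Map to full shelves"
# could grow a performance/capacity disk count into a final total.
import math

def total_disks(data_disks: int, spares_per_node: int, nodes: int,
                raid_group_size: int = 16, shelf_size: int = 24,
                map_to_full_shelves: bool = True) -> int:
    """Data disks -> total disks including parity, spares, and shelf rounding."""
    data_per_group = raid_group_size - 2                 # RAID-DP: 2 parity per group
    groups = math.ceil(data_disks / data_per_group)
    disks = data_disks + 2 * groups + spares_per_node * nodes
    if map_to_full_shelves:
        disks = math.ceil(disks / shelf_size) * shelf_size
    return disks

# 40 data disks, 2 spares per node, HA pair:
# 40 data + 6 parity (3 groups) + 4 spares = 50, rounded up to 3 shelves = 72.
print(total_disks(40, 2, 2))
```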
4.3
The Reverse Sizing workflow for SPM is similar to the Forward Sizing workflow. However, it answers
different questions. Forward sizing is primarily focused on new sales; reverse sizing is focused on existing
installations. SPM provides two methods of reverse sizing:
Estimate resource utilizations and latencies, which answers the "what if" questions
Estimate maximum throughput, which provides the exact opposite of forward sizing functionality
Starting with SPM 1.4, the reverse (formerly advanced) sizing workflow is supported for both 7-Mode and
clustered Data ONTAP.
When using the Reverse Sizing modes, only a single platform model can be selected.
Also, instead of solving for the number of disks, the aggregate sizes and types are user defined using the
aggregate attributes feature. There are no additional advanced parameters, such as capacity reserves or
spare disks, because aggregates are user defined and have already been determined.
Note:
You can toggle between the Forward and Reverse Sizing workflows by clicking the Toggle button at
the bottom of the SPM window.
This section describes how to complete a reverse sizing for both resource estimation and maximum
throughput calculations.
Resource Utilization and IO Latencies. This option is useful for answering a "what if" question. For
example, if a system is already deployed and a customer wants to determine what would happen if
another workload were deployed on it, using this sizing method will determine what the customer should
expect for overall system utilization and latency.
Maximum Throughput. This option is the exact opposite of a forward sizing. SPM provides an
estimated maximum throughput given a system configuration and one or more application workloads.
As shown in Figure 11, a perfstat file can be uploaded during this step.
The Flash Cache options section enables you to select no Flash Cache or specify the exact type, number,
and size of the cards.
Figure 13) Reverse sizing: Flash Cache options.
In this step, aggregate configurations must be defined. For each aggregate that is part of the system being
modeled, add an aggregate and the number of disks in the Aggregate Attributes window using the New
Aggregate button. SPM assumes that the RAID type is NetApp RAID-DP technology and uses the default
RAID group size (16). For guidance on RAID group and disk spare configurations, refer to the Storage
Subsystem Technical FAQ.
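Under the stated defaults (RAID-DP, RAID group size 16), the split between data and parity disks in an aggregate can be sketched as follows. This ignores disk right-sizing and WAFL reserve and is illustrative only; the helper name is my own.

```python
# Rough data-vs-parity split for an aggregate under the defaults above:
# RAID-DP (2 parity disks per RAID group), default group size 16.
import math

def data_disks_in_aggregate(total_disks: int, raid_group_size: int = 16) -> int:
    groups = math.ceil(total_disks / raid_group_size)
    return total_disks - 2 * groups

print(data_disks_in_aggregate(32))  # 2 full groups of 16 -> 28 data disks
```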
Figure 16) Reverse sizing: estimating system utilizations and latency output.
The results of the reverse sizing are also captured in the report available from the results window. The report
is described in detail in section 10.
5.1
When the Detailed Inputs checkbox is selected, the following additional options become available.
5.2
SPM 1.4 supports the creation of aggregates enabled by Flash Pool during reverse sizing in both Data
ONTAP 7-Mode and clustered Data ONTAP. A new field, Aggr Type, has been added to enable the selection
of the aggregate type to be created. The Aggr Type can be selected as either Normal or FlashPool Enabled.
When the Aggr Type is selected as FlashPool Enabled, two new fields will display in which the user can
specify the SSD drive type and SSD data drives.
5.3
In older versions of SPM, when both Flash Cache and Flash Pool were enabled, CSI used to combine both
of them and generate one output per platform/drive combination selected.
With the addition of the Auto_Suggest option, the CSI outputs have also changed. When both the Flash
Cache and Flash Pool options are selected, either in Auto_Suggest or Manual mode, CSI will generate two
outputs per platform/drive combination selected: one output with only Flash Cache and the other with only
Flash Pool, based on the workload characteristics.
Figure 22) View menu containing history, saved inputs, and templates.
After rehydrating, sizing reports are only available to view or to send through e-mail after the sizing is
resubmitted. Sizings can also be shared with other users using this process. The other user would access
shared sizings through User Templates in the Open menu.
Figure 23) Sizing history dialog box.
The Search button in the main window allows you to search for your own sizings based on multiple criteria.
Your history of sizings and saved sizings is also maintained under the View menu.
For each sizing that's submitted, a unique sizing ID is generated so that it can be recalled later, if needed.
This is the same sizing ID used when sending feedback.
Perfstat is a NetApp tool used to capture performance and configuration information from an existing
installation. SPM supports uploading perfstats to automatically fill in the controller and workload information
necessary for sizing. After a perfstat is uploaded, it's possible to modify the controller and workload to model
changes to an existing system. SPM allows you to submit perfstat files in two ways: online and offline.
Figure 26) Perfstat parsing options.
The offline parser is a utility based on Java that parses through the perfstat file and generates an intermediate file that SPM understands. The
parsed perfstat intermediate file is much smaller and therefore uploads more quickly compared to the
perfstat file. This method requires that the Java Runtime Environment be installed on the local machine for
the utility to work properly.
After the offline parser is selected and allowed to run, an additional HTML page loads, providing the interface
to the offline parser.
Use the following steps to parse a perfstat file.
1. Select a perfstat file to parse.
After the perfstat file is parsed, a list of controllers in the perfstat file is made available.
2. Select the desired controller configurations and then select the workload characteristics.
Figure 29) Perfstat parser system configuration dialog box.
3. Select the desired iteration and option (Min, Max, or Average IOPS and CPU).
Figure 30) Perfstat parser workload characteristics dialog box.
After the controllers and workloads are selected, the parser prompts for a location in which to save an
intermediate file (.spm file).
4. Save the file.
5. Close the parsing window after the file is saved. SPM then displays the Import workloads from Perfstat
page. Use the Browse button to browse for the intermediate file (the .spm file saved in step 4).
Figure 31) Offline perfstat parser intermediate file upload.
The information in the perfstat file to be used for the sizing should now be visible on the main SPM window
as a controller configuration and a workload configuration.
8.1
The virtual desktop infrastructure (VDI) module provides an easy-to-use interface for sizing multiple different
VDI environments. Most of the inputs required by the sizing module should be available from the customer's
proposed or existing environment. The protocol type is the storage protocol that will be used to host the
virtual machines (VMs). Each protocol has different performance characteristics. The module also supports
The basic window will have the mandatory fields, and the details will all be set to default values;
however, the user can expand the detailed section and set custom values (see Figure 32).
Free aggregate space and free snap reserve define the amount of free space that must be kept in the
aggregate and the amount that should be reserved for NetApp Snapshot copies.
The number of input/output operations per second (IOPS) is the amount of I/O each user is estimated to
produce.
The C drive size (in GB) is the size of the main VM operating system drive.
VM memory size refers to the amount of memory per VM and is used to factor in the vswap storage
requirements.
Unique data per VM is the estimated space unique to each VM. Because of cloning and deduplication
technologies, VMs might not require much additional space. This helps define additional storage
requirements for the life of the VM.
Note:
Disk capacity requirements can vary significantly depending on the cloning and Snapshot
technologies employed. Make sure to understand these differences when sizing for VDI
environments.
The read and write workload estimates are the percentage of read and write I/O expected from the VMs.
Working set size defines the percentage of the total data that is considered active and can vary
depending on the customer environment.
Read and write I/O size defines the I/O size for the workload.
Random read latency defines the maximum allowed latency for reads.
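The inputs listed above combine into capacity and performance requirements. The sketch below is a back-of-envelope simplification, not SPM's model: it assumes a single cloned base image (so the C: drive is stored once) with per-VM vswap and unique data, and all names are hypothetical.

```python
# Hedged illustration of how VDI inputs might roll up into totals.
# Assumption: cloning stores the C: drive base image once; each VM adds
# vswap (sized from VM memory) plus its unique data.

def vdi_requirements(num_vms: int, iops_per_user: float, c_drive_gb: float,
                     vm_memory_gb: float, unique_data_gb: float) -> dict:
    total_iops = num_vms * iops_per_user
    capacity_gb = num_vms * (vm_memory_gb + unique_data_gb) + c_drive_gb
    return {"iops": total_iops, "capacity_gb": capacity_gb}

# 500 desktops, 10 IOPS each, one 30 GB base image, 2 GB vswap, 5 GB unique data.
print(vdi_requirements(500, 10, 30, 2, 5))
```

Real requirements depend heavily on the cloning and Snapshot technologies employed, as the note above cautions.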
Sizing output can vary significantly based on differences found in vendors' cloning technologies, so NetApp
recommends reviewing TR-3949: NetApp and VMware View Performance Report.
Detailed instructions on the deployment characteristics of VDI technologies are beyond the scope of this
document; therefore, NetApp recommends engaging with a consulting systems engineer (CSE) in your area
when sizing any VDI opportunity.
8.2
The information necessary to fill out the workload parameters needed to complete this module can be
acquired by filling out the Exchange 2007 Mailbox Server Role Storage Requirements Calculator.
Because providing detailed instructions on the technical and deployment characteristics of Microsoft
Exchange is beyond the scope of this document, NetApp recommends engaging with a CSE in your area
when sizing any Exchange opportunity.
8.3
In addition to the manual entry Microsoft Exchange application module, an additional module has been
included in SPM that supports the upload of the Microsoft Exchange 2010/2013 Mailbox Server Role
Requirements Calculator. This application module requires the Exchange 2010 Mailbox Server Role
Requirements Calculator or Exchange 2013 Server Role Requirements Calculator spreadsheet.
When sizing for Exchange using SPM, review the recommendations and best practices in TR-4166i: NetApp
System Performance Modeler and Microsoft Exchange Server 2010.
To complete a sizing, do as follows:
1. Enter the values in the Exchange Mailbox Server Role Requirements Calculator sheet.
In the Inputs sheet of the calculator, perform the following actions:
a. Under Backup Configuration, select Yes for Database and Log Isolation Configured. Otherwise, the
database and log will be placed on the same LUN, which is against NetApp best practices and will
result in a configuration that will not work with SnapManager for Exchange.
b. If the database size is less than the NetApp best-practice minimum of 2TB, the performance of the
system is adversely affected, because each database performs its own maintenance. Make sure that
you set the Maximum Database Size Configuration value to Custom, and increase the Maximum
Database Size (GB) until the actual database size is close to 2048GB. This value can be viewed in
the LUN Requirements sheet (in the Exchange 2010 calculator) or the Volume Requirements sheet
(in the Exchange 2013 calculator), in column E under DB size + overhead.
c. Under Backup Configuration, select the VSS Hardware Provider for Backup Methodology.
d. Under Exchange Data Configuration, select Yes for Dedicated Maintenance/Restore Volume.
e. Under Exchange I/O Configuration, specify the additional I/O or server requirements.
f. Under Tier-1 User Mailbox Configuration, select Yes for Desktop Search Engines Enabled (for
Online Mode Clients).
g. Under User Mailbox Configuration, selecting Yes for Desktop Search Engines Enabled will affect the
IOPS accordingly.
h. Using the online retention settings affects the capacity calculations.
2. Upload the completed copy of the Mailbox calculator by clicking the Browse button in the basic inputs
section.
3. After the application module workflow is completed, two workloads are generated within SPM: a primary
site and a secondary site. Modify the controller constraints and use the checkboxes next to the
generated workload to model the primary and secondary workloads independently.
8.4
Various factors must be considered for each home directory deployment. The key considerations for
architecting and sizing a CIFS home directory solution include the number of users, the number of
concurrent users, the space requirement for each user, and the network load. Additional factors, such as
virus scanning of the home directories, can also affect sizing recommendations.
SPM supports these various considerations in the CIFS home directory workload module. The initial
workload parameters include:
User Type. Select from light, low, medium, or heavy; based on this, the other dependent values will be
set to default optimal values.
Home Directory Size (GB). The amount of space required for each user.
The following list describes various user parameters. The number of users along with the size of the home
directory defines the capacity requirements; the number of users along with type of user and concurrency
defines the performance requirements. The user type is based on the data transfer rate requirement per
user.
Throughput type. Choose between MB/s and IOPS, depending on what the basis of the sizing should
be. The default is MB/s.
Random read latency (ms). If latency other than 20ms is required for this deployment, enter the value
here.
Sizing options. This option allows you to further customize the configuration based on the deployment
type designated as one of the following:
Fresh installation
NetApp upgrade
The CIFS home directory sizing workload module was implemented based on TR-3564i: Sizing of
CIFS-Based Home Directories. Although this document is no longer current, refer to it for additional reference
material.
8.5
If clustered Data ONTAP 8.2.x or later is selected, sizing for SMB 2.x and 3.0 is supported. Various factors
must be considered for each home directory deployment. The key considerations for architecting and sizing
a CIFS home directory solution include the number of users, the number of concurrent users, the space
requirement for each user, and the network load. Additional factors, such as virus scanning of the home
directories, can also affect the sizing recommendations.
SPM supports these various considerations in the CIFS home directory workload module. The initial
workload parameters include:
Random Read latency (ms). Input the desired latency as specified in milliseconds. This defaults to
20ms, which is considered a reasonable value.
User Type. Sets how much usage each user will add. Select from light (3kB/s), medium (10kB/s), or
heavy (20kB/s); based on this, other dependent values will be set.
Concurrency (%). Specifies what percentage of the users are actually using the system at one
time.
Home Directory Size (GB). The amount of space required for each user.
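The per-user rates listed above (light 3kB/s, medium 10kB/s, heavy 20kB/s) combine with concurrency and home directory size roughly as sketched below. The combination logic and helper names are my assumptions for illustration, not SPM's model.

```python
# Sketch: SMB home-directory inputs -> aggregate load.
# Concurrency scales throughput (only active users generate I/O);
# capacity counts every user's home directory.

RATE_KBPS = {"light": 3, "medium": 10, "heavy": 20}

def cifs_load(users: int, user_type: str, concurrency_pct: float,
              home_dir_gb: float) -> dict:
    active = users * concurrency_pct / 100.0
    return {
        "throughput_kbps": active * RATE_KBPS[user_type],
        "capacity_gb": users * home_dir_gb,
    }

# 1,000 medium users, 30% concurrent, 5 GB home directories.
print(cifs_load(1000, "medium", 30, 5))
```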
8.6
The database workload module can be used to size most common database applications. SPM conveniently
provides a way to upload statistics files from existing database installations that can help fill in the various
workload parameters in this module. SPM supports Oracle statspack and automatic workload repository
(AWR) files as well as statspack4SQL. Select the Detailed Inputs checkbox and then click Browse to select
the relevant statistics file. This will populate most of the values in this dialog box.
Figure 38) Database statistics file import.
If a statistics file is not used, the workload characteristics can be entered manually. The performance inputs
fields are for specifying the basic characteristics of the database workload, such as throughput, operating-mix
percentages, throughput growth, and maximum acceptable latency.
App Specification Inputs fields allow you to select the database type to use for this sizing request, the
protocol to use, and the project life, which aids in the definition of growth requirements.
Figure 40) App specification inputs fields.
The file system inputs section of the database sizer helps to define the capacity requirements for the
database application. The working set size parameter defines what percentage of the capacity is active at
any point in time.
The performance of the storage system also depends on the block size used by the application. The default
is 8KB, but other sizes can be specified, if necessary.
Figure 42) DB-specific inputs field.
Because detailed instruction on the technical and deployment characteristics of the various databases
supported by SPM is beyond the scope of this document, NetApp recommends engaging with a CSE in your
area when sizing any database application.
8.7
For workloads that don't fit any of the built-in application modules or for workloads that need more granular
control, use the custom application module.
Although the custom application module supports all protocols, only one protocol can be selected per
workload. If you must consider additional protocols, simply duplicate and modify the workload. Throughput
can be defined as IOPS or MB/s. Select the appropriate throughput type, because it affects how the
throughput value is interpreted.
The required capacity field defines the size of the entire dataset and is used for storage capacity planning if
the number of disks is not determined by performance requirements. Capacity growth can be considered
when selecting capacity reserve in the Forward Sizing workflow.
The remaining fields are set to default values but can be modified by selecting the Detailed Inputs checkbox.
When specifying a random read latency, make sure that it is reasonable and also meets the requirements of
your application. Enter the desired latency in milliseconds. This defaults to 20ms, which is considered a
reasonable value. Workloads associated with aggregates that contain only hard disk drives (HDDs) have a
floor of 8ms latency. Workloads associated with aggregates that include SSDs have a floor of 3ms latency.
Latencies set below these values might return an error.
Figure 44) Custom Application module: random read latency.
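The floor check described above can be sketched as follows. The constants come from the text; the function name and its behavior on violation are assumptions, since SPM surfaces this as a sizing error rather than an exception:

```python
HDD_FLOOR_MS = 8.0  # floor for HDD-only aggregates
SSD_FLOOR_MS = 3.0  # floor for aggregates that contain SSDs

def check_read_latency(target_ms: float, aggregate_has_ssd: bool) -> None:
    """Reject random read latency targets below the achievable floor."""
    floor = SSD_FLOOR_MS if aggregate_has_ssd else HDD_FLOOR_MS
    if target_ms < floor:
        raise ValueError(
            f"{target_ms}ms is below the {floor}ms floor for this aggregate type")
```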
The I/O percent section defines the read-write breakdown of the workload. The options at the top of the I/O
percent parameters are presets that quickly adjust the percentages. If you know the breakdown of your
workload in greater detail, the percentages are adjustable. This defaults to 25% of each of the four
categories shown in Figure 45.
Figure 45) I/O percent workload parameters.
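Because the four percentages describe the whole workload, they should sum to 100. A small sketch of that constraint (the category names here are illustrative, not SPM's exact labels):

```python
def io_mix(rand_read=25.0, rand_write=25.0,
           seq_read=25.0, seq_write=25.0) -> dict:
    """Build the read/write breakdown, defaulting to 25% per category."""
    mix = {"random read": rand_read, "random write": rand_write,
           "sequential read": seq_read, "sequential write": seq_write}
    if abs(sum(mix.values()) - 100.0) > 1e-9:
        raise ValueError("I/O percentages must sum to 100")
    return mix
```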
The working set size, which is generally applicable to workloads with random I/O, defines the size of the
active portion of data. This value affects how much data can be cached, which affects how much of the
workload is forced to disk. The smaller the working set, the greater the chance that data is kept in the cache.
The working set size is often not known for many applications, but it's typically a small percentage of the
total data. Refer to Table 1 for guidelines on determining working set sizes based on workload type.
Table 1) Application working set size recommendations.
Application
Home directories
Databases (OLTP)
Microsoft Exchange
Changing the size of the I/Os in the workload can affect the number of IOPS that the system achieves.
However, even at a lower IOPS count, total data throughput in MB/s might be greater. Set these values as
close as possible to the customer's workload characteristics.
Figure 47) Custom Application module: I/O size parameters.
If Flash Pool is used for this application workload, the random overwrite percent selection is made available.
Random overwrite percent determines the impact that the random overwrite caching in SSD will have for the
workload and system performance.
Figure 48) Random overwrite percent selection.
After the application-specific portions of the workload are complete, there are additional points to consider
that are common to all applications, such as layout hints, aggregate types, and SnapMirror functionality. For
more details, refer to section 9.
9.1
Layout Options
A key strength delivered by SPM is support for extensive customization of storage layout and configuration
options, detailed in the sizing output reports. This functionality is particularly powerful in clustered Data
ONTAP opportunities, and we encourage exploration of the various layout configuration parameters.
The layout hint options within each of the application workload modules define how the workload should be
spread across the nodes in the system. Placing a workload on a shared aggregate means that more than
one workload can be placed on the same aggregate, if necessary.
Workloads can be split between the nodes on different aggregates, or they can be contained in a single
aggregate on a single node. Splitting the workload across multiple aggregates means that the workload can
be split across the nodes in the system. Selecting this box means that all the nodes in the system will get a
portion of the storage workload. This applies to both Data ONTAP 7-Mode and clustered Data ONTAP
systems. If the system configuration includes Flash Pool, the layout hints section designates whether the
workload should be placed on a Flash Pool aggregate.
Figure 49) Layout hints.
When defining workloads to be placed on a clustered Data ONTAP system, additional options are available.
The flexibility of clustered Data ONTAP allows client access to be split across nodes regardless of whether
the data is stored on one node or on many nodes.
Figure 50) Additional clustered Data ONTAP layout hints.
The following combinations, as shown in Figure 51, are possible depending on how the layout options are
set for the workload:
Distributed client access across all nodes, sent over the cluster interconnect to the node responsible
for the aggregate (B).
All four layouts are possible for clustered Data ONTAP system sizings; only A and C are applicable to Data
ONTAP 7-Mode systems.
Figure 51) Workload layout combinations.
9.2
SnapMirror Options
For each workload in a Data ONTAP 7-Mode sizing, during reverse sizing, there is an option to include
SnapMirror parameters related to the workload.
Figure 52) SnapMirror features.
9.3
Most of the workload modules ask for characteristics a customer should be able to provide. SPM translates
these user inputs into workload characteristics that it can use for sizing. These system-generated workloads
are not directly editable, but are indirectly changed by modifying the workload inputs. The system-generated
workload characteristics can be viewed after a workload is added to the current sizing. Clicking Edit under
Options to the right of a workload brings up the user-entered workload parameters, as well as the
system-generated and intermediate values. For example, completing the Exchange Workload entry produces
two system-generated workloads, as shown in Figure 53 and Figure 54.
Figure 57) Example of a node layout (clustered Data ONTAP, direct only).
Figure 58) Example of a node layout (clustered Data ONTAP distributed across cluster).
Each workload shows how much system utilization it is contributing to the node, and the overall system
utilization, under node details, reflects the sum of all the workload utilizations along with additional overhead.
The node receiving client traffic and forwarding indirectly will have a lower utilization than the node receiving
the indirect traffic and accessing the storage.
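The utilization bookkeeping described here can be approximated as a simple sum; the overhead term below is a placeholder for whatever SPM adds internally, so treat this as an illustrative model only:

```python
def node_utilization_pct(workload_contributions: list,
                         overhead_pct: float) -> float:
    """Overall node utilization = sum of per-workload contributions
    plus additional overhead (illustrative model only)."""
    return sum(workload_contributions) + overhead_pct

# Two workloads contributing 18% and 12%, plus 5% overhead -> 35%.
```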
If there are multiple workloads, the screenshots in Figure 61 and Figure 62 show how all of the workloads
are applied to the cluster, depending on how the layout hints are adjusted.
Figure 61) Node-perspective system usage (clustered Data ONTAP, direct only).
Figure 62) Node-perspective system usage (clustered Data ONTAP, distributed across cluster).
If Flash Pool is used in the sizing, this table changes to show the disks required for both the SSD and HDD
portions of the Flash Pool aggregate, as seen in Figure 64. The expected impact of read caching of Flash
Pool is shown below.
Figure 64) Flash Pool drive calculations.
The shelf details section elaborates on the number of shelves, drive types, and number of drives needed for
the configuration.
Other outputs can also be produced if they are relevant to the sizing request. Some examples, such as the
example screenshot shown in Figure 67, can include information about the size and number of Flash Cache
modules or the disk-OPS-to-host-OPS ratio.
Figure 67) Additional information outputs.
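A rough intuition for the disk-OPS-to-host-OPS ratio: reads map roughly one-to-one to disk operations, while each host write costs extra disk operations according to a write penalty. This is a classic back-of-envelope sketch, not SPM's actual model, which accounts for RAID-DP and WAFL write behavior:

```python
def disk_to_host_ops_ratio(read_pct: float, write_pct: float,
                           write_penalty: float) -> float:
    """Back-end disk ops per host op for a given read/write mix.
    write_penalty is the assumed disk ops incurred per host write."""
    return (read_pct + write_pct * write_penalty) / 100.0

# 70% reads, 30% writes, 2 disk ops per host write -> ratio of 1.3.
```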
When sizing for maximum throughput, SPM creates a table similar to the screenshot shown in Figure 70. In
addition to maximum throughput, SPM also lists the reason for the bottleneck that is limiting the performance
of the configured system.
11 Additional Resources
Additional information can be found in the SPM help menu, as well as in the SPM FAQ. For more
information, contact the sizing team or visit the Sizers Community page.
Version History
Version 0.1, March 2012: Original version
Version 1.0, March 2012
Version 1.1, September 2012
Version 1.2, December 2012
Version 1.3, December 2013
Version 1.4, January 2014
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product
and feature versions described in this document are supported for your specific environment. The NetApp
IMT defines the product components and versions that can be used to construct configurations that are
supported by NetApp. Specific results depend on each customer's installation in accordance with published
specifications.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any
information or recommendations provided in this publication, or with respect to any results that may be
obtained by the use of the information or observance of any recommendations provided herein. The
information in this document is distributed AS IS, and the use of this information or the implementation of
any recommendations or techniques herein is a customer's responsibility and depends on the customer's
ability to evaluate and integrate them into the customer's operational environment. This document and
the information contained herein may be used solely in connection with the NetApp products discussed
in this document.
© 2014 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior
written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp
logo, Go further, faster, Data ONTAP, Flash Cache, Flash Pool, RAID-DP, SnapManager, SnapMirror, and
Snapshot are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other
countries. Microsoft and SQL Server are registered trademarks of Microsoft Corporation. Oracle and Java
are trademarks of Oracle Corporation. VMware is a registered trademark of VMware, Inc. All other brands or
products are trademarks or registered trademarks of their respective holders and should be treated as such.
TR-4050-0114