
Technical Report

System Performance Modeler


Brett Albertson, Chris Wilson, NetApp
January 2014 | TR-4050

Abstract

NetApp System Performance Modeler (SPM) is the company's next-generation performance sizing tool and is available to both NetApp employees and partners.

TABLE OF CONTENTS

1  Preface
2  Overview of Performance Sizing and SPM
   2.1  SPM Capabilities
   2.2  SPM and the NetApp Sizing Architecture
   2.3  Limitations of Performance Sizing
3  SPM Quick-Start Tips
   3.1  New System Deployment Use Case
   3.2  What-If Use Case
   3.3  Consolidation Use Case
   3.4  Maximum System Performance Use Case
4  SPM Workflows
   4.1  SPM Main Window
   4.2  Performing a New Forward Sizing
   4.3  Perform a New Reverse Sizing
5  Sizing with Flash Acceleration: Flash Cache/Flash Pool
   5.1  Changes to the User Interface
   5.2  Creation of Flash Pool Enabled Aggregates
   5.3  Flash Output Changes
6  Previous Sizings and Sharing
7  Importing Workload Information
8  Using the Workload Modules
   8.1  Virtual Desktop Infrastructure Module
   8.2  Microsoft Exchange 2007 Module
   8.3  Microsoft Exchange 2010/2013 Calculator Import Module
   8.4  SMB 1.0 CIFS Home Directories Module
   8.5  SMB 2.x/3.0 CIFS Home Directories Module
   8.6  Database Applications Module
   8.7  Custom Application Module
9  Additional Sizing Options
   9.1  Layout Options
   9.2  SnapMirror Options
   9.3  System-Generated Intermediate Workloads
10 Understanding SPM Sizing Output
   10.1  Suggested Configuration and Layout
   10.2  Layout Recommendations
   10.3  Controller/Node Perspective System Utilization
   10.4  Drive Calculations and Flash Pool
   10.5  Adjustments for Drives and Other Outputs
   10.6  Inputs Section
   10.7  Failed Configurations Section
   10.8  Reverse Sizing Report
11 Additional Resources

Version History

LIST OF TABLES
Table 1) Application working set size recommendations.

LIST OF FIGURES
Figure 1) SPM main window.
Figure 2) New sizing step 1 window.
Figure 3) New sizing step 2 window.
Figure 4) Forward sizing workload selection window.
Figure 5) Forward sizing prefilter hardware configuration window.
Figure 6) Detailed disk selection.
Figure 7) Detailed flash acceleration options.
Figure 8) Forward sizing advanced configuration options.
Figure 9) Forward sizing results window.
Figure 10) Sizing method selection.
Figure 11) Reverse sizing choose workflow.
Figure 12) Enter the number of HA pairs in clustered Data ONTAP.
Figure 13) Reverse sizing: Flash Cache options.
Figure 14) Reverse sizing: adding an aggregate.
Figure 15) Reverse sizing: choosing workload type.
Figure 16) Reverse sizing: estimating system utilizations and latency output.
Figure 17) Reverse sizing: estimating maximum throughput output.
Figure 18) Flash options.
Figure 19) Flash acceleration options: Auto_Suggest.
Figure 20) Flash acceleration options: Manual.
Figure 21) Flash Pool enabled aggregate.
Figure 22) View menu containing history, saved inputs, and templates.
Figure 23) Sizing history dialog box.
Figure 24) Sizing search criteria.
Figure 25) Import workload capabilities.
Figure 26) Perfstat parsing options.
Figure 27) Online perfstat parser.
Figure 28) Perfstat file selection dialog box.
Figure 29) Perfstat parser system configuration dialog box.
Figure 30) Perfstat parser workload characteristics dialog box.
Figure 31) Offline perfstat parser intermediate file upload.
Figure 32) VDI workload inputs.
Figure 33) VDI detailed inputs.
Figure 34) Exchange 2010/2013 spreadsheet upload component.
Figure 35) Home directory specifications.
Figure 36) User profiles.
Figure 37) CIFS 2.x/3.0 home directory specifications.
Figure 38) Database statistics file import.
Figure 39) Performance inputs fields.
Figure 40) App specification inputs fields.
Figure 41) File system inputs fields.
Figure 42) DB-specific inputs field.
Figure 43) Custom Application module: basic inputs.
Figure 44) Custom Application module: random read latency.
Figure 45) I/O percent workload parameters.
Figure 46) Custom Application module: working set size parameter.
Figure 47) Custom Application module: I/O size parameters.
Figure 48) Random overwrite percent selection.
Figure 49) Layout hints.
Figure 50) Additional clustered Data ONTAP layout hints.
Figure 51) Workload layout combinations.
Figure 52) SnapMirror features.
Figure 53) Example of SnapMirror system-generated workloads.
Figure 54) Example of SPM intermediate calculations.
Figure 55) Suggested configuration and layout.
Figure 56) Example of a controller layout (7-Mode).
Figure 57) Example of a node layout (clustered Data ONTAP, direct only).
Figure 58) Example of a node layout (clustered Data ONTAP, distributed across cluster).
Figure 59) Controller perspective system usage (7-Mode).
Figure 60) Controller usage graph (7-Mode).
Figure 61) Node-perspective system usage (clustered Data ONTAP, direct only).
Figure 62) Node-perspective system usage (clustered Data ONTAP, distributed across cluster).
Figure 63) Drive calculations.
Figure 64) Flash Pool drive calculations.
Figure 65) Flash Pool impacts.
Figure 66) Drive calculation adjustments.
Figure 67) Additional information outputs.
Figure 68) Failed sizing example.
Figure 69) Reverse system utilization sizing report.
Figure 70) Reverse maximum throughput sizing report.

1 Preface
System Performance Modeler (SPM) is NetApp's next-generation performance sizing tool, available to both
NetApp employees and partners. It is designed to simplify the process of performance sizing for NetApp FAS
systems running NetApp Data ONTAP 7G software; Data ONTAP 8.0, 8.1, and 8.2 operating in 7-Mode;
and clustered Data ONTAP 8.1 and 8.2. SPM integrates the functionality of the previous legacy sizers and new
features into an intuitive user interface and step-by-step process to support multiple workload requirements
and produce recommendations to meet customers' performance needs.
This document is for NetApp employees and partners (pre- and postsales) who are interested in learning
more about how to use SPM as well as the benefits of and the theory behind SPM's development.

2 Overview of Performance Sizing and SPM


Selecting the proper system for a customer is more complicated than just selecting a system that meets
capacity requirements. Performance should be a requirement for any sale and is often more complicated to
plan for than capacity. Sizing is the process of obtaining or validating one or more system configurations that
can provide the capacity and performance resources necessary to meet customer requirements. Validation
using SPM versus educated guesses improves sales and customer confidence in the suggested system
configuration and reduces the time required to produce recommendations.

2.1 SPM Capabilities

SPM provides the ability to size systems using a single unified process that supports the following features
and more:

Forward and reverse sizing inputs and recommendations

Data ONTAP 7-Mode or clustered Data ONTAP systems

Multiple common applications

Heterogeneous workloads

Perfstat file input from existing systems

Up-to-date platform and storage performance characterizations

Saving and rehydration

Prior to SPM, standalone application-specific sizers were used to size each application to be deployed on the
system. SPM is designed to be more intuitive than the previous sizers, and it supports multiple workloads
within a single sizing by combining the various independent workload sizers into modules within the
workflow. The following application modules are supported by SPM:

Virtual desktop infrastructure (VDI)

Microsoft Exchange Server (2007 and 2010/2013)

SMB 1.0 and 2.x/3.0 Common Internet File System (CIFS) protocol home directories

Database applications

Microsoft SQL Server

Custom applications

2.2 SPM and the NetApp Sizing Architecture

The NetApp sizing architecture has many components, and SPM makes up only a portion of the larger
picture. SPM provides an interface to various underlying sizer models and application-specific logic. It is
implemented as a web application using the NetApp web framework. SPM collects the necessary system
configuration and workload parameters to send to the application logic and lower layers of the sizing
architecture. The common sizing infrastructure (CSI) is the heart of the sizing architecture. Various models of
the subsystems of Data ONTAP, disk types, and various controller models are contained within the CSI and
are used to generate sizing results. The CSI combines real empirical data with system models to produce
realistic results.

2.3 Limitations of Performance Sizing

When using SPM, it is important to remember that the tool is a guide and that the accuracy of the
recommendations can vary greatly compared to reality, depending on the quality of information input, the
deployment of additional Data ONTAP features, and how workloads applied to the system might change over
time. NetApp storage systems have many features whose interactions cannot all be modeled, even though
their interoperability is supported. Therefore, the accuracy of SPM recommendations is variable and should
be taken into consideration. Although SPM delivers a significant degree of
automation and dramatically simplifies the performance sizing process, it does not replace user experience
and the application of best practices.

3 SPM Quick-Start Tips


Use the following steps to start working quickly with SPM:
1. Access SPM using your web browser: https://spm.netapp.com.
2. Log into the SPM site by entering your NetApp credentials; partners can use the credentials used for the
NetApp Support site.
3. Click New Sizing to create a new sizing request for any of the supported controllers and applications.

Forward Sizing provides a suggested configuration based on a workload.

Reverse Sizing provides the estimated performance for a specific configuration.

4. Navigate to View > History to review your previous sizings and rehydrate them.
5. Navigate to View > User Templates and System Templates to use predefined templates to guide your
sizing efforts.
6. Navigate to Help > Feedback to provide feedback to the performance sizing team.
Depending on the sizing use case, one of the following use case sections can help you get started quickly.

3.1 New System Deployment Use Case

If you are attempting to size a new storage environment, choose the SPM Forward Sizing workflow. Using
this workflow requires some knowledge of the customer's workloads and applications. Completing the
Forward Sizing workflow provides recommendations for the number of nodes and number of disks, as well
as estimated utilizations of the recommended system. Refer to section 4.2 for detailed instructions on how to
complete a forward sizing.

3.2 What-If Use Case

Often a customer would like to know what would happen if additional workloads were applied or hardware
changes were made to an existing storage system. The SPM Reverse Sizing workflow using the estimated
system utilization and latency can be very helpful in this situation. Because SPM supports perfstat import,
real data from a customer environment can be used in the workflow. After the desired system configuration
and workload settings are provided, SPM can provide the estimated system utilization and latencies that
should be expected after making the potential changes. More information on using the Reverse Sizing
workflow is provided in section 4.3.

3.3 Consolidation Use Case

As systems become more powerful, consolidating a few systems into a single system can be ideal. The
Forward Sizing workflow can provide recommendations for consolidation into a new system; if a system is
already deployed, the Reverse Sizing workflow solving for system utilization and latency might be more useful. If
the older systems being considered for consolidation are NetApp storage systems, a perfstat can be
captured and put into SPM to provide the statistics for the NetApp systems. If systems other than NetApp
systems are being considered for consolidation, the workload modules within SPM can be used to enter
additional workload details. Completing either workflow should provide an idea of what the performance of
the single system could be with multiple workloads applied to it. Additional information about forward sizing is
available in section 4.2, and reverse sizing information is available in section 4.3.

3.4 Maximum System Performance Use Case

It is important to have an idea of how much of a workload a system can handle. SPM can determine the
maximum system throughput using the Reverse Sizing workflow. In this workflow, it is possible to either
upload a perfstat to supply a controller configuration or specify one manually. After completing the workflow
for the reverse sizing in this mode, SPM provides an estimated maximum throughput for the controller
configuration and workloads. Additional information on using the Reverse Sizing workflow is available in
section 4.3.

4 SPM Workflows
This section describes various aspects of the SPM workflow.

4.1 SPM Main Window

Figure 1 highlights some of the primary features of SPM's main window.


Figure 1) SPM main window.

The following list describes items in Figure 1:

Customer. Displays the customer's name as entered in step 1 of the New Sizing wizard.

Sizing Title. Displays the sizing title as entered in step 1 of the New Sizing wizard.

Workloads. Lists all the workloads added during the sizing workflow.

New Sizing. Initiates the workflow for a new sizing exercise.

Search. Finds previous sizings based on multiple search criteria.

View. Opens previous sizing requests located in history or saved configurations. Sizing templates are
also available from this menu.

Help. Shows where additional information can be found, as well as a way to provide feedback to the
SPM team.

Toggle. Allows toggling between workflows.

Save For Later. Saves the workload and options for later use. The saved data can be retrieved from the
Saved Inputs option in the View menu.

Perform Sizing. Allows the user to actually perform the sizing. In the case of the Forward Sizing
workflow, it opens the hardware pre-filter window. In the case of the Reverse Sizing workflow, it submits
the sizing to CSI.

4.2 Performing a New Forward Sizing

As previously mentioned, SPM simplifies the sizing process by offering a step-by-step workflow to enter
sizing-related information and produce sizing results. This section describes the steps of a Forward Sizing
workflow. Reverse Sizing has a similar workflow.

Step 1: Enter Opportunity Information


This step allows you to enter information about yourself and the potential sales opportunity, as well as
provide notes for future reference regarding this sizing. Only the customer name is required to proceed past
this step. Any information entered in this step is reflected in the sizing output. Providing a sizing title in this
step can make it easier to search for this sizing later. The information entered in this step can be changed
later by selecting Customer Information in the View menu on the main SPM window.
Figure 2) New sizing step 1 window.

Step 2: Select Sizing Workflow and Data Entry


Step 2 allows you to select either a Forward Sizing or a Reverse Sizing workflow. In this step you can also
select whether to use simple or detailed inputs.
After completing the Forward Sizing workflow and submitting the configuration, SPM provides
recommendations for the required number of disks and controllers and information on how data should be
spread across the controller aggregates.
The Reverse Sizing workflow has additional options depending on the information you're interested in
discovering. Details about Reverse Sizing are provided in section 4.3.
Figure 3) New sizing step 2 window.

Step 3: Choose a Workload Type


This step allows you to select an application workload from the list of supported application types. If your
workload does not exactly match one of the listed workloads, select Custom Application. The workload type
can also be loaded from different data sources such as a perfstat, your previous inputs, or a template. If a
perfstat is used, the Custom Application workload is selected by default and cannot be changed. The data
extracted from a perfstat is used to populate the parameters in Custom Application workloads.

Figure 4) Forward sizing workload selection window.

Step 4: Enter Workload Information


The workload information you enter depends on the application selected. Details about each application
module are available in section 8. This step can be repeated for as many workloads as required.

Step 5: Perform Sizing


After you have added all of the workloads, submit your sizing request by clicking Perform Sizing.

Step 6: Prefilter Hardware Configuration


After selecting Perform Sizing, a new window will pop up, which allows you to select the desired version and
operating mode of Data ONTAP, as well as various controller, disk, and flash specifications.
Note:

You can select multiple platforms by highlighting the relevant platforms using the control or shift keys
while clicking. In clustered Data ONTAP sizings, clusters are assumed to be homogeneous in
platform type and configuration, and the systems must be deployed in HA pairs. SPM does not
currently support heterogeneous clusters or configurations that are not in HA pairs.

Also in this window is a checkbox for Degraded Failover Performance OK on HA takeover event. When this
checkbox is deselected, 50% additional utilization headroom is added to the controller so that the system is
capable of completely handling its own workload and its partner's workload in case of a failover. This might
mean that the recommended solution is doubled in size.
Note:

Degraded failover performance is additive with system headroom. For example, if degraded failover
performance is not checked and headroom is set to 30%, then only (.5 * .7 = .35) 35% of the storage
controller will be utilized, resulting in much larger controllers being needed.
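
As a quick illustration of that arithmetic, the following minimal Python sketch reproduces the example in the note. It is an illustration of the documented behavior, not SPM's internal code; the function name and defaults are hypothetical.

```python
def effective_target_utilization(headroom_pct, degraded_failover_ok):
    """Illustrative combination of system headroom and the HA failover reserve.

    headroom_pct: system headroom percentage (for example, 30 for 30%).
    degraded_failover_ok: True if degraded performance on takeover is acceptable.
    """
    usable = 1.0 - headroom_pct / 100.0      # 30% headroom leaves 70% usable
    if not degraded_failover_ok:
        usable *= 0.5                        # reserve half for the HA partner's workload
    return usable

# Matches the example in the note: 30% headroom, failover reserve required
print(effective_target_utilization(30, False))   # 0.35, that is, a 35% target utilization per controller
```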


You should select your disk type and the required flash acceleration options. Flash can significantly reduce
the number of disks required to achieve the desired performance, depending on the characteristics of the
workload. A convenient feature of SPM is the ability to size both with and without flash acceleration. This
provides the results in a single report and illustrates the effect of the acceleration modules on the predicted
performance of the system. When sizing for Flash Pool, SPM will identify the number of SSD drives
necessary to achieve the performance requirements of the workloads created in the subsequent steps. This
step does not place any specific workload on a Flash Pool aggregate. All options that are selected will be
modeled.
Figure 5) Forward sizing prefilter hardware configuration window.

At this point, you can either click Calculate Sizing or select the Detailed Inputs checkbox to use nondefault
options in your sizing. Whether you decide to use some of the detailed inputs or not, click Calculate Sizing
after you are satisfied with the options you have set.

Detailed Inputs: Disk Shelf and Drives


In this section, you can select the exact disk shelf and disks that you would like to use.


Figure 6) Detailed disk selection.

Detailed Inputs: Flash


Selecting the Flash Acceleration Options checkbox in the Pre-Filter Hardware Configuration window enables
the Detailed Inputs dialog box.
The Auto_Suggest feature has been newly added to the flash acceleration options. This is the default option,
and when this option is selected, the CSI will determine the optimal number, type, and capacity of the Flash
Cache card and also the optimal number and capacity for Flash Pool, based on the workload
characteristics.
You can manually select the type of Flash Cache card, the number of cards, and the capacity of each card, if
you prefer not to use the automatic suggestion option.
Also, you can manually select the type of SSD disk and the amount of Flash Pool capacity per
controller, if you prefer not to use the automatic suggestion option.
Figure 7) Detailed flash acceleration options.

Detailed Inputs: Advanced Options


Advanced configuration options can also be set in this step.
Note:

Changing these configuration options can affect the sizing results.

System headroom (%). Amount of CPU and other system resources that should be reserved and
unused while sizing to allow for future growth. Increasing the headroom can increase the platform
count if the supplied workloads exceed the headroom threshold, even if the workload can be
serviced with fewer systems.

Map to full shelves. Select this option if SPM should produce disk requirements equal to the
number of disks in a full shelf. The final disk count can be increased if the number of disks required
for performance, capacity, and spares does not equal full shelves.

Capacity reserve (%). Amount of disk space that should be reserved for future growth. This can
increase the number of disks required to meet capacity requirements.

Spare disks per node. The number of disks that should be added as spares.
Note: SPM automatically calculates the number of parity drives required.

System age. As system age increases, I/O operations can become less optimal, which ultimately
increases disk utilization. Adjusting the system age can increase the number of disks required to
support the workload. The Empty System setting represents a new storage environment's age.

Figure 8) Forward sizing advanced configuration options.
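
To make the interaction of these options concrete, the following minimal sketch shows one way such adjustments are commonly combined. It is an illustration under stated assumptions (24-disk shelves, the larger of the performance and capacity requirements governing the data-disk count, and parity drives omitted), not SPM's actual algorithm.

```python
import math

def adjusted_disk_count(performance_disks, capacity_disks, capacity_reserve_pct=0,
                        spares_per_node=2, nodes=2, disks_per_shelf=24,
                        map_to_full_shelves=True):
    """Illustrative disk-count adjustment; the 24-disk shelf and max() rule are assumptions."""
    # Capacity reserve inflates the capacity-driven requirement for future growth.
    capacity_disks = math.ceil(capacity_disks * (1 + capacity_reserve_pct / 100.0))
    # Assume the larger of the performance and capacity requirements governs the data-disk count.
    data_disks = max(performance_disks, capacity_disks)
    total = data_disks + spares_per_node * nodes          # parity drives omitted for brevity
    if map_to_full_shelves:
        total = math.ceil(total / disks_per_shelf) * disks_per_shelf
    return total

print(adjusted_disk_count(performance_disks=40, capacity_disks=36, capacity_reserve_pct=20))
# capacity: ceil(36 * 1.2) = 44 data disks; plus 4 spares = 48, exactly two full 24-disk shelves
```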

Step 7: Review SPM Output


After SPM completes the sizing, a results summary window appears displaying the sizing output. The top
portion of the results window shows the various configurations that were sized and are included in the report.
To view the complete sizing report, select View Report. Details about the sections in the sizing report are
available in section 10. If there are any issues with the output of SPM, clicking Feedback prepopulates a
feedback message with the sizing ID. Any questions or comments submitted through the feedback
mechanism are sent to the NetApp SPM team.
Figure 9) Forward sizing results window.


4.3 Perform a New Reverse Sizing

The Reverse Sizing workflow for SPM is similar to the Forward Sizing workflow. However, it answers
different questions. Forward sizing is primarily focused on new sales; reverse sizing is focused on existing
installations. SPM provides two methods of reverse sizing:

Estimate resource utilizations and latencies, which answers "what if" questions

Estimate maximum throughput, which provides functionality that is the exact opposite of a forward sizing

Starting with SPM 1.4, the reverse (formerly advanced) sizing workflow is supported for both 7-Mode and
clustered Data ONTAP.
When using the Reverse Sizing modes, only a single platform model can be selected.
Also, instead of solving for the number of disks, the aggregate sizes and types are user defined using the
aggregate attributes feature. There are no additional advanced parameters, such as capacity reserves or
spare disks, because aggregates are user defined and have already been determined.
Note:

You can toggle between the Forward and Reverse Sizing workflows by clicking the Toggle button at
the bottom of the SPM window.

Figure 10) Sizing method selection.

This section describes how to complete a reverse sizing for both resource estimation and maximum
throughput calculations.

Step 1: Enter Opportunity Information


After clicking New Sizing, this step is the same as in a forward sizing. As shown in Figure 2, the window
allows you to enter information about the opportunity, which is then included in the output report and enables
easier searching.

Step 2: Select Sizing Workflow and Data Entry


In a reverse sizing, you must first select whether the inputs are to be manually entered or loaded from a
perfstat file. Then choose the attribute for which you want to estimate performance. The
selection depends on the desired end result. The workflow for both options is similar, although the
outputs are different.

Resource Utilization and IO Latencies. This option is useful for answering a "what if" question. For
example, if a system is already deployed and a customer wants to determine what would happen if
another workload were deployed on it, using this sizing method will determine what the customer should
expect for overall system utilization and latency.

Maximum Throughput. This option is the exact opposite of a forward sizing. SPM provides an
estimated maximum throughput given a system configuration and one or more application workloads.

As shown in Figure 11, a perfstat file can be uploaded during this step.


Figure 11) Reverse sizing choose workflow.

Step 3: Select Controller Configurations


Unlike with a forward sizing, only a single controller platform model can be selected in a reverse sizing. Also,
because SPM cannot determine the layout of the workloads, no CPU headroom or disk count parameters
are required. Flash Cache modules can also be included.
If this is a clustered Data ONTAP configuration, the number of HA pairs needs to be set as shown in Figure
12.


Figure 12) Enter the number of HA pairs in clustered Data ONTAP.

The Flash Cache options section enables you to select no Flash Cache or specify the exact type, number,
and size of the cards.
Figure 13) Reverse sizing: Flash Cache options.

In this step, aggregate configurations must be defined. For each aggregate that is part of the system being
modeled, add an aggregate and the number of disks in the Aggregate Attributes window using the New
Aggregate button. SPM assumes that the RAID type is NetApp RAID-DP technology and uses the default
RAID group size (16). For guidance on RAID group and disk spare configurations, refer to the Storage
Subsystem Technical FAQ.


Figure 14) Reverse sizing: adding an aggregate.
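
For reference, the relationship between total disks, parity drives, and data drives under those defaults can be sketched as follows. RAID-DP uses two parity drives per RAID group; this is standard RAID-DP arithmetic rather than SPM code, and the example disk count is hypothetical.

```python
import math

def raid_dp_parity_drives(total_disks, raid_group_size=16):
    """Parity drives consumed by RAID-DP for an aggregate of total_disks.

    Assumes the default RAID group size of 16 and two parity drives per RAID group.
    """
    raid_groups = math.ceil(total_disks / raid_group_size)
    return 2 * raid_groups

disks = 32                                  # disks entered for the aggregate
parity = raid_dp_parity_drives(disks)       # 2 RAID groups x 2 parity drives = 4
print(f"{disks} disks -> {parity} parity, {disks - parity} data drives")
```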

Step 4: Choose a Workload Type


In this step, you can select an application workload from the list of supported application types. If your
workload doesn't exactly match one of the listed workloads, select Custom Application. If you are solving for
maximum throughput, the only option available is the Custom Application workload module.


Figure 15) Reverse sizing: choosing workload type.

Step 5: Enter Workload Information


The workload information you enter depends on the application selected. Details about each application
module are available in section 8. This step can be repeated for as many workloads as needed.

Step 6: Perform Sizing or Save for Later


Now you can perform the sizing or save and continue later, just as with the Forward Sizing workflow. When
you're ready to submit, click Perform Sizing on the bottom of the main window.

Step 7: Review SPM Output


The output of SPM at this point depends on the option selected for reverse sizing. Figure 16 and Figure 17
show examples of both types of output.


Figure 16) Reverse sizing: estimating system utilizations and latency output.

Figure 17) Reverse sizing: estimating maximum throughput output.


The results of the reverse sizing are also captured in the report available from the results window. The report
is described in detail in section 10.

5 Sizing with Flash Acceleration: Flash Cache and Flash Pool


Starting with SPM 1.4, the flash acceleration options sections have been updated. An Auto_Suggest
option has been added for both Flash Cache and Flash Pool, in addition to the option to manually
specify the type and quantity.

5.1 Changes to the User Interface

Flash Acceleration Options


When selecting the Pre-Filter Hardware Configuration option in a Forward Sizing workflow, the flash
acceleration options are presented as checkboxes to enable or disable SPM from analyzing each one. Both
Flash Cache and Flash Pool default to Auto_Suggest, which allows CSI to determine the optimal number,
type, and capacity of Flash Cache and also the optimal number and capacity for Flash Pool, based on the
workload characteristics.
Figure 18) Flash options.

When the Detailed Inputs checkbox is selected, the following more detailed options become available.

Flash Acceleration Option: Auto_Suggest


The flash acceleration options section of the SPM wizard now includes the Auto_Suggest option, in addition
to the existing No and Manual options (Figure 19).
The Auto_Suggest option is enabled by default. When this option is enabled, CSI will determine the optimal
number, type, and capacity of Flash Cache and also the optimal number and capacity for Flash Pool, based
on the workload characteristics.
CSI will generate two outputs per platform/drive combination selected: one output with only Flash Cache and
the other with only Flash Pool, based on the workload characteristics.
Figure 19) Flash acceleration options: Auto_Suggest.


Flash Acceleration Option: No


If you do not want flash options in the output, do not select the Flash Cache and Flash Pool checkboxes. CSI
will generate only one output per platform/drive combination selected, without Flash Cache and Flash Pool in
the suggested configuration.

Flash Acceleration Option: Manual


The Manual mode is available for the user to specify, if so desired, the number, type, and capacity of Flash
Cache and/or the number and the type of the SSD drives for Flash Pool to be used for the workloads.
When both the Flash Cache and Flash Pool manual modes are selected and inputs provided, CSI will
generate two outputs per platform/drive combination selected: one output with only Flash Cache and the
other output with only Flash Pool, based on the workload characteristics.
Figure 20) Flash acceleration options: manual.

5.2 Creation of Aggregates Enabled by Flash Pool

SPM 1.4 supports the creation of aggregates enabled by Flash Pool during reverse sizing in both Data
ONTAP 7-Mode and clustered Data ONTAP. A new field, Aggr Type, has been added to enable the selection
of the aggregate type to be created. The Aggr Type can be selected as either Normal or FlashPool Enabled.
When the Aggr Type is selected as FlashPool Enabled, two new fields will display in which the user can
specify the SSD drive type and SSD data drives.


Figure 21) Aggregate enabled by Flash Pool.

5.3 Flash Output Changes

In older versions of SPM, when both Flash Cache and Flash Pool were enabled, CSI used to combine both
of them and generate one output per platform/drive combination selected.
With the addition of the Auto_Suggest option, the CSI outputs have also changed. When both the Flash
Cache and Flash Pool options are selected, either in Auto_Suggest or Manual mode, CSI will generate two
outputs per platform/drive combination selected: one output with only Flash Cache and the other with only
Flash Pool, based on the workload characteristics.

6 Previous Sizings and Sharing


SPM maintains a log of previously completed sizings that can be accessed for rehydration in a few ways.
Rehydrating a sizing brings back the configuration and allows you to modify and resubmit the sizing. This
feature is useful for fine-tuning existing sizings or for using existing sizings as a template from which to start.
The history can be found in the View menu, as shown in Figure 22. Sizing configurations, including controller
specifications and workloads, can also be saved and retrieved later using the View menu.
The rehydration option is available by right-clicking a sizing found by using the Search option or in your
history of sizings, as shown in Figure 22.


Figure 22) View menu containing history, saved inputs, and templates.

After rehydrating, sizing reports are only available to view or to send through e-mail after the sizing is
resubmitted. Sizings can also be shared with other users using this process. The other user would access
shared sizings through User Templates in the Open menu.
Figure 23) Sizing history dialog box.

The Search button in the main window allows you to search for your own sizings based on multiple criteria.
Your history of sizings and saved sizings is also maintained under the View menu.
For each sizing that's submitted, a unique sizing ID is generated so that it can be recalled later, if needed.
This is the same sizing ID used when sending feedback.


Figure 24) Sizing search criteria.

7 Importing Workload Information


Workload information can be gathered and imported in a variety of ways. SPM supports importing controller
and workload information from perfstat files as well as workload information from previous or saved sizings.
Figure 25 shows the import capabilities of SPM from the main window.
Figure 25) Import workload capabilities.


Perfstat is a NetApp tool used to capture performance and configuration information from an existing
installation. SPM supports uploading perfstats to automatically fill in the controller and workload information
necessary for sizing. After a perfstat is uploaded, it's possible to modify the controller and workload to model
changes to an existing system. SPM allows you to submit perfstat files in two ways: online and offline.
Figure 26) Perfstat parsing options.

Online Perfstat Parsing


For online perfstat parsing, NetApp internal users can provide a valid internal path to a perfstat file in their
home directory. With this method, the perfstat file is fetched by SPM without the need to upload a large file.
For all users, a perfstat file can be uploaded directly into SPM. If the perfstat file is large, consider using the
offline perfstat parser.


Figure 27) Online perfstat parser.

Offline Perfstat Parsing


If a perfstat file is stored locally, the offline parser can be used. The offline perfstat parser uses a utility based
on Java that parses through the perfstat file and generates an intermediate file that SPM understands. The
parsed perfstat intermediate file is much smaller and therefore uploads more quickly compared to the
perfstat file. This method requires that the Java Runtime Environment be installed on the local machine for
the utility to work properly.
After the offline parser is selected and allowed to run, an additional HTML page loads, providing the interface
to the offline parser.
Use the following steps to parse a perfstat file.
1. Select a perfstat file to parse.


Figure 28) Perfstat file selection dialog box.

After the perfstat file is parsed, a list of controllers in the perfstat file is made available.
2. Select the desired controller configurations and then select the workload characteristics.
Figure 29) Perfstat parser system configuration dialog box.

3. Select the desired iteration and option (Min, Max, or Average IOPS and CPU).
Figure 30) Perfstat parser workload characteristics dialog box.


After the controllers and workloads are selected, the parser prompts for a location in which to save an
intermediate file (.spm file).
4. Save the file.
5. Close the parsing window after the file is saved. SPM then displays the Import workloads from Perfstat
page. Use the Browse button to browse for the intermediate file (the .spm file generated in step 3).
Figure 31) Offline perfstat parser intermediate file upload.

The information in the perfstat file to be used for the sizing should now be visible on the main SPM window
as a controller configuration and a workload configuration.

8 Using the Workload Modules


The following sections describe how to use each of the application-specific modules. Application-specific
variables are defined in each section; how the variables affect the sizing output is also described.
Some elements are common among all of the sizing modules, such as layout and NetApp SnapMirror
functionality. These elements are covered in the Additional Sizing Options section.

8.1 Virtual Desktop Infrastructure Module

The virtual desktop infrastructure (VDI) module provides an easy-to-use interface to size for multiple different
VDI environments. Most of the inputs require that sizing information be available from the customer's
proposed or existing environment. The protocol type is the storage protocol that will be used to host the
virtual machines (VMs). Each protocol has different performance characteristics. The module also supports
VMware, Citrix, and Microsoft hypervisors as well as various cloning methods.


The following list describes the fields of the VDI interface.

The basic window will have the mandatory fields, and the details will all be set to default values;
however, the user can expand the detailed section and set personal values (see Figure 32).

Free aggregate space and free snap reserve define the amount of free space that must be kept in the
aggregate and the amount that should be reserved for NetApp Snapshot copies.

The number of input/output operations per second (IOPS) is the amount of I/O each user is estimated to
produce.

The C drive size (in GB) is the size of the main VM operating system drive.

VM memory size refers to the amount of memory per VM and is used to factor in the vswap storage
requirements.

Unique data per VM is the estimated space unique to each VM. Because of cloning and deduplication
technologies, VMs might not require much additional space. This helps define additional storage
requirements for the life of the VM.

Note:

Disk capacity requirements can vary significantly depending on the cloning and Snapshot
technologies employed. Make sure to understand these differences when sizing for VDI
environments.

The read and write workload estimates are the percentage of read and write I/O expected from the VMs.

Working set size defines the percentage of the total data that is considered active and can vary
depending on the customer environment.

Read and write I/O size defines the I/O size for the workload.

Random read latency defines the maximum allowed latency for reads.


Figure 32) VDI workload inputs.


Figure 33) VDI detailed inputs.
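
As a rough cross-check of the inputs described above, the totals that drive a VDI sizing can be approximated as follows. This is a hypothetical back-of-the-envelope sketch using the listed parameters, not SPM's model, and it ignores the clone and deduplication savings noted above.

```python
def vdi_estimate(num_vms, iops_per_user, read_pct, c_drive_gb, vm_memory_gb, unique_data_gb):
    """Rough per-pool totals from the VDI inputs described above (illustrative only)."""
    total_iops = num_vms * iops_per_user
    read_iops = total_iops * read_pct / 100.0
    write_iops = total_iops - read_iops
    # Capacity per VM: OS (C) drive + vswap (driven by VM memory) + unique data per VM.
    # With cloning and deduplication the on-disk footprint is usually far smaller.
    capacity_gb = num_vms * (c_drive_gb + vm_memory_gb + unique_data_gb)
    return total_iops, read_iops, write_iops, capacity_gb

# Hypothetical pool: 500 desktops, 12 IOPS each, 20% reads, 30 GB C drive, 2 GB RAM, 2 GB unique data
print(vdi_estimate(500, 12, 20, 30, 2, 2))
# (6000, 1200.0, 4800.0, 17000): 6,000 IOPS (mostly writes) and 17,000 GB of logical capacity
```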

Sizing output can vary significantly based on differences found in vendors' cloning technologies, so NetApp
recommends reviewing TR-3949: NetApp and VMware View Performance Report.
Detailed instructions on the deployment characteristics of VDI technologies are beyond the scope of this
document; therefore, NetApp recommends engaging with a consulting systems engineer (CSE) in your area
when sizing any VDI opportunity.

8.2 Microsoft Exchange 2007 Module

The information necessary to fill out the workload parameters needed to complete this module can be
acquired by filling out the Exchange 2007 Mailbox Server Role Storage Requirements Calculator.
Because providing detailed instructions on the technical and deployment characteristics of Microsoft
Exchange is beyond the scope of this document, NetApp recommends engaging with a CSE in your area
when sizing any Exchange opportunity.

8.3 Microsoft Exchange 2010/2013 Calculator Import Module

In addition to the manual entry Microsoft Exchange application module, an additional module has been
included in SPM that supports the upload of the Microsoft Exchange 2010/2013 Mailbox Server Role
Requirements Calculator. This application module requires the Exchange 2010 Mailbox Server Role
Requirements Calculator or Exchange 2013 Server Role Requirements Calculator spreadsheet.
When sizing for Exchange using SPM, review the recommendations and best practices in TR-4166i: NetApp
System Performance Modeler and Microsoft Exchange Server 2010.
To complete a sizing, do as follows:
1. Enter the values in the Exchange Mailbox Server Role Requirements Calculator sheet.
In the Inputs sheet of the calculator, perform the following actions:
a. Under Backup Configuration, select Yes for Database and Log Isolation Configured. Otherwise, the
database and log will be placed on the same LUN, which is against NetApp best practices and will
result in a configuration that will not work with SnapManager for Exchange.
b. If the database size is less than the NetApp best practice of 2TB minimum, it will adversely affect the
performance of the system, because each database performs maintenance. Make sure that you set
the Maximum Database Size Configuration value to Custom. Set the Maximum Database Size (GB)
to a value greater than 2TB until the actual database size is close to 2048GB. This value can be
viewed in the LUN Requirements (in Exchange 2010 calculator) and Volume Requirements (in
Exchange 2013 calculator) and in column E under DB size + overhead.
c. Under Backup Configuration, select the VSS Hardware Provider for Backup Methodology.

d. Under Exchange Data Configuration, select Yes for Dedicated Maintenance/Restore Volume.
e. Under Exchange I/O Configuration, specify the additional I/O or server requirements.
f. Under Tier-1 User Mailbox Configuration, select Yes for Desktop Search Engines Enabled (for
Online Mode Clients).

g. Under User Mailbox Configuration, selecting Yes for Desktop Search Engines Enabled will affect the
IOPS accordingly.
h. Using the online retention settings affects the capacity calculations.
2. Upload the completed copy of the Mailbox calculator by clicking the Browse button in the basic inputs
section.


Figure 34) Exchange 2010/2013 spreadsheet upload component.

3. After the application module workflow is completed, two workloads are generated within SPM: a primary
site and a secondary site. Modify the controller constraints and use the checkboxes next to the
generated workload to model the primary and secondary workloads independently.

8.4 SMB 1.0 CIFS Home Directories Module

Various factors must be considered for each home directory deployment. The key considerations for
architecting and sizing a CIFS home directory solution include the number of users, the number of
concurrent users, the space requirement for each user, and the network load. Additional factors, such as
virus scanning of the home directories, can also affect sizing recommendations.
SPM supports these various considerations in the CIFS home directory workload module. The initial
workload parameters include:

Workload Description. An appropriate description of the workload for easy reference.

Number of Users. The total number of users accessing the data.

User Type. Select from light, low, medium, or heavy; based on this, the other dependent values will be
set to default optimal values.

Home Directory Size (GB). The amount of space required for each user.

Figure 35) Home directory specifications.


The following list describes various user parameters. The number of users along with the size of the home
directory defines the capacity requirements; the number of users along with type of user and concurrency
defines the performance requirements. The user type is based on the data transfer rate requirement per
user.

Light user = ~3kB/s

Low user = ~5kB/s

Medium user = ~10kB/s

Heavy user = ~20kB/s

Additional parameters include:

Concurrency (%). The percentage of active users at any point in time.

Throughput type. Choose between MB/s and IOPS, depending on what the basis of the sizing should be. The default value is MB/s.

Random read latency (ms). If latency other than 20ms is required for this deployment, enter the value
here.

Sizing options. This option allows you to further customize the configuration based on the deployment
type designated as one of the following:

Fresh installation

Migration from non-NetApp storage

NetApp upgrade
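
To illustrate how the user type and concurrency inputs combine into a throughput requirement, the following minimal sketch applies the per-user rates listed above. It is an approximation for discussion only, not SPM's internal model, and the user count and concurrency value are hypothetical examples.

```python
# Minimal sketch: approximate CIFS home directory throughput requirement.
# Per-user rates come from the user-type list above; the 5,000 users and
# 30% concurrency are hypothetical example inputs.

USER_TYPE_RATE_KBPS = {"light": 3, "low": 5, "medium": 10, "heavy": 20}

def required_throughput_mbps(num_users: int, user_type: str, concurrency_pct: float) -> float:
    """Active users multiplied by the per-user transfer rate, expressed in MB/s."""
    active_users = num_users * (concurrency_pct / 100.0)
    return active_users * USER_TYPE_RATE_KBPS[user_type] / 1024.0

# Example: 5,000 medium users, 30% concurrent
print(f"{required_throughput_mbps(5000, 'medium', 30):.1f} MB/s")  # ~14.6 MB/s
```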

Figure 36) User profiles.

The CIFS home directory sizing workload module was implemented based on TR-3564: Sizing of CIFS-Based Home Directories. Although this document is no longer current, refer to it for additional reference material.

8.5 SMB 2.x/3.0 CIFS Home Directories Module

If clustered Data ONTAP 8.2.x or later is selected, sizing for SMB 2.x and 3.0 is supported. Various factors
must be considered for each home directory deployment. The key considerations for architecting and sizing
a CIFS home directory solution include the number of users, the number of concurrent users, the space
requirement for each user, and the network load. Additional factors, such as virus scanning of the home
directories, can also affect the sizing recommendations.
SPM supports these various considerations in the CIFS home directory workload module. The initial
workload parameters include:

Workload Description. An appropriate description of the workload for easy reference.

Random Read latency (ms). Input the desired latency as specified in milliseconds. This defaults to
20ms, which is considered a reasonable value.

User Type. Sets how much usage each user will add. Select from light (3kB/s), medium (10kB/s), or
heavy (20kB/s); based on this, other dependent values will be set.

Number of Users. The total number of users accessing the data.


Concurrency (%). The percentage of users actually using the system at any one time.

Home Directory Size (GB). The amount of space required for each user.

Figure 37) CIFS 2.x/3.0 home directory specifications.

8.6 Database Applications Module

The database workload module can be used to size most common database applications. SPM conveniently
provides a way to upload statistics files from existing database installations that can help fill in the various

workload parameters in this module. SPM supports Oracle statspack and automatic workload repository
(AWR) files as well as statspack4SQL. Select the Detailed Inputs checkbox and then click Browse to select
the relevant statistics file. This will populate most of the values in this dialog box.
Figure 38) Database statistics file import.

If a statistics file is not used, the workload characteristics can be entered manually. The Performance Inputs fields specify the basic characteristics of the database workload, such as throughput, operating-mix percentages, throughput growth, and maximum acceptable latency.


Figure 39) Performance inputs fields.

The App Specification Inputs fields allow you to select the database type for this sizing request, the protocol to use, and the project life, which aids in defining growth requirements.
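
As a rough illustration of how project life feeds into growth requirements, the following is a simple compounding sketch with hypothetical numbers, not SPM's internal calculation.

```python
# Minimal sketch: end-of-project-life throughput when annual growth is applied.
# The starting IOPS, growth rate, and project life are hypothetical examples.

starting_iops = 4000
annual_growth_pct = 15
project_life_years = 3

end_of_life_iops = starting_iops * (1 + annual_growth_pct / 100) ** project_life_years
print(f"Size for roughly {end_of_life_iops:.0f} IOPS at end of project life")  # ~6,084 IOPS
```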
Figure 40) App specification inputs fields.

The file system inputs section of the database sizer helps to define the capacity requirements for the
database application. The working set size parameter defines what percentage of the capacity is active at
any point in time.


Figure 41) File system inputs fields.

The performance of the storage system also depends on the block size used by the application. The default
is 8KB, but other sizes can be specified, if necessary.
Figure 42) DB-specific inputs field.

Because detailed instruction on the technical and deployment characteristics of the various databases
supported by SPM is beyond the scope of this document, NetApp recommends engaging with a CSE in your
area when sizing any database application.

8.7 Custom Application Module

For workloads that don't fit any of the built-in application modules or for workloads that need more granular control, use the custom application module.
Although the custom application module supports all protocols, only one protocol can be selected per
workload. If you must consider additional protocols, simply duplicate and modify the workload. Throughput
can be defined as IOPS or MB/s. Select the appropriate throughput type, because it affects how the
throughput value is interpreted.
The required capacity field defines the size of the entire dataset and is used for storage capacity planning if
the number of disks is not determined by performance requirements. Capacity growth can be considered
when selecting capacity reserve in the Forward Sizing workflow.


Figure 43) Custom Application module: basic inputs.

The remaining fields are set to default values but can be modified by selecting the Detailed Inputs checkbox. When specifying a random read latency, make sure that it is reasonable and also meets the requirements of your application. Enter the desired latency in milliseconds. This defaults to 20ms, which is considered a reasonable value. Workloads associated with aggregates that have only hard disk drives (HDDs) have a floor of 8ms latency. Workloads associated with aggregates that have SSDs have a floor of 3ms latency. Latencies set below these values might return an error.
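
The following minimal sketch simply encodes those floors as a validation check; the function and its names are illustrative, not part of SPM.

```python
# Minimal sketch: validating a requested random read latency against the
# floors described above (8ms for HDD-only aggregates, 3ms with SSDs).

LATENCY_FLOOR_MS = {"hdd_only": 8.0, "with_ssd": 3.0}

def check_random_read_latency(requested_ms: float, aggregate_type: str) -> None:
    floor = LATENCY_FLOOR_MS[aggregate_type]
    if requested_ms < floor:
        raise ValueError(f"{requested_ms}ms is below the {floor}ms floor for "
                         f"{aggregate_type} aggregates; SPM might return an error")

check_random_read_latency(20.0, "hdd_only")    # OK (the 20ms default)
# check_random_read_latency(5.0, "hdd_only")   # would raise: below the 8ms floor
```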
Figure 44) Custom Application module: random read latency.

The I/O percent section defines the read-write breakdown of the workload. The options at the top of the I/O percent parameters are presets that quickly adjust the percentages. If you know the breakdown of your workload in greater detail, the percentages are adjustable. This defaults to 25% for each of the four categories shown in Figure 45.
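
As a small sketch of this input (assuming the four categories are random and sequential reads and writes, as in Figure 45; the custom mix shown is hypothetical):

```python
# Minimal sketch: the four I/O percentages describe the whole workload,
# so they should total 100. Category names and the custom mix are assumptions.

default_mix = {"random_read": 25, "random_write": 25,
               "sequential_read": 25, "sequential_write": 25}

custom_mix = {"random_read": 60, "random_write": 20,
              "sequential_read": 15, "sequential_write": 5}
assert sum(custom_mix.values()) == 100, "I/O percentages should sum to 100"
```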
Figure 45) I/O percent workload parameters.

The working set size, which is generally applicable to workloads with random I/O, defines the size of the active portion of data. This value affects how much data can be cached, which affects how much of the workload is forced to disk. The smaller the working set, the greater the chance that data is kept in the cache. The working set size is often not known for many applications, but it's typically a small percentage of the total data. Refer to Table 1 for guidelines to determine working set sizes based on workload type.
Table 1) Application working set size recommendations.

Application                  Working Set as Percentage of File Set or Storage
Home directories             3%–10% of total file set
Databases (OLTP)             5%–20% of database size
Microsoft Exchange           100% of database size (Exchange 2003); ~80% of database size (Exchange 2007 and Exchange 2010)
Other e-mail applications    ~20% of storage space
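
As a quick, hypothetical application of these guidelines (not an SPM calculation):

```python
# Minimal sketch: estimating a working set from the Table 1 guidance.
# The 2TB OLTP database and the 10% choice from the 5%-20% range are examples.

database_size_gb = 2048
working_set_pct = 10

working_set_gb = database_size_gb * working_set_pct / 100
print(f"Working set estimate: {working_set_gb:.0f} GB")   # ~205 GB
```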

Figure 46) Custom Application module: working set size parameter.

Changing the size of the I/Os in the workload affects the number of IOPS that the system will achieve; larger I/Os generally mean fewer IOPS, although the total data throughput in MB/s might be greater. Set these values to be as close as possible to the customer workload characteristics.
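
A minimal sketch of that relationship follows; the IOPS figures are hypothetical examples.

```python
# Minimal sketch: throughput in MB/s is IOPS multiplied by the I/O size.

def throughput_mbps(iops: float, io_size_kb: float) -> float:
    return iops * io_size_kb / 1024.0

print(throughput_mbps(10000, 8))   # 8KB I/Os:  ~78 MB/s
print(throughput_mbps(7000, 32))   # 32KB I/Os: ~219 MB/s -- fewer IOPS, more MB/s
```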
Figure 47) Custom Application module: I/O size parameters.

If Flash Pool is used for this application workload, the random overwrite percent selection is made available.
Random overwrite percent determines the impact that the random overwrite caching in SSD will have for the
workload and system performance.
Figure 48) Random overwrite percent selection.

After the application-specific portions of the workload are complete, there are additional points to consider
that are common to all applications, such as layout hints, aggregate types, and SnapMirror functionality. For
more details, refer to section 9.

9 Additional Sizing Options


This section describes additional sizing options that exist for all of the workload modules.

9.1 Layout Options

A key strength of SPM is its support for extensive customization of storage layout and configuration options, which are detailed in the sizing output reports. This functionality is particularly powerful in clustered Data ONTAP opportunities, and we encourage exploration of the various layout configuration parameters.

The layout hint options within each of the application workload modules define how the workload should be
spread across the nodes in the system. Placing a workload on a shared aggregate means that more than
one workload can be placed on the same aggregate, if necessary.
Workloads can be split between the nodes on different aggregates, or they can be contained in a single
aggregate on a single node. Splitting the workload across multiple aggregates means that the workload can
be split across the nodes in the system. Selecting this box means that all the nodes in the system will get a
portion of the storage workload. This applies to both Data ONTAP 7-Mode and clustered Data ONTAP
systems. If the system configuration includes Flash Pool, the layout hints section designates whether the
workload should be placed on a Flash Pool aggregate.
Figure 49) Layout hints.

When defining workloads to be placed on a clustered Data ONTAP system, additional options are available.
The flexibility of clustered Data ONTAP allows client access to be split across nodes regardless of whether
the data is stored on one node or on many nodes.
Figure 50) Additional clustered Data ONTAP layout hints.

The following combinations, as shown in Figure 51, are possible depending on how the layout options are
set for the workload:

Remain on a single aggregate (do not split):
- With direct access only to a single node (A).
- Distributed client access across all nodes, sent over the cluster interconnect to the node responsible for the aggregate (B).

Allowed to exist on multiple aggregates (able to be split):
- With direct access only (C).
- With fully distributed access across the cluster interconnect (D).

All four layouts are possible for clustered Data ONTAP system sizings; only A and C are applicable to Data
ONTAP 7-Mode systems.
Figure 51) Workload layout combinations.

9.2 SnapMirror Options

For each workload in a Data ONTAP 7-Mode reverse sizing, there is an option to include SnapMirror parameters related to the workload.
Figure 52) SnapMirror features.

9.3 System-Generated Intermediate Workloads

Most of the workload modules ask for characteristics that a customer should be able to provide. SPM translates these user inputs into workload characteristics that it can use for sizing. These system-generated workloads are not directly editable, but are indirectly changed by modifying the workload inputs. The system-generated workload characteristics can be viewed after a workload is added to the current sizing. Clicking Edit under Options to the right of a workload brings up the user-entered workload parameters, as well as the system-generated values and intermediate values. For example, completing the Exchange workload entry produces two system-generated workloads, as shown in Figure 53 and Figure 54.


Figure 53) Example of SnapMirror system-generated workloads.

Figure 54) Example of SPM intermediate calculations.

10 Understanding SPM Sizing Output


This section explains how to interpret the output report produced by SPM. Most of the elements are the
same between Data ONTAP 7-Mode and clustered Data ONTAP. Specific elements are described in each
section. The examples shown are screenshots of the output produced by using the default workload options
in the custom application sizer module.

10.1 Suggested Configuration and Layout


Figure 55 is a screenshot from a sample report that shows six different configurations: with Flash Cache, with Flash Pool, and without any flash. Notice that the number of nodes varies based on the platform and the amount and type of flash used.


Figure 55) Suggested configuration and layout.

10.2 Layout Recommendations


Depending on the mode selected (7-Mode or clustered Data ONTAP), this section describes how workloads are spread across the controllers or nodes in a cluster. That distribution is ultimately dictated by the options selected in the layout hints section for each workload. For clustered Data ONTAP, the table in this section also describes whether the traffic is direct or indirect. The following three figures show examples of the type of data found in the report. For more information about how to affect the layout, refer to the Layout Options section.
Figure 56) Example of a controller layout (7-Mode).

Figure 57) Example of a node layout (clustered Data ONTAP, direct only).

Figure 58) Example of a node layout (clustered Data ONTAP distributed across cluster).

10.3 Controller/Node Perspective System Utilization


This section describes the estimated utilization at the individual controller level within the overall system. The
utilization reported by SPM is not the same as the utilization of any specific component within the system.
For example, 60% system utilization does not correspond to 60% CPU or 60% overall disk utilization. The
60% utilization means the sized throughput is 60% of the total system throughput given the current workload
characteristics. For each controller, there is a list of workloads applied to that system. Figure 59 is a
screenshot of a sample table from the report in which system utilization under workload details shows how
much a specific workload affects the node. System utilization under controller details is the overall system
utilization, which also includes some of the system overheads expected for this configuration. Figure 60 is
also a screenshot from the report that shows system utilization graphically.
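
The following minimal sketch shows one way to read such a table; the workload names, utilization figures, and overhead value are hypothetical, and the arithmetic is an interpretation aid rather than SPM's internal model.

```python
# Minimal sketch: per-workload utilizations plus overhead roughly add up to the
# controller-level figure, and the remainder is headroom at the same workload mix.

workload_utilization = {"exchange_primary": 22.0, "home_directories": 18.0}  # percent
system_overhead = 8.0                                                        # percent

controller_utilization = sum(workload_utilization.values()) + system_overhead
headroom = 100.0 - controller_utilization

print(f"Controller utilization: {controller_utilization:.0f}%")   # 48%
print(f"Headroom at this workload mix: about {headroom:.0f}%")    # 52%
```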


Data ONTAP 7-Mode


Figure 59) Controller perspective system usage (7-Mode).

Figure 60) Controller usage graph (7-Mode).

Clustered Data ONTAP


Figure 61 is a screenshot from the report that shows how each node is affected by the workload distributions.
Direct streams appear once in the table on the node that is receiving the stream. Indirect streams are
reflected on two nodes:

The node processing the client traffic

The node containing the aggregate and actually serving the data

Each workload shows how much system utilization it is contributing to the node, and the overall system
utilization, under node details, reflects the sum of all the workload utilizations along with additional overhead.
The node receiving client traffic and forwarding indirectly will have a lower utilization than the node receiving
the indirect traffic and accessing the storage.
If there are multiple workloads, the screenshots in Figure 61 and Figure 62 show all of the workloads applied to the cluster; the distribution depends on how the layout hints are adjusted.
Figure 61) Node-perspective system usage (clustered Data ONTAP, direct only).


Figure 62) Node-perspective system usage (clustered Data ONTAP, distributed across cluster).

10.4 Drive Calculations and Flash Pool


This section describes how SPM determines the number of drives required to support the workload. Figure
63 is a screenshot of a table in the report that shows whether the number of drives is determined based on
the performance requirements or on the capacity requirements. It also shows the number of drives required
for RAID and other requirements. Choosing different drive types, capacities, and Flash Cache can affect
these numbers.
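
A simplified sketch of the comparison this table reflects follows; the drive counts and RAID group size are hypothetical examples, and SPM's actual model considers far more.

```python
# Minimal sketch: the data drive count is driven by whichever of performance or
# capacity needs more spindles, and RAID parity drives are added on top.
import math

perf_drives = 36        # drives needed to meet the IOPS/latency target (example)
capacity_drives = 24    # drives needed to hold the required capacity (example)
raid_group_size = 16    # e.g., RAID-DP groups of 16 (14 data + 2 parity)

data_drives = max(perf_drives, capacity_drives)
raid_groups = math.ceil(data_drives / (raid_group_size - 2))
parity_drives = raid_groups * 2

print(f"data={data_drives}, parity={parity_drives}, total={data_drives + parity_drives}")
# data=36, parity=6, total=42
```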
Figure 63) Drive calculations.

If Flash Pool is used in the sizing, this table changes to show the disks required for both the SSD and HDD portions of the Flash Pool aggregate, as seen in Figure 64. The expected impact of Flash Pool read caching is shown in Figure 65.
Figure 64) Flash Pool drive calculations.

The shelf details section elaborates on the number of shelves, drive types, and number of drives needed for
the configuration.

10.5 Adjustments for Drives and Other Outputs


Figure 66 is a screenshot of a table from the report that shows the total number of drives for the system based on additional requirements, such as the number of spares requested or whether the system should map to full shelves.
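
As a small hypothetical illustration of these adjustments (the shelf size and spare count are example assumptions):

```python
# Minimal sketch: adding spares and rounding up to full shelves.
import math

required_drives = 42
spares = 2
drives_per_shelf = 24   # e.g., a 24-drive shelf

shelves = math.ceil((required_drives + spares) / drives_per_shelf)
total_drives = shelves * drives_per_shelf   # when mapping to full shelves
print(f"shelves={shelves}, total drives={total_drives}")   # shelves=2, total drives=48
```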
Figure 65) Flash Pool impacts.


Figure 66) Drive calculation adjustments.

Other outputs can also be produced if they are relevant to the sizing request. As in the example screenshot shown in Figure 67, these can include information about the size and number of Flash Cache modules or the disk-ops-to-host-ops ratio.
Figure 67) Additional information outputs.

10.6 Inputs Section


The inputs section of the report includes all of the controller and workload inputs used to generate the sizing
report.

10.7 Failed Configurations Section


Not all configurations are valid for sizing with SPM. If multiple controller configurations are specified for a forward sizing, it's possible that some of them are valid and others are invalid. The failed configurations section of the report provides a table of sizing configurations that failed to complete and the reason for the failure, as shown in the screenshot in Figure 68.
Figure 68) Failed sizing example.

10.8 Reverse Sizing Report


If the Reverse Sizing workflow was used, the report contains a table with the requested sizing information.
When sizing for system utilization and latency, SPM creates a table similar to the screenshot shown in Figure
69.
Figure 69) Reverse system utilization sizing report.

When sizing for maximum throughput, SPM creates a table similar to the screenshot shown in Figure 70. In
addition to maximum throughput, SPM also lists the reason for the bottleneck that is limiting the performance
of the configured system.


Figure 70) Reverse maximum throughput sizing report.

11 Additional Resources
Additional information can be found in the SPM help menu, as well as in the SPM FAQ. For more
information, contact the sizing team or visit the Sizers Community page.

Version History

Version       Date             Document Version History
Version 0.1   March 2012       Original version
Version 1.0   March 2012       Updated for SPM 1.0 general availability
Version 1.1   September 2012   Updated for SPM 1.1
Version 1.2   December 2012    Minor updates for SPM 1.2.1, specifically the Microsoft Exchange Calculator capability
Version 1.3   December 2013    Updated for SPM 1.4
Version 1.4   January 2014     Updated for SPM 1.4.1


Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product
and feature versions described in this document are supported for your specific environment. The NetApp
IMT defines the product components and versions that can be used to construct configurations that are
supported by NetApp. Specific results depend on each customer's installation in accordance with published
specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.


© 2014 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, Data ONTAP, Flash Cache, Flash Pool, RAID-DP, SnapManager, SnapMirror, and Snapshot are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Microsoft and SQL Server are registered trademarks of Microsoft Corporation. Oracle and Java are trademarks of Oracle Corporation. VMware is a registered trademark of VMware, Inc. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-4050-0114
