David Watts, Duncan Furniss, Scott Haddow, Jeneea Jervay, Eric Kern, Cynthia Knight
ibm.com/redbooks
Redpaper
International Technical Support Organization

IBM eX5 Portfolio Overview: IBM System x3850 X5, x3950 X5, and BladeCenter HX5

May 2010
REDP-4650-00
Note: Before using this information and the product it supports, read the information in Notices on page vii.
First Edition (May 2010)

This edition applies to the IBM eX5 products announced as of March 30, 2010:
- IBM System x3850 X5
- IBM System x3950 X5
- IBM BladeCenter HX5
Copyright International Business Machines Corporation 2010. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . . . vii
Trademarks . . . . . viii

Preface . . . . . ix
The team who wrote this paper . . . . . ix
Now you can become a published author, too! . . . . . xi
Comments welcome . . . . . xi
Stay connected to IBM Redbooks . . . . . xii

Chapter 1. Introduction . . . . . 1
1.1 eX5 systems . . . . . 2
1.2 Positioning . . . . . 3
  1.2.1 IBM System x3850 X5 and x3950 X5 . . . . . 3
  1.2.2 IBM BladeCenter HX5 . . . . . 4
1.3 Performance benchmark comparisons . . . . . 5
1.4 Energy efficiency . . . . . 6
1.5 Services offerings . . . . . 6

Chapter 2. IBM eX5 technology . . . . . 7
2.1 Intel Xeon 6500 and 7500 family processors . . . . . 8
  2.1.1 Hyper-Threading Technology . . . . . 8
  2.1.2 Turbo Boost Technology . . . . . 9
  2.1.3 QuickPath Interconnect (QPI) . . . . . 9
  2.1.4 Reliability, availability, and serviceability . . . . . 11
  2.1.5 Memory buffers . . . . . 12
  2.1.6 I/O hubs . . . . . 12
  2.1.7 Memory . . . . . 13
2.2 eX5 chipset . . . . . 13
2.3 MAX5 . . . . . 13
2.4 Scalability . . . . . 15
2.5 Partitioning . . . . . 15
2.6 IBM eXFlash . . . . . 16

Chapter 3. IBM System x3850 X5 and x3950 X5 . . . . . 17
3.1 Product features . . . . . 18
  3.1.1 IBM System x3850 X5 product features . . . . . 18
  3.1.2 IBM System x3950 X5 product features . . . . . 21
  3.1.3 Comparing the x3850 M2 to the x3850 X5 . . . . . 21
3.2 Target workloads . . . . . 22
3.3 Models . . . . . 23
3.4 System architecture . . . . . 24
  3.4.1 QPI Wrap Card . . . . . 24
3.5 Scalability . . . . . 26
3.6 Processor rules and options . . . . . 27
3.7 Memory . . . . . 29
  3.7.1 Memory speed . . . . . 31
  3.7.2 Rules for configuring memory for performance . . . . . 32
  3.7.3 Memory mirroring . . . . . 34
  3.7.4 Memory sparing . . . . . 35
  3.7.5 Memory RAS features . . . . . 36
3.8 Storage . . . . . 37
  3.8.1 Internal disks . . . . . 37
  3.8.2 SAS disk support . . . . . 38
  3.8.3 IBM eXFlash and SSD disk support . . . . . 39
  3.8.4 IBM eXFlash price-performance . . . . . 41
  3.8.5 RAID controllers . . . . . 42
  3.8.6 External Storage . . . . . 44
3.9 PCIe slots . . . . . 45
3.10 I/O cards . . . . . 46
  3.10.1 Standard Emulex 10Gb Ethernet Adapter . . . . . 46
  3.10.2 Optional adapters . . . . . 49
3.11 Standard onboard features . . . . . 50
  3.11.1 Onboard Ethernet . . . . . 50
  3.11.2 Power supplies and fans . . . . . 51
  3.11.3 Environmental data . . . . . 52
  3.11.4 Integrated Management Module (IMM) . . . . . 52
  3.11.5 Integrated Trusted Platform Module (TPM) . . . . . 53
  3.11.6 Light path diagnostics . . . . . 53
3.12 Integrated virtualization . . . . . 54
  3.12.1 VMware ESXi . . . . . 55
  3.12.2 Red Hat RHEV-H (KVM) . . . . . 55
  3.12.3 Windows 2008 R2 Hyper-V . . . . . 55
3.13 Operating system support . . . . . 56
3.14 Rack considerations . . . . . 56

Chapter 4. IBM BladeCenter HX5 . . . . . 57
4.1 Introduction . . . . . 58
4.2 Comparison to the HS22 and HS22V . . . . . 60
4.3 Target workloads . . . . . 61
4.4 Chassis support . . . . . 61
4.5 Models . . . . . 62
4.6 System architecture . . . . . 62
4.7 Speed Burst Card . . . . . 63
4.8 Scalability . . . . . 65
4.9 Intel processors . . . . . 67
4.10 Memory . . . . . 68
  4.10.1 Memory features . . . . . 69
  4.10.2 Memory performance . . . . . 70
4.11 Storage . . . . . 72
  4.11.1 Solid state drives (SSDs) . . . . . 73
  4.11.2 Connecting to external SAS storage devices . . . . . 73
  4.11.3 BladeCenter Storage and I/O Expansion Unit (SIO) . . . . . 74
4.12 I/O expansion cards . . . . . 74
  4.12.1 CIOv . . . . . 74
  4.12.2 CFFh . . . . . 75
4.13 Standard on-board features . . . . . 77
  4.13.1 UEFI . . . . . 77
  4.13.2 Onboard network adapters . . . . . 77
  4.13.3 Integrated Management Module (IMM) . . . . . 78
  4.13.4 Video controller . . . . . 78
  4.13.5 Trusted Platform Module (TPM) . . . . . 78
4.14 Integrated virtualization . . . . . 79
4.15 Partitioning capabilities . . . . . 79
4.16 Operating system support . . . . . 84

Chapter 5. Systems management . . . . . 85
5.1 Supported versions . . . . . 86
5.2 Embedded firmware . . . . . 86
  5.2.1 Unified Extensible Firmware Interface (UEFI) . . . . . 86
  5.2.2 Integrated Management Module (IMM) . . . . . 89
5.3 UpdateXpress . . . . . 90
5.4 Deployment tools . . . . . 90
  5.4.1 Bootable Media Creator . . . . . 91
  5.4.2 ServerGuide . . . . . 91
  5.4.3 ServerGuide Scripting Toolkit . . . . . 92
  5.4.4 IBM Start Now Advisor . . . . . 93
5.5 Configuration utilities . . . . . 93
  5.5.1 Advanced Settings Utility . . . . . 93
  5.5.2 Storage Configuration Manager . . . . . 94
5.6 IBM Dynamic Systems Analysis . . . . . 94
5.7 IBM Systems Director . . . . . 95
  5.7.1 BladeCenter Open Fabric Manager . . . . . 96
  5.7.2 Active Energy Manager . . . . . 96
  5.7.3 Tivoli Provisioning Manager for Operating System Deployment . . . . . 97

Abbreviations and acronyms . . . . . 99

Related publications . . . . . 101
IBM Redbooks . . . . . 101
Other publications . . . . . 101
Online resources . . . . . 102
How to get Redbooks . . . . . 102
Help from IBM . . . . . 102
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
BladeCenter Calibrated Vectored Cooling Dynamic Infrastructure eServer IBM Systems Director Active Energy Manager IBM iDataPlex Netfinity PowerPC POWER Redbooks Redbooks (logo) Redpaper ServerProven Solid System Storage System x System z Tivoli X-Architecture xSeries
The following terms are trademarks of other companies: Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel Xeon, Intel, Itanium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
High-end workloads drive ever-increasing and ever-changing constraints. In addition to requiring greater memory capacity, these workloads challenge you to do more with less and to find new ways to simplify deployment and ownership. And although higher system availability and comprehensive systems management have always been critical, they have become even more important in recent years.

Difficult challenges such as these create new opportunities for innovation. The IBM eX5 portfolio delivers this innovation. This portfolio of high-end computing introduces the fifth generation of IBM X-Architecture technology. It is the culmination of more than a decade of x86 innovation and firsts that have changed the expectations of the industry. With this latest generation, eX5 is again leading the way as the shift toward virtualization, platform management, and energy efficiency accelerates.

This IBM Redpaper publication introduces the new IBM eX5 portfolio and describes the technical details behind each server.
Units (DAU) program. Previously, she was a member of the Americas System x and BladeCenter Brand team and the Sales Solution Center, which focused exclusively on IBM Business Partners. She started her career at IBM in 1995. Eric Kern is a Senior Managing Consultant for IBM STG Lab Services. He currently provides technical consulting services for System x, BladeCenter, System Storage, and Systems Software. Since 2007, he has helped customers design and implement x86 server and systems management software solutions. Prior to joining Lab Services, he developed software for the BladeCenter's Advanced Management Module and for the Remote Supervisor Adapter II. He is a VMware Certified Professional and a Red Hat Certified Technician. Cynthia Knight is an IBM Hardware Design Engineer in Raleigh and has worked for IBM for 11 years. She is currently a member of the IBM eX5 design team. Previous designs include the Ethernet add-in cards for the IBM Network Processor Reference Platform and the Chassis Management Module for BladeCenter T. She was also the lead designer for the IBM BladeCenter PCI Expansion Units.
The team (left to right): David, Scott, Cynthia, Duncan, JJ, and Eric
Thanks to the following people for their contributions to this project:

From IBM Marketing:
Mark Chapman
Michelle Gottschalk
Kevin Powell
David Tareen

From IBM Development:
Ralph Begun
Jon Bitner
Charles Clifton
Candice Coletrane-Pagan
David Drez
Randy Kolvic
Greg Sellman
Matthew Trzyna
From other IBMers throughout the world: Randall Davis, IBM Australia Shannon Meier, IBM U.S. Keith Ott, IBM U.S. Andrew Spurgeon, IBM Australia/New Zealand
Comments welcome
Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks publications in one of the following ways:

- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. Introduction
The IBM eX5 product portfolio represents the fifth generation of servers built upon Enterprise X-Architecture. Enterprise X-Architecture is the culmination of bringing IBM technology, often derived from our experience in high-end enterprise servers, to the x86 server market. Now, with eX5, IBM scalable systems technology for Intel processor-based servers has also been delivered to blades and mid-sized x86 server systems. These servers can be expanded on demand and configured by using a building-block approach that optimizes system design for your workload requirements.

As a part of the IBM Smarter Planet initiative, our Dynamic Infrastructure charter guides us to provide servers that improve service, reduce cost, and manage risk. These new servers scale to more CPU cores, memory, and I/O than previous systems, enabling them to handle greater workloads than the systems they supersede. Power efficiency and machine density are optimized, making them very affordable to own and operate.

The ability to modify the memory capacity independently of the processors, and the new high-speed local storage options, means that these systems can be highly utilized, yielding the best return from your application investment. These systems allow you to grow in processing, I/O, and memory dimensions, so you can provision what you need now and expand the system to meet future requirements. System redundancy and availability technologies are more advanced than previously available in x86 systems.

This chapter contains the following topics:
- 1.1, "eX5 systems" on page 2
- 1.2, "Positioning" on page 3
- 1.3, "Performance benchmark comparisons" on page 5
- 1.4, "Energy efficiency" on page 6
- 1.5, "Services offerings" on page 6
Figure 1-1 eX5 family (top to bottom): BladeCenter HX5 (2-node), System x3690 X5, and System x3850 X5 / System x3950 X5
The IBM System x3850 X5 and x3950 X5 are 4U, highly rack-optimized servers. The x3850 X5 and the workload-optimized x3950 X5 are the new flagship servers of the IBM x86 server family. These systems are designed for maximum utilization, reliability, and performance for compute-intensive and memory-intensive workloads.

The IBM System x3690 X5 is a new 2U rack-optimized server. This machine brings new features and performance to the mid-tier.

The IBM BladeCenter HX5 is a single-wide (30 mm) blade server that follows the same design as all previous IBM blades. The HX5 brings unprecedented levels of capacity to high-density environments and is expandable to a double-wide (60 mm) two-node server.

Note: In this first edition of this paper, we provide technical details of only the x3850 X5 and BladeCenter HX5 servers that were announced on 30 March 2010. As further eX5 family members and capabilities are formally announced, we will update this paper.

When compared to other machines in the System x portfolio, these systems represent the upper end of the spectrum, are suited for the most demanding x86 tasks, and can handle jobs that previously might have been run on other platforms. To assist with selecting the ideal system for a given workload, we have designed workload-specific models for virtualization and database needs.
1.2 Positioning
Table 1-1 gives an overview of the features of the systems that are described in this paper.
Table 1-1 Maximums for the eX5 systems

Maximum configurations                                      x3850 X5 /   x3690 X5   HX5
                                                            x3950 X5
Processors (with or without MAX5)
  1-node                                                    4            2          2
  2-node                                                    8            4          4
Memory DIMMs
  1-node                                                    64           32         16
  1-node with MAX5                                          96           64         40
  2-node                                                    128          64         32
  2-node with MAX5                                          192          128        80
Maximum memory
  1-node                                                    1024 GB      512 GB     128 GB
  1-node with MAX5                                          1536 GB      1024 GB    320 GB
  2-node                                                    2048 GB      1024 GB    256 GB
  2-node with MAX5                                          3072 GB      2048 GB    640 GB
Disk drives, non-SSD (with or without MAX5)
  1-node                                                    8            16         0
  2-node                                                    16           32         0
Solid state disks (with or without MAX5)
  1-node                                                    16           24         2
  2-node                                                    32           48         4
Standard 1 Gb Ethernet interfaces (with or without MAX5)
  1-node                                                    2            2          2
  2-node                                                    4            4          4
Standard 10 Gb Ethernet interfaces (with or without MAX5)a
  1-node                                                    2            2          0
  2-node                                                    4            4          0
a. Depends on the model. See Table 3-2 on page 23 for the IBM System x3850 X5.
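The memory maximums in Table 1-1 follow directly from the DIMM counts. Dividing each maximum by its DIMM count implies 16 GB DIMMs for the x3850 X5/x3950 X5 and x3690 X5 and 8 GB DIMMs for the HX5; these DIMM sizes are inferred from the table's arithmetic, not stated in it. A minimal Python sketch of that check, under those assumptions:

```python
# Largest DIMM size per system (GB); inferred by dividing Table 1-1's
# maximum memory by its DIMM count -- an assumption, not stated in the table.
DIMM_GB = {"x3850 X5": 16, "x3690 X5": 16, "HX5": 8}

# (system, configuration) -> DIMM sockets, copied from Table 1-1
DIMM_SLOTS = {
    ("x3850 X5", "1-node"): 64,
    ("x3850 X5", "2-node with MAX5"): 192,
    ("x3690 X5", "1-node"): 32,
    ("HX5", "1-node"): 16,
    ("HX5", "2-node with MAX5"): 80,
}

def max_memory_gb(system, config):
    """Maximum memory = DIMM sockets x largest supported DIMM size."""
    return DIMM_SLOTS[(system, config)] * DIMM_GB[system]

print(max_memory_gb("x3850 X5", "2-node with MAX5"))  # 3072, matching Table 1-1
```

The same multiplication reproduces every maximum-memory cell in the table, which is a useful sanity check when planning an upgrade.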
Figure 1-2 IBM System x3850 X5 with the MAX5 memory expansion unit attached
Two x3850 X5 servers can be connected together for a single system image with eight processors and 2 TB of RAM, and with the addition of MAX5 memory expansion units, up to 3 TB of RAM. Table 1-2 compares the number of processor sockets, cores, and memory capacity of the eX4 and eX5 systems.
Table 1-2 Comparing the x3850 M2 and x3950 M2 with the eX5 servers

                                        Processor    Processor    Maximum
                                        sockets      cores        memory
Previous generation servers (eX4)
  x3850 M2                              4            24           256 GB
  x3950 M2                              8            48           512 GB
Next generation servers (eX5)
  x3850 X5                              4            32           1024 GB
  x3850 X5, two nodes                   8            64           2048 GB
  x3850 X5 with MAX5                    4            32           1536 GB
  x3950 X5, two nodes with MAX5         8            64           3072 GB
Table 1-2 (continued)

                                        Processor    Processor    Maximum
                                        sockets      cores        memory
Next generation server (eX5)
  HX5 (30 mm)                           2            16           128 GB
  HX5, two-node (60 mm)                 4            32           256 GB
Table 1-4 The x3850 X5 gains over the x3850 M2

Benchmark        x3850 X5 performance gain over the x3850 M2
SPECint_rate     2x
SPECfp_rate      3.5x
TPC-C            3x
VMmark           3.6x
Chapter 2. IBM eX5 technology
[Table: Intel Xeon 6500 and 7500 family processors — columns: Intel development name; maximum processors per server; CPU cores per processor; last level cache (MB); memory DIMMs per processor (maximum). Table body not captured in this extract.]
a. The Xeon 6500 series processors cannot be connected through QPI to make a 4-processor system. Only IBM systems with MAX5 can be connected through EXA scaling to make 4-processor systems.
Hyper-Threading Technology is designed to improve server performance by exploiting the multi-threading capability of operating systems and server applications in such a way as to increase the use of the on-chip execution resources available on these processors. Application types that make the best use of Hyper-Threading are virtualization, databases, e-mail, Java, and Web servers. For more information about Hyper-Threading Technology, go to the following Web address: http://www.intel.com/technology/platform-technology/hyper-threading/
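From the software side, Hyper-Threading shows up as twice as many logical processors as physical cores. The following Python sketch (illustrative only; the sample input is fabricated) counts logical CPUs and unique physical cores from Linux /proc/cpuinfo-style text, which is one common way to see whether Hyper-Threading is active:

```python
def cpu_topology(cpuinfo_text):
    """Return (logical CPUs, physical cores) from /proc/cpuinfo-style text."""
    logical, cores, package = 0, set(), None
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":        # one block per logical CPU
            logical += 1
        elif key == "physical id":    # socket number
            package = value
        elif key == "core id":        # core within that socket
            cores.add((package, value))
    return logical, len(cores)

# Fabricated excerpt: one socket, two cores, Hyper-Threading enabled,
# so four logical CPUs share two physical cores.
sample = "\n".join(
    f"processor\t: {cpu}\nphysical id\t: 0\ncore id\t: {cpu % 2}"
    for cpu in range(4)
)
print(cpu_topology(sample))  # (4, 2)
```

When the logical count equals the core count, Hyper-Threading is disabled or unsupported; a 2:1 ratio indicates two hardware threads per core.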
Figure 2-1 Shared front-side bus, in the IBM x360 and x440; with snoop filter in the x365 and x445
The front-side bus carries all reads and writes to the I/O devices, and all reads and writes to memory. Also, before a processor can use the contents of its own cache, it must know whether another processor has the same data stored in its cache. This process is described as snooping the other processors' caches, and it puts a lot of traffic on the front-side bus. To reduce the amount of cache snooping on the front-side bus, the core chipset can include a snoop filter, also referred to as a cache coherency filter. This filter is a table that keeps track of the starting memory locations of the 64-byte chunks of data that are read into cache (called cache lines), or the actual cache lines themselves, along with one of four states: modified, exclusive, shared, or invalid (MESI). The next step in the evolution was to divide the load between a pair of front-side busses, as shown in Figure 2-2. Servers that implemented this design included the IBM System x3850 and x3950 (the M1 version).
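The snoop-filter table described above can be sketched as a small directory keyed by cache-line address. This is a simplified illustrative model, not the actual chipset logic; the class and method names are invented for the example, and real filters also track per-processor presence.

```python
# Minimal sketch of a snoop filter: a directory that records, for each
# 64-byte cache line read into any processor's cache, its MESI state.
# Illustrative only -- hypothetical names, not the chipset implementation.

LINE_SIZE = 64  # bytes per cache line

class SnoopFilter:
    def __init__(self):
        self.lines = {}  # line-aligned address -> MESI state

    def record(self, addr, state):
        assert state in ("M", "E", "S", "I")
        self.lines[addr - addr % LINE_SIZE] = state

    def needs_snoop(self, addr):
        # A front-side-bus snoop is only necessary if some cache may
        # hold the line in a non-invalid state.
        state = self.lines.get(addr - addr % LINE_SIZE, "I")
        return state != "I"

sf = SnoopFilter()
sf.record(0x1000, "S")          # line 0x1000 is shared in some cache
print(sf.needs_snoop(0x1020))   # True: same 64-byte line, snoop required
print(sf.needs_snoop(0x2000))   # False: never cached, snoop suppressed
```

The benefit is the second case: reads to lines that no cache holds generate no snoop traffic at all, which is what reduces congestion on the shared bus.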
Figure 2-2 Dual independent busses, as in the x366 and x460 (later called the x3850 and x3950)
This approach had the effect of reducing congestion on each front-side bus, when used with a snoop filter. It was followed by independent processor busses, shown in Figure 2-3. Servers implementing this design included IBM System x3850 M2 and x3950 M2.
Figure 2-3 Independent processor busses (each processor with its own bus to the core chipset, which connects to memory and I/O)
Instead of a parallel bus connecting the processors to a core chipset, which functions as both a memory and I/O controller, the Xeon 6500 and 7500 family processors implemented in IBM eX5 servers include a separate memory controller in each processor. Processor-to-processor communication is carried over shared-clock, or coherent, QPI links, and I/O is transported over non-coherent QPI links through I/O hubs. This arrangement is shown in Figure 2-4.
Figure 2-4 QuickPath Interconnect, as in the eX5 portfolio
In previous designs, the entire range of memory was accessible through the core chipset by each processor: a shared memory architecture. The QPI design instead creates a non-uniform memory access (NUMA) system, in which some of the memory is directly connected to the processor where a given thread is running, and the rest must be accessed over a QPI link through another processor. Similarly, I/O can be local to a processor, or remote through another processor. For QPI use, Intel has modified the MESI cache coherence protocol to include a forwarding state (MESIF), so that when a processor asks to copy a shared cache line, only one other processor responds. For more information about QPI, go to the following Web address:
http://www.intel.com/technology/quickpath/
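The effect of the forwarding state can be sketched as a selection rule: among the caches holding a line, at most one is designated to supply the data, so a read for a widely shared line triggers exactly one response instead of one per sharer. The data structure and function below are invented for illustration, not Intel's implementation.

```python
# Sketch of MESIF response selection: for a given cache line, only the
# cache in Modified, Exclusive, or Forward state answers a read request.
# Hypothetical model for illustration only.

def forwarder(cache_states):
    """cache_states maps processor id -> MESIF state for one cache line.

    Returns the single processor designated to forward the data, or
    None if no cache can supply it (the request falls through to memory).
    """
    owners = [p for p, s in cache_states.items() if s in ("M", "E", "F")]
    assert len(owners) <= 1, "at most one cache may be in M, E, or F"
    return owners[0] if owners else None

# Line shared by CPUs 0-2; CPU 2 holds it in Forward state, so only
# CPU 2 responds even though CPUs 0 and 1 also have valid copies.
print(forwarder({0: "S", 1: "S", 2: "F", 3: "I"}))  # 2
print(forwarder({0: "I", 1: "I"}))                  # None
```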
QPI clock failover: In the event of a clock failure on a coherent QPI link, the processor on the other end of the link can take over providing the clock. QPI clock failover is not required on the QPI links from processors to I/O hubs, because these links are asynchronous.
SMI packet retry: If a memory packet has errors or cannot be read, the processor can request that the packet be resent from the memory buffer.
Scalable memory interconnect (SMI) retry: If there is an error on an SMI link, or a memory transfer fails, the command can be retried.
SMI lane failover: When an SMI link exceeds the preset error threshold, it is disabled, and memory transfers are routed through the other SMI link to the memory buffer.
New to the Xeon 7500 and 6500 processors is the machine check architecture (MCA), a RAS feature that has previously only been available for other processor architectures, such as Intel Itanium, IBM POWER, and other reduced instruction set computing (RISC) processors, and mainframes. Implementation of the MCA requires hardware support, firmware support such as is found in the Unified Extensible Firmware Interface (UEFI), and operating system support. The MCA enables the handling of system errors that otherwise require the operating system to be halted. For example, if a memory location in a DIMM is no longer functioning properly and cannot be recovered by the DIMM or memory controller logic, the MCA logs the failure and prevents that memory location from being used. If the memory location was in use by a thread at the time, the process that owns the thread is terminated. Microsoft, Novell, Red Hat, VMware, and other operating system vendors have announced support for the Intel MCA on the Xeon processors.
2.1.7 Memory
The memory used in the eX5 systems is DDR3 SDRAM registered DIMMs. All memory runs at 1066 MHz or less, depending on the processor. Unlike the Xeon 5500 and 5600 family processors, each with their single scalable memory interconnect (SMI), the 6500 and 7500 processors have two SMIs, and memory therefore must be installed in matched pairs. For better performance, or for systems connected together, memory must be installed in sets of four. For more information regarding each of the eX5 systems, see the following sections:
3.7, Memory on page 29
4.10, Memory on page 68
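The pairing rule above reduces to a small divisibility check: DIMMs must be added two at a time (one per SMI), and sets of four are preferred. The helper below is a hypothetical sketch of that rule, not an IBM configuration tool.

```python
# Sketch of the eX5 DIMM population rule: memory must be installed in
# matched pairs, and ideally in sets of four. Illustrative helper only;
# the function name and return strings are invented for this example.

def check_dimm_population(dimms_to_add):
    """dimms_to_add: number of identical DIMMs being installed at once."""
    if dimms_to_add % 2 != 0:
        return "invalid: DIMMs must be installed in matched pairs"
    if dimms_to_add % 4 != 0:
        return "valid, but sets of four give better performance"
    return "valid"

print(check_dimm_population(4))  # valid
print(check_dimm_population(2))  # valid, but sets of four give better performance
print(check_dimm_population(3))  # invalid: DIMMs must be installed in matched pairs
```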
2.3 MAX5
Memory Access for eX5, shortened to MAX5, is the name given to the memory and scalability subsystems that can be added to eX5 servers. MAX5 for the rack-mounted systems (x3850 X5, x3950 X5, and x3690 X5) is in the form of a 1U device that attaches below the server. For the BladeCenter HX5, MAX5 is implemented in the form of an expansion blade that adds 30 mm to the width of the blade (the width of one extra blade bay). Figure 2-5 on page 14 shows the x3850 X5 with the MAX5 attached.
Figure 2-5 IBM System x3850 X5 with MAX5 (the MAX5 is the 1U unit below the main system)
2.4 Scalability
The eX5 servers currently support two-node scaling using QPI links as shown in Figure 2-7.
Figure 2-7 Types of scaling with eX5 systems
In announcements later in 2010, we plan to offer further scaling options using IBM EXA technology to extend the memory available to the system using MAX5 memory expansion units.
2.5 Partitioning
Scaled systems can be made to operate as two independent systems, or as a single system, without removing the cables (or in the case of the HX5, the side-scale connector). This capability is called partitioning, and is also referred to as IBM FlexNode technology. Partitioning is accomplished through the integrated management module (IMM) in rack-mount servers, or through the Advanced Management Module (AMM) in the IBM BladeCenter chassis for the HX5. Figure 2-8 depicts an HX5 system scaled to two nodes, and partitioned into two independent servers.
The IMM and AMM can be accessed remotely, so partitioning can be done without touching the systems. Because the IMM and AMM can be accessed through command-line interfaces as well as through a Web browser, partitioning can be scripted or controlled through a systems management utility such as IBM Systems Director. Partitioning allows you to qualify two system types with little additional work, and gives you more flexibility in system types for better workload optimization.
For more information regarding each of the eX5 systems, see the following sections:
3.8, Storage on page 37
4.11, Storage on page 72
Chapter 3.
Physical specifications are as follows:
Width: 440 mm (17.3 inches)
Depth: 712 mm (28.0 inches)
Height: 173 mm (6.8 inches) or 4 rack units (4U)
Minimum configuration: 35.4 kg (78 lb)
Maximum configuration: 49.9 kg (110 lb)
The x3850 X5 is shown in Figure 3-1.
Figure 3-1 Front view of the x3850 X5 showing eight 2.5-inch SAS drives
In Figure 3-1, two SAS backplanes have been installed (at the right of the server), each one supporting four 2.5-inch SAS disks, for eight disks in total. Notice the orange-colored bar on each disk drive. This bar denotes that the disks are hot-swappable. The color coding used throughout the system is orange for hot-swap and blue for non-hot-swap. Changing a hot-swappable component requires no downtime; changing a non-hot-swappable component requires that the server be powered off before removing that component. Figure 3-2 on page 20 shows the major components inside the server and on the front panel of the server.
Figure 3-2 highlights the following components: two 1975 W rear-access hot-swap, redundant power supplies; eight memory cards for 64 DIMMs total (eight 1066 MHz DDR3 DIMMs per card); a dual-port 10Gb Ethernet adapter in PCIe slot 7; six available PCIe 2.0 slots; the DVD drive; two front USB ports; light path diagnostics; and eight 2.5-inch SAS drives or two eXFlash SSD units.
The connectors at the back of the server are shown in Figure 3-3: six available PCIe slots, the 10 Gigabit Ethernet ports (standard on most models), the power supplies (redundant at 220 V power), a serial port, and a video port.
(Figure: x3850 X5 block diagram showing the memory, PCIe subsystem, SAS controller, and Ethernet controller.)
x3850 X5:
Drive bays: eight 2.5-inch internal drive bays or 16 1.8-inch solid-state drive bays; support for SATA, SSD
Chipset: ICH10; USB: six external ports, two internal; no SuperIO; no PS/2 keyboard/mouse connectors; no diskette drive controller
Fans: 2x 120 mm, 2x 60 mm, plus 2x 120 mm in the power supplies
Power: 1975 W hot-swap, full redundancy at high voltage, 875 W at low voltage; rear access; two power supplies standard / two maximum (most models); configuration restrictions at 110 V

Previous generation (x3850 M2):
Chipset: ICH7; USB: five external ports, one internal; no SuperIO; no PS/2 keyboard/mouse connectors; no diskette drive controller
Fans: 4x 120 mm, 2x 92 mm, plus 2x 80 mm in the power supplies
Power: 1440 W hot-swap, full redundancy at high voltage, 720 W at low voltage; rear access; two power supplies standard / two maximum
3.3 Models
This section lists the currently available models. The x3850 X5 and x3950 X5 (both machine type 7145) have a three-year warranty. For information about the recent models, consult tools such as the Configurations and Options Guide (COG) or Standalone Solutions Configuration Tool (SSCT). These are available from the Configuration tools Web site: http://www.ibm.com/systems/x/hardware/configtools.html Table 3-2 lists the models of the x3850 X5 that have been announced. (In the table, Mem is memory, Std is standard, and Max is maximum.)
Table 3-2 Models of the x3850 X5: four-socket scalable server

  Intel Xeon processors (two standard; maximum of four)   Standard memory   10Gb Ethernet standard(b)   Drive bays Std / Max
  E7520 4C 1.86 GHz, 18 MB L3, 95 W                       2x 2 GB           No                          None
  E7520 4C 1.86 GHz, 18 MB L3, 95 W                       4x 4 GB           Yes                         4x 2.5 / 8
  E7530 6C 1.86 GHz, 12 MB L3, 105 W                      4x 4 GB           Yes                         4x 2.5 / 8
  E7540 6C 2.0 GHz, 18 MB L3, 105 W                       4x 4 GB           Yes                         4x 2.5 / 8
  X7550 8C 2.0 GHz, 18 MB L3, 130 W                       4x 4 GB           Yes                         4x 2.5 / 8
  X7560 8C 2.27 GHz, 24 MB L3, 130 W                      4x 4 GB           Yes                         4x 2.5 / 8
a. The x character in the seventh position of the machine model denotes the region-specific character. For example, U indicates U.S., and G indicates EMEA. b. Emulex 10Gb Ethernet Adapter is installed in PCIe slot 7.
Table 3-3 lists the workload-optimized models of the x3950 X5 that have been announced. (In the table, Std is standard, and Max is maximum.)
Table 3-3 Models of the x3950 X5: workload-optimized models

  Model(a)   Intel Xeon processor                  Memory speed   Standard memory   Memory cards   10Gb Ethernet standard(b)   Drive bays Std / Max
  Database workload-optimized model:
  7145-5Dx   X7560 8C 2.27 GHz, 24 MB L3, 130 W    1066 MHz       8x 4 GB           4 cards        No                          8x 1.8 / 16(c)
a. The x character in the seventh position of the machine model denotes the region-specific character. For example, U indicates U.S., and G indicates EMEA.
b. Emulex 10Gb Ethernet Adapter is installed in PCIe slot 7.
c. Standard with one 8-bay eXFlash SSD backplane; one additional eXFlash backplane is optional.
23
Figure 3-4 x3850 X5 system block diagram: four Nehalem-EX processors linked by QPI, eight memory cards (each with two memory buffers, MB1 and MB2, on SMI links), two I/O hubs serving the seven PCIe slots, onboard dual 1Gb and dual 10Gb Ethernet, and an ESI link to the DVD drive, iBMC, TPM, and USB ports (four rear, two front, two internal)
The QPI Wrap Cards are installed in the QPI bays at the back of the server as shown in Figure 3-6. QPI Wrap Cards are not needed in a 2-node configuration. When the QPI Wrap Cards are installed, no external QPI ports are available.
Figure 3-7 on page 26 shows a block diagram of how the QPI Wrap Cards are used to complete the QPI mesh.
Figure 3-7 Full mesh between four processors is completed with two QPI Wrap Cards
3.5 Scalability
In this section, we describe how the x3850 X5 can be expanded to increase the number of processors. Although the x3850 X5 architecture allows for a number of scalable configurations, including the use of a MAX5 memory drawer, the server currently supports only two configurations:
A single x3850 X5 server with four processor sockets. This configuration is sometimes referred to as a single-node server.
Two x3850 X5 servers connected together to form a single-image, eight-socket server. This configuration is sometimes referred to as a two-node server.
The two-node configuration uses native Intel QPI scaling. The two servers are physically connected together with a set of external QPI cables. The cables are connected to the server through the QPI bays, shown in Figure 3-6 on page 25. Figure 3-8 shows the cable routing.
Figure 3-8 QPI cable routing between two x3850 X5 servers (QPI 1-2 and QPI 3-4 cables)
No QPI ports are visible on the rear of the server. The QPI scalability cables have long rigid connectors allowing them to be inserted into the QPI bay until they connect with the QPI ports
located a few inches inside on the planar. No other option is required to complete the QPI scaling of two x3850 X5 servers into a two-node complex.
Note: The Intel E7520 and E7530 processors cannot be used to scale to an 8-way 2-node complex when using native QPI scalability cables. They support a maximum of four processors. At the time of writing, the following models use those processors:
7145-ARx
7145-1Rx
7145-2Rx
These models are planned to support IBM EXA scaling with the availability of the MAX5 memory expansion drawer.
Figure 3-9 shows the QPI links that are used to connect two x3850 X5 servers together. Both nodes must have four processors, and all processors must be identical.
Figure 3-9 QPI links used to connect two x3850 X5 servers
Scaling is managed through software running on the Integrated Management Module (IMM) that is embedded in the x3850 X5 servers. The IMM provides a Web-based user interface that shows the current scaling configuration, and provides the ability to enable or disable scaling using FlexNode technology. FlexNode scaling allows two nodes that are physically scaled and wired together to be switched between operating as one scaled system and operating as two independent computers. The requirement for FlexNode scaling is that the two nodes are scaled using the EXA ports, which means only the two-node x3850 X5 is capable of FlexNode scaling. For a two-node x3850 X5 scaled through the QPI ports, when those cables are connected, the two nodes act as one system until the cables are physically disconnected.
Table 3-5 shows the option part numbers for the supported processors. In a two-node system you must have eight processors, which must all be identical.
Table 3-5 Available processor options for the x3850 X5

  Part number   Feature code   Intel Xeon model   Speed      Cores   L3 cache   GT/s / memory speed(a)   Power (watts)   HT(b)   TB(c)
  49Y4300       4513           X7560              2.26 GHz   8       24 MB      6.4 / 1066 MHz           130 W           Yes     Yes
  49Y4302       4517           X7550              2.00 GHz   8       18 MB      6.4 / 1066 MHz           130 W           Yes     Yes
  59Y6103       4527           X7542              2.66 GHz   6       18 MB      5.86 / 978 MHz           130 W           No      Yes
  49Y4304       4521           E7540              2.00 GHz   6       18 MB      6.4 / 1066 MHz           105 W           Yes     Yes
  49Y4305       4523           E7530(d)           1.86 GHz   6       12 MB      5.86 / 978 MHz           105 W           Yes     Yes
  49Y4306       4525           E7520(d)           1.86 GHz   4       18 MB      4.8 / 800 MHz            95 W            Yes     No
  49Y4301       4515           L7555              1.86 GHz   8       24 MB      5.86 / 978 MHz           95 W            Yes     Yes
  49Y4303       4519           L7545              1.86 GHz   6       18 MB      5.86 / 978 MHz           95 W            Yes     Yes
a. GT/s is giga transfers per second. For an explanation, see 3.7.1, Memory speed on page 31. b. Intel Hyper-Threading Technology c. Intel Turbo Boost Technology d. Scalable to four socket maximum, and therefore cannot be used in a two-node x3850 X5 complex that is scaled with native QPI cables.
With the exception of the E7520, all processors listed in Table 3-5 support Intel Turbo Boost Technology. When a processor is operating below its thermal and electrical limits, Turbo Boost dynamically increases the clock frequency of the processor by 133 MHz on short and regular intervals until an upper limit is reached. See 2.1.2, Turbo Boost Technology on page 9 for more information.
With the exception of the X7542, all processors shown in the table support Intel Hyper-Threading Technology, an Intel technology that is used to improve the parallelization of workloads. When Hyper-Threading is enabled in the BIOS, for each processor core that is physically present, the operating system addresses two. For more information, see 2.1.1, Hyper-Threading Technology on page 8.
All processor options include the heat sink and CPU installation tool. The x3850 X5 includes at least two CPUs as standard. Two CPUs are required to access all seven of the PCIe slots (shown in Figure 3-4 on page 24):
Either CPU 1 or CPU 2 is required for operation of PCIe slots 5-7.
Either CPU 3 or CPU 4 is required for operation of PCIe slots 1-4.
All CPUs are also required to access all memory cards, as explained in 3.7, Memory on page 29. Population guidelines are as follows:
Each CPU requires a minimum of two DIMMs to operate.
All processors must be identical.
Only configurations of two, three, or four processors are supported.
A processor must be installed in socket 1 or 2 for the system to boot successfully.
A processor is required in socket 3 or 4 to use PCIe slots 1-4. See Figure 3-4 on page 24.
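The slot dependencies above can be captured in a small lookup. This is a hypothetical sketch of the rule as stated in the text, not an IBM configuration utility.

```python
# Sketch of the x3850 X5 PCIe slot / processor dependency:
# slots 5-7 are on the bridge served by CPU 1 or 2, and slots 1-4
# are on the bridge served by CPU 3 or 4. Illustration only.

def usable_slots(installed_cpus):
    """installed_cpus: set of populated processor sockets, e.g. {1, 2}."""
    slots = set()
    if installed_cpus & {1, 2}:
        slots |= {5, 6, 7}      # first PCIe bridge
    if installed_cpus & {3, 4}:
        slots |= {1, 2, 3, 4}   # second PCIe bridge
    return sorted(slots)

print(usable_slots({1, 2}))        # [5, 6, 7] -- slots 1-4 unavailable
print(usable_slots({1, 3}))        # [1, 2, 3, 4, 5, 6, 7]
print(usable_slots({1, 2, 3, 4}))  # [1, 2, 3, 4, 5, 6, 7]
```

This also shows why a two-processor configuration in sockets 1 and 2 cannot use slots 1-4: a processor in socket 3 or 4 is required for the second bridge.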
When populating more than two sockets, you must use a QPI Wrap Card Kit (part number 49Y4379), which contains two wrap cards. Consider the X7542 processor for CPU frequency-dependent workloads, because it has the highest core frequency of the available processor models. If high processing capacity is not required for your application but high memory bandwidth is, consider using four processors with fewer cores or a lower core frequency rather than two processors with more cores or a higher core frequency. Having four processors enables all memory channels and maximizes memory bandwidth. This is discussed in 3.7, Memory on page 29.
3.7 Memory
Memory is installed in the x3850 X5 in memory cards. Up to eight memory cards can be in the server; each card holds eight DIMMs. Therefore, the x3850 X5 supports up to 64 DIMMs. Figure 3-10 shows the memory card.
Figure 3-10 Memory card (DIMM connectors 1 through 8)
The memory card contains two memory buffer chips (under the heat sinks shown in Figure 3-10). Standard models contain two or more memory cards. Additional memory cards can be ordered, as listed in Table 3-6.
Table 3-6 IBM System x3850 X5 and x3950 X5 memory card

  Part number   Feature code   Description
  46M0071       5102           IBM x3850 X5 and x3950 X5 Memory Expansion Card
The memory cards are installed in the server as shown in Figure 3-11, which shows memory cards 1 - 8 and processors 1 - 4. Each processor is electrically connected to two memory cards (for example, processor 1 is connected to memory cards 1 and 2).
Table 3-7 shows the memory DIMM options for the x3850 X5.
Table 3-7 Memory ordering information

  Part number   Feature code   Capacity   Description                                    Memory speed(a)   Ranks
  44T1480       3963           1 GB       1x 1GB PC3-10600 CL9 ECC DDR3-1333 LP RDIMM    1333 MHz(b)       Single x8
  44T1481       3964           2 GB       1x 2GB PC3-10600 CL9 ECC DDR3-1333 LP RDIMM    1333 MHz(b)       Dual x8
  46C7448       1701           4 GB       1x 4GB PC3-8500 CL7 ECC DDR3-1066 LP RDIMM     1066 MHz          Quad x8
  46C7482       1706           8 GB       1x 8GB PC3-8500 CL7 ECC DDR3-1066 LP RDIMM     1066 MHz          Quad x8
  46C7483       1707           16 GB      1x 16GB PC3-8500 CL7 ECC DDR3-1066 LP RDIMM    1066 MHz          Quad x4
a. Memory speed is also controlled by the memory bus speed as specified by the processor model selected. The actual memory bus speed is the lower of both the processor memory bus speed and the DIMM memory bus speed. b. Although 1333 MHz memory is supported in the x3850 X5, it runs at a maximum speed of 1066 MHz
Notes: Memory options must be installed in matched pairs. Single options cannot be installed, so the options shown above need to be ordered in quantities of two. The maximum memory speed supported by Xeon 7500 and 6500 (Nehalem-EX) processors is 1066 MHz (1333 MHz is not supported).
Memory configurations and their relative performance (each configuration describes the memory controllers (MCs), DIMMs per channel, and DIMMs per MC used on each processor):

  Configuration (per processor)                                      Relative performance
  1. 2 memory controllers, 2 DIMMs per channel, 8 DIMMs per MC       1.0
  2. 2 memory controllers, 1 DIMM per channel, 4 DIMMs per MC        0.94
  3. 2 memory controllers, 2 DIMMs per channel, 4 DIMMs per MC       0.61
  4. 2 memory controllers, 1 DIMM per channel, 2 DIMMs per MC        0.58
  5. 1 memory controller, 2 DIMMs per channel, 8 DIMMs per MC        0.51
  6. 1 memory controller, 1 DIMM per channel, 4 DIMMs per MC         0.47
  7. 1 memory controller, 2 DIMMs per channel, 4 DIMMs per MC        0.31
  8. 1 memory controller, 1 DIMM per channel, 2 DIMMs per MC         0.29

Figure 3-12 Relative memory performance based on DIMM placement (one processor and two memory cards shown)
In a system with fewer than four processors installed or fewer than eight memory cards installed, there are fewer memory channels, and therefore less bandwidth and lower performance. Another factor that affects memory performance is the speed of the memory bus. On the x3850 X5, the memory bus speed can be 1066 MHz, 978 MHz, or 800 MHz, and is dependent on the model of the processors installed and which particular memory DIMMs are installed:
Each processor has a maximum memory bus speed, as shown in Table 3-5 on page 28.
Each memory DIMM has a maximum memory speed, as shown in Table 3-7 on page 30.
The memory speed of all memory channels is dictated by the slower of (a) the SMI link speed of the processors and (b) the slowest memory DIMM installed in any memory card in the server.
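The speed rule above reduces to taking a minimum. The example speeds come from Tables 3-5 and 3-7; the function itself is a hypothetical sketch for illustration.

```python
# Effective memory bus speed on the x3850 X5: the lower of the
# processor's SMI-dictated speed and the slowest installed DIMM.
# Illustrative sketch; speeds are in MHz.

def effective_memory_speed(processor_speed_mhz, dimm_speeds_mhz):
    return min(processor_speed_mhz, min(dimm_speeds_mhz))

# X7560 (1066 MHz) with DDR3-1333 DIMMs: the platform caps them at 1066.
print(effective_memory_speed(1066, [1333, 1333]))  # 1066
# E7530 (978 MHz) with DDR3-1066 DIMMs: the processor sets the pace.
print(effective_memory_speed(978, [1066, 1066]))   # 978
# One slow DIMM anywhere in the server slows every channel.
print(effective_memory_speed(1066, [1066, 800]))   # 800
```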
Memory mirroring, or full array memory mirroring (FAMM) redundancy, provides the user with a redundant copy of all code and data addressable in the configured memory map. Memory mirroring works within the chipset by writing data to two memory ports on every memory write cycle. Two copies of the data are kept, similar to the way RAID-1 writes to disk. Reads are interleaved between memory ports. The system automatically uses the most reliable memory port, as determined by error logging and monitoring. If errors occur, only the alternate memory port is used until the bad memory is replaced. Because a redundant copy is kept, mirroring results in having only half the installed memory available to the operating system. FAMM does not support asymmetrical memory configurations and has the additional requirement that each port be populated in identical fashion. For example, you must install 2 GB of identical memory equally and symmetrically across the two memory ports to achieve 1 GB of mirrored memory. FAMM enables other enhanced memory features, such as Unrecoverable Error (UE) recovery, and is required for support of memory hot replace. Memory mirroring is independent of the operating system.
Note: Hot-replace memory is not supported on the x3850 X5.
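The capacity cost of mirroring is a straight halving, as this sketch shows. The helper name and error message are invented for the example.

```python
# FAMM keeps a redundant copy of everything, so the operating system
# sees half the installed capacity, and both memory ports must be
# populated identically. Illustrative check only.

def mirrored_capacity_gb(port1_gb, port2_gb):
    if port1_gb != port2_gb:
        raise ValueError("FAMM requires identical population on both ports")
    # Of port1_gb + port2_gb installed, only one copy is addressable.
    return port1_gb

# 2 GB installed (1 GB per port) yields 1 GB of mirrored memory,
# matching the example in the text above.
print(mirrored_capacity_gb(1, 1))    # 1
print(mirrored_capacity_gb(32, 32))  # 32 (of 64 GB installed)
```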
that memory out of the possible array and boot with the remaining installed memory. The system POST will then continue to log the error to the SEL, flagging the affected DIMMs on each subsequent reboot, until the user enters UEFI and re-enables the affected memory rows representing the DIMMs. Power down the server and replace the faulty DIMMs before the memory rows are enabled again. There is a single case where the only bank that is populated in the system is the affected bank. No deterministic method exists to detect that the user has actually replaced the affected DIMMs. Therefore, in this single case, POST clears the error automatically and attempts to boot with the existing memory, under the assumption that the affected DIMMs flagged in the SEL have been replaced. Because real-world runtime-detected UEs do not typically affect the entire DIMM device, booting the affected memory is possible if it was never replaced. In this case, the error might be redetected at a future time.
Memory sparing
For details about memory sparing, see 3.7.4, Memory sparing on page 35.
Note: Redundant bit steering (RBS) is not supported on the x3850 X5, because the integrated memory controller of the Intel Xeon 7500 processors does not support the feature.
3.8 Storage
In this section, we look at internal storage and RAID options for the x3850 X5, with suggestions of where to find details about supported external storage arrays. Topics in this section are as follows:
Internal disks
SAS disk support
IBM eXFlash and SSD disk support
IBM eXFlash price-performance
RAID controllers
External storage
Figure 3-13 Front view of the x3850 X5 showing eight 1.8-inch SAS drives
The SAS backplane uses a short SAS cable (included with part number 59Y6135), and is always controlled by the RAID adapter in the dedicated slot behind the disk cage, never from an adapter in the PCIe slots. The required power/signal Y cable is included with the x3850 X5. Up to two SAS backplanes (each holding up to four disks) can connect to a RAID controller installed in the dedicated RAID slot. Supported RAID controllers are listed in Table 3-10 on page 39. For more information about each RAID controller, see 3.8.5, RAID controllers on page 42.
Table 3-10 RAID controllers compatible with the SAS backplane and SAS disk drives

  Part number   Feature code   Description
  44E8689       3577           ServeRAID BR10i (standard on most models; see 3.3, Models on page 23)
  46M0831       0095           ServeRAID M1015 SAS/SATA Controller
  46M0829       0093           ServeRAID M5015 SAS/SATA Controller
Table 3-11 lists the 2.5-inch SAS 10K and 15K RPM disk drives and the 2.5-inch solid-state drive (SSD) that are supported in the x3850 X5. These drives are supported with the SAS hard disk backplane, 59Y6135.
Table 3-11 Supported 2.5-inch SAS drives and 2.5-inch SSDs

  Part number   Feature code   Description
  42D0632       5537           IBM 146 GB 10K 6Gbps SAS 2.5-inch SFF Slim-HS HDD
  42D0637       5599           IBM 300 GB 10K 6Gbps SAS 2.5-inch SFF Slim-HS HDD
  42D0672       5522           IBM 73 GB 15K 6Gbps SAS 2.5-inch SFF Slim-HS HDD
  42D0677       5536           IBM 146 GB 15K 6Gbps SAS 2.5-inch SFF Slim-HS HDD
  43W7714       3745           IBM 50 GB SATA 2.5-inch SFF Slim-HS High IOPS SSD
The 2.5-inch drives require less space than 3.5-inch drives, consume half the power, produce less noise, seek faster, and offer increased reliability.
Note: As listed in Table 3-11, the 2.5-inch 50 GB SSD is supported with both the standard SAS backplane and the optional SAS backplane, part number 59Y6135. It is not compatible with the 1.8-inch SSD eXFlash backplane, 59Y6213. A typical configuration could be two 2.5-inch SAS disks for the operating system and two High IOPS disks for data. Only the 2.5-inch High IOPS SSD can be used on the SAS backplane; the 1.8-inch disks for the eXFlash cannot be used on the SAS backplane.
external storage. Host bus adapters, switches, controller shelves, disk shelves, cabling, and of course the disks themselves, all carry a cost. They might even require an upgrade to the machine room infrastructure, for example a new rack or racks, additional power lines, or perhaps additional cooling infrastructure. Remember also that storage acquisition cost is only a part of the total cost of ownership (TCO). TCO includes the ongoing cost of management, power, and cooling for the additional storage infrastructure detailed previously. SSDs use only a fraction of the power, generate only a fraction of the heat that spinning disks do, and because they fit in the x3850 X5 chassis, are managed by the server administrator. IBM eXFlash is optimized for a heavy mix of read and write operations, such as transaction processing, media streaming, surveillance, file copy, logging, backup and recovery, and business intelligence. In addition to its superior performance, eXFlash offers superior uptime, with three times the reliability of mechanical disk drives. SSDs have no moving parts to fail, and use Enterprise Wear-Leveling to extend their life even further. All operating systems listed in ServerProven for the x3850 X5 are supported for use with eXFlash. Figure 3-14 shows an x3850 X5 with one of two eXFlash units installed.
The eXFlash SSD backplane uses two long SAS cables, which are included with the backplane option. If two eXFlash backplanes are installed, four cables are required. The eXFlash backplane is always connected to a RAID controller in one of PCIe slots 1 - 4, never from the custom slot used with the SAS backplane. In a system with two eXFlash backplanes installed, two controllers are required in PCIe slots 1 - 4 to control the drives; however, up to four controllers can be used. Use two RAID controllers per backplane to ensure that peak I/O operations per second (IOPS) can be reached. Although use of a single controller results in a functioning solution, peak IOPS can be reduced by approximately 50%. Both RAID and non-RAID controllers can be used. The IBM 6Gb SSD Host Bus Adapter is optimized for read-intensive environments; RAID controllers are a better choice for environments with a mix of read and write activity.
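The controller-sizing guidance above can be expressed as a rough rule of thumb. The helper and the fixed 50% figure are illustrative assumptions taken from the text, not measured results.

```python
# Sizing sketch for eXFlash backplanes: two controllers per backplane
# are recommended for peak IOPS; a single controller per backplane can
# cut peak IOPS by roughly half. Illustrative rule of thumb only.

def relative_peak_iops(backplanes, controllers):
    recommended = backplanes * 2
    if controllers >= recommended:
        return 1.0   # full peak IOPS
    return 0.5       # roughly halved with one controller per backplane

print(relative_peak_iops(1, 2))  # 1.0 -- recommended configuration
print(relative_peak_iops(1, 1))  # 0.5 -- works, but peak IOPS suffers
print(relative_peak_iops(2, 4))  # 1.0 -- two backplanes, four controllers
```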
The failure rate of SSDs is low because, in part, the drives are non-mechanical. The 50 GB High IOPS SSD is a Single Level Cell (SLC) device with Enterprise Wear Leveling and as a consequence of both of these technologies, the additional layer of protection provided by a RAID controller may not always be necessary in every customer environment.
ServeRAID BR10i (standard on most models)
ServeRAID M1015 SAS/SATA Controller
ServeRAID M5015 SAS/SATA Controller
IBM 6Gb SSD Host Bus Adapter
RAID levels 0 and 1 are standard on all models (except 7145-ARx) with the integrated BR10i ServeRAID controller. Model 7145-ARx has no RAID capability standard. All servers, even those that are not standard with the BR10i (model 7145-ARx), include the blue mounting bracket (see Figure 3-15 on page 44) which allows for easy installation of a supported RAID controller in the dedicated x8 PCIe slot behind the disk cage. Only RAID controllers that are supported with the 2.5-inch SAS backplane can be used in this slot. See Table 3-10 on page 39 for a summary of these supported options.
ServeRAID BR10i
The ServeRAID BR10i has the following specifications:
LSI 1068e-based adapter
Two internal mini-SAS SFF-8087 connectors
SAS 3Gbps
PCIe x8 host bus interface
Fixed 64 KB stripe size
Supports RAID-0, RAID-1, and RAID-1E
No battery, no onboard cache
Up to 64 logical volumes
LUN sizes up to 64 TB
Configurable stripe size up to 1 MB
Compliance with Disk Data Format (DDF) configuration on disk (COD)
Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) support
RAID-6, RAID-60, and self-encrypting drive (SED) technology are optional upgrades to the ServeRAID M5015 adapter with the addition of the ServeRAID M5000 Series Advanced Feature Key, part number 46M0930, feature 5106.
This card installs in a special PCIe slot, and is not installed in one of the seven PCIe slots for other expansion cards.
Figure 3-15 ServeRAID M5015 SAS/SATA Controller
RAID 6, RAID 60, and encryption are optional upgrades for the M5015 through the ServeRAID M5000 Series Advanced Feature Key. For details about the adapters, see the following information:
ServeRAID Adapter Quick Reference, TIPS0054
http://www.redbooks.ibm.com/abstracts/tips0054.html
ServeRAID M1015 SAS/SATA Controller for System x, TIPS0740
http://www.redbooks.ibm.com/abstracts/tips0740.html
ServeRAID M5015 and M5014 SAS/SATA Controllers for IBM System x, TIPS0738
http://www.redbooks.ibm.com/abstracts/tips0738.html
All slots are PCI Express 2.0, full height, and are not hot-swap. PCI Express 2.0 has several improvements over PCI Express 1.1 (as implemented in the x3850 M2), the chief benefit being enhanced throughput: PCI Express 2.0 is rated for 500 MBps per lane, whereas PCI Express 1.1 is rated for only 250 MBps per lane. Note the following information about the slots:
Slot 1 can accommodate a double-wide x16 card, but access to slot 2 is then blocked.
Slot 2 is described as x4 (x8 mechanical), sometimes shown as x4 (x8), meaning that the slot is capable of only x4 speed but is physically large enough to accommodate an x8 card. Any x8-rated card physically fits in the slot, but runs at only x4 speed.
Slot 7 has been extended in length to 106 pins, making it a non-standard connector. It still accepts PCI Express x8, x4, and x1 standard adapters, and is the only slot that is compatible with the extended edge connector on the Emulex 10Gb Ethernet Adapter, which is standard with most models.
Slots 5-7, the onboard Broadcom-based dual-port Ethernet chip, and the custom slot for the RAID controller are on the first PCIe bridge and require either CPU 1 or CPU 2 to be installed and operational. Slots 1-4 are on the second PCIe bridge and require CPU 3 or CPU 4 to be installed and operational.
The order in which to add cards to balance bandwidth between the two PCIe controllers is shown in Table 3-16 on page 46; however, this installation order assumes that cards are installed in matched pairs, or that they have similar throughput capabilities.
Table 3-16 Order for adding cards

Installation order   PCIe slot   Slot width      Slot bandwidtha
1                    1           x16 PCIe slot   8 GBps
2                    5           x8 PCIe slot    4 GBps
3                    3           x8 PCIe slot    4 GBps
4                    6           x8 PCIe slot    4 GBps
5                    4           x8 PCIe slot    4 GBps
6                    7           x8 PCIe slot    4 GBps
7                    2           x4 PCIe slot    2 GBps
a. Bandwidth in this column is expressed in gigabytes per second (GBps). A single PCIe 2.0 lane provides a unidirectional bandwidth of 500 MBps.
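Because slot bandwidth scales linearly with lane count, the per-lane rating makes throughput easy to estimate. The following is a minimal sketch (function and constant names are illustrative, not from any IBM tool):

```python
# PCIe 2.0 transfers 500 MBps per lane, per direction (double the
# 250 MBps of PCIe 1.1 as implemented in the x3850 M2).
PCIE2_MBPS_PER_LANE = 500

def slot_bandwidth_gbps(lanes: int) -> float:
    """Unidirectional bandwidth of a PCIe 2.0 slot, in GBps."""
    return lanes * PCIE2_MBPS_PER_LANE / 1000

print(slot_bandwidth_gbps(16))  # x16 slot 1: 8.0 GBps
print(slot_bandwidth_gbps(4))   # x4 slot 2: 2.0 GBps
```

Note that an x8 card placed in slot 2 still gets only the x4 figure, because the slot is wired for x4 operation.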
Two additional 2x4 power connectors are provided on the planar for high-power adapters such as graphics cards. If there is a requirement to use an x16 PCIe card that is not shown as supported in ServerProven, the SPORE process can be engaged. To determine whether a vendor has qualified any x16 cards with the x3850 X5, see IBM ServerProven:
http://www.ibm.com/servers/eserver/serverproven/compat/us/serverproven
If the preferred vendor's logo is displayed, click it to assess the options that the vendor has qualified on the x3850 X5. Caveats on support for such third-party options can be found in 3.10.2, Optional adapters on page 49.
Figure 3-16 Top view of slots 6 and 7, showing that slot 7 is slightly longer than slot 6
The Emulex 10Gb Ethernet Adapter in the x3850 X5 has been customized with a special type of connector called an extended edge connector. The card itself is colored blue instead of green to indicate that it is non-standard and cannot be installed in a standard x8 PCIe slot. At the time of writing, only the x3850 X5 and the x3690 X5 have slots that are compatible with the custom-built Emulex 10Gb Ethernet Adapter shown in Figure 3-17.
Figure 3-17 The Emulex 10Gb Ethernet Adapter has a blue circuit board and a longer connector
The Emulex 10Gb Ethernet Adapter is a customer-replaceable unit (CRU). To replace the adapter (for example, under warranty), order the CRU number as shown in Table 3-17. The table also shows the regular Emulex 10Gb Virtual Fabric Adapter for IBM System x option, which differs only in the connector type (standard x8), and color of the circuit board (green).
Table 3-17 Emulex adapter part numbers

Description                                          Part number   Feature code     CRU number
Emulex 10Gb Ethernet Adapter for x3850 X5            None          Not applicable   49Y4202
Emulex 10Gb Virtual Fabric Adapter for IBM System x  49Y4250       5749             Not applicable
General details about this card can be found in Emulex 10Gb Virtual Fabric Adapter for IBM System x, TIPS0762, available at the following Web address:
http://www.redbooks.ibm.com/abstracts/tips0762.html
Important: Although these cards are functionally identical, the availability of iSCSI and FCoE upgrades for one does not automatically mean availability for both. Check availability of iSCSI and FCoE feature upgrades with your local IBM representative.
The features of the Emulex 10Gb Ethernet Adapter for x3850 X5 are as follows:
Dual-channel, 10 Gbps Ethernet controller
Near line-rate 10 Gbps performance
Two empty SFP+ cages to support either SFP+ SR or twin-ax copper connections: an SFP+ SR link uses an SFP+ SR optical module with LC connectors; an SFP+ twin-ax copper link uses an SFP+ direct attached copper module/cable
Note: Servers that include the Emulex 10Gb Ethernet Adapter do not include transceivers. These must be ordered separately if needed.
TCP/IP stateless offloads
TCP chimney offload
Based on Emulex OneConnect technology
FCoE support as a future feature entitlement upgrade
Hardware parity, CRC, ECC, and other advanced error checking
PCI Express 2.0 x8 host interface
Low-profile form-factor design
IPv4/IPv6 TCP and UDP checksum offload
VLAN insertion and extraction
Support for jumbo frames up to 9000 bytes
Preboot eXecution Environment (PXE) 2.0 network boot support
Interrupt coalescing
Load balancing and failover support
Deployment and management of this and other Emulex OneConnect-based adapters with OneCommand Manager
Interoperable with the BNT 10Gb Top of Rack (ToR) switch for FCoE functions
Interoperable with Cisco Nexus 5000 and Brocade 10Gb Ethernet switches for NIC/FCoE
SFP+ transceivers are not included with the server and must be ordered separately. Table 3-18 lists compatible transceivers.
Table 3-18 Transceiver ordering information

Option number   Feature code   Description
49Y4218         0064           QLogic 10Gb SFP+ SR Optical Transceiver
49Y4216         0069           Brocade 10Gb SFP+ SR Optical Transceiver
Tools such as the Configurations and Options Guide (COG) or SSCT contain information about supported part numbers. A selection of System x tools, including those mentioned here, is available on the following Configuration tools site:
http://www.ibm.com/systems/x/hardware/configtools.html
See the ServerProven site for a complete list of the available options:
http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/
Wherever the list of options above differs from that shown in ServerProven, use ServerProven as the definitive resource. The main function of ServerProven is to show the options that have been successfully tested by IBM with a System x server.
Another useful page on the ServerProven site is the list of vendors. On the home page for ServerProven, click the industry leaders link as shown in Figure 3-18.
You may use the following Web address:
http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/serverproven
This page lists the third-party vendors who have themselves performed testing of their own options with our servers. This support information means that those vendors agree to support the combinations shown on those particular pages.
Tip: To see the tested hardware, click the logo of the vendor. Clicking the About link, below the logo, takes you to a separate about page.
Although IBM supports the rest of the System x server, technical issues traced to the vendor card are in most circumstances directed to the vendor for resolution.
Six strategically located hot-swap, redundant fans, combined with efficient airflow paths, provide highly effective system cooling for the eX5 systems; this design is known as IBM Calibrated Vectored Cooling technology. The fans are arranged to cool three separate zones, with one pair of redundant fans per zone. The fans automatically adjust their speeds in response to changing thermal requirements, depending on the zone, redundancy, and internal temperatures. When the temperature inside
the server increases, the fans speed up to maintain the proper ambient temperature. When the temperature returns to a normal operating level, the fans return to their default speed. All x3850 X5 system fans are hot-swap, except for fan three in the bottom x3850 X5 of a two-node complex, when QPI cables have been used to directly link the two servers.
Electrical input: 100 - 127 V or 200 - 240 V ac, 50 - 60 Hz
Approximate input kilovolt-amperes (kVA):
Minimum: 0.25 kVA
Typical: 0.85 kVA
Maximum: 1.95 kVA (110 V ac)
Maximum: 2.17 kVA (220 V ac)
SMI detection and generation (system interrupts)
Serial port text console redirection
System LED control (power, HDD, activity, alerts, and heartbeat)
The features are as follows:
An embedded Web server, which gives you remote control from any standard Web browser (no additional software is required on the remote administrator's workstation)
A command-line interface (CLI), which the administrator can use from a Telnet session
Secure Sockets Layer (SSL) and Lightweight Directory Access Protocol (LDAP)
Built-in LAN and serial connectivity that supports virtually any network infrastructure
Multiple alerting functions to warn systems administrators of potential problems through e-mail, IPMI PETs, and SNMP
Full details about the functionality and operation of Light Path Diagnostics in this system can be found in the IBM System x3850 X5 Problem Determination and Service Guide available from: http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5083418
Chapter 4. IBM BladeCenter HX5
4.1 Introduction
The IBM BladeCenter HX5 supports up to four Intel Xeon Nehalem EX four-core, six-core, or eight-core processors. The HX5 supports up to 32 DIMM modules and up to four solid state drives (SSDs). Figure 4-1 shows the following two configurations:
Single-wide HX5 with two processor sockets and 16 DIMM sockets
Double-wide HX5 with four processor sockets and 32 DIMM sockets
Note: The HX5 can support the attachment of a MAX5 memory expansion unit, which can increase the number of DIMM sockets available to the system. The MAX5 for the IBM BladeCenter HX5 is planned to be announced later in 2010.
Figure 4-1 IBM BladeCenter HX5 two-socket and four-socket blade servers
L3 cache: 12 MB, 18 MB, or 24 MB (shared between cores)
Memory speed: 978 MHz or 800 MHz (dependent on the processor SMI link speed)
HX5 2-socket: 16 DIMM slots / 128 GB maximum using 8 GB DIMMs
HX5 4-socket: 32 DIMM slots / 256 GB maximum using 8 GB DIMMs
HX5 2-socket:
Optional 1.8-inch solid state drives (SSDs); non-hot-swap (require an additional SSD carrier)
Two drives; up to 100 GB using two 50 GB SSDs
One CIOv slot and one CFFh slot

HX5 4-socket:
Optional 1.8-inch solid state drives (SSDs); non-hot-swap (require an additional SSD carrier)
Four drives; up to 200 GB using four 50 GB SSDs
Two CIOv slots and two CFFh slots
The components on the system board of the HX5 are shown in Figure 4-2.
4.5 Models
The models of the BladeCenter HX5 are shown in Table 4-4.
Table 4-4 Models of the type 9782

Server modelsa   Intel Xeon model and cores   Clock speed (GHz)   Scalable to four socket   Width (mm)   Includes 10GbE cardb   Max memory speed (MHz)   Memory (std GB / max GBc)
                 E7520 4C                     1.86                Yes                       30           No                     800                      2x 4 / 128
                 L7555 8C                     1.86                Yes                       30           No                     978                      2x 4 / 128
                 E7530 6C                     1.86                Yes                       30           No                     978                      2x 4 / 128
                 E7540 6C                     2.0                 Yes                       30           No                     978                      2x 4 / 128
                 E7540 6C                     2.0                 Yes                       30           Yes                    978                      2x 4 / 128
a. This column lists worldwide, generally available variant (GAV) model numbers. They are not orderable as listed and must be modified by country. The U.S. GAV model numbers use the following nomenclature: xxU. For example, the U.S. orderable part number for 7870-A2x is 7870-A2U. See the product-specific Official IBM announcement letter for other country-specific GAV model numbers. b. Column refers to Emulex Virtual Fabric Adapter Expansion Card (CFFh). c. Maximum system memory capacity of 128 GB is based on using 16x 8 GB memory DIMMs.
The HX5 block diagram shows each Intel Xeon processor connected to four memory buffers through SMI links, with eight DDR3 DIMMs per processor; QPI links between the processors and the Intel I/O hub; the ICH10 south bridge with USB and the TPM 1.2; the IMM with its flash and video memory; the onboard Broadcom 5709S Ethernet controller; and the CFFh and CIOv I/O expansion connectors.
Figure 4-4 on page 64 shows the block diagram of how the Speed Burst Card attaches to the system.
Note: The Speed Burst Card is not required for an HX5 with only one processor installed. It is also not needed for a 2-node configuration (a separate card is available for 2-node, as discussed in 4.8, Scalability on page 65).
Figure 4-4 Block diagram showing the Speed Burst Card attached to the QPI links of CPU 1 and CPU 2
Figure 4-5 shows where the Speed Burst Card is installed on the HX5.
4.8 Scalability
In this section, we explain how the HX5 can be expanded to increase the number of processors. The HX5 blade architecture allows for a number of scalable configurations, including the use of a MAX5 memory expansion blade, but the blade currently supports two configurations:
A single HX5 server with two processor sockets: a standard 30 mm blade, also known as a single-wide server.
Two HX5 servers connected together to form a single-image four-socket server: a 60 mm blade, also known as a double-wide server.
Both configurations rely on native QPI scaling. In the 2-node configuration, the two HX5 servers are physically connected together and a 2-node scalability card is attached to the side of the blades, providing the path for the QPI scaling. The two servers are connected together using a 2-node scalability card, as shown in Figure 4-6. The scalability card is immediately adjacent to the processors and provides a direct connection between the processors in the two nodes.
Figure 4-6 Two-node HX5 with the 2-node Scalability Card indicated
With a single HX5 two-socket server, the scalability card on the side of the blade is a single-node loopback card called the Speed Burst Card, and the mechanical package allows the blade to reside in a single slot in the BladeCenter chassis. See 4.4, Chassis support on page 61 for details. The double-wide configuration, shown in Figure 4-6, is two HX5 servers connected together. This configuration consumes two blade slots and has the 2-node scalability card attached. The scaling is performed through QPI. The 2-node scalability card does not ship with the server and must be ordered separately, as listed in Table 4-6 on page 66.
Table 4-6 HX5 2-Node Scalability Kit

Part number   Feature code   Description
46M6975       1737           IBM HX5 2-Node Scalability Kit
The HX5 2-Node Scalability Kit contains the 2-node scalability card plus the necessary hardware to physically attach the two HX5 servers together. Figure 4-7 shows the block diagram of a 2-node HX5 and the location of the HX5 2-Node Scalability Card.
Memory speed depends on the SMI link speed of the processor (listed in Table 4-7 as a GT/s value) and on the limits of the low-power scalable memory buffers used in the HX5:
If the SMI link speed is 6.4 GT/s, the memory in the HX5 can operate at a maximum of 978 MHz.
If the SMI link speed is 5.86 GT/s, the memory in the HX5 can operate at a maximum of 978 MHz.
If the SMI link speed is 4.8 GT/s, the memory in the HX5 can operate at a maximum of 800 MHz.
These memory speeds are indicated in Table 4-8. When installing two processors in a standalone HX5 server, add the IBM HX5 1-Node Speed Burst Card, 59Y5889. For details about this feature, see 4.4, Chassis support on page 61. The option information is listed in Table 4-9 on page 68.
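The SMI-to-memory-speed limits above reduce to a simple lookup. The following sketch captures them (names are illustrative; the values come from the list above):

```python
# Maximum HX5 memory speed (MHz) for each processor SMI link speed (GT/s),
# per the limits of the low-power scalable memory buffers in the HX5.
SMI_TO_MAX_MEMORY_MHZ = {6.4: 978, 5.86: 978, 4.8: 800}

def max_memory_speed(smi_gts: float) -> int:
    """Maximum memory bus speed for a given SMI link speed."""
    return SMI_TO_MAX_MEMORY_MHZ[smi_gts]

print(max_memory_speed(6.4))  # 978
print(max_memory_speed(4.8))  # 800
```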
Table 4-9 HX5 1-Node Speed Burst Card

Part number   Feature code   Description
59Y5889       1741           IBM HX5 1-Node Speed Burst Card
4.10 Memory
The HX5 2-socket has eight DIMM sockets per processor (a total of 16 DIMM sockets), supporting up to 128 GB of memory when using 8 GB DIMM modules. The HX5 uses registered Double Data Rate-3 (DDR-3) very-low-profile (VLP) DIMMs and provides ECC and advanced Chipkill memory protection. Table 4-10 lists the memory options for the HX5. Note: Memory must be installed in pairs of two identical DIMMs. The options in the table, however, are for single DIMMs.
Table 4-10 Memory options

Part number   Feature code   Capacity   Description                                        Rank   Speed (MHz)
44T1486       1916           1x 2 GB    PC3-10600 ECC DDR3 1333 MHz VLP (2Rx8, 1.5V)       Dual   1333
44T1596       1908           1x 4 GB    PC3-10600 CL9 ECC DDR3 1333 MHz VLP (2Rx8, 1.5V)   Dual   1333
46C7499       1917           1x 8 GB    PC3-8500 CL7 ECC DDR3 1066 MHz VLP (4Rx8, 1.5V)    Quad   1066
Although the speed of the supported memory DIMMs is as high as 1333 MHz, the actual memory bus speed is a function of the processor and the memory buffers. In the case of the Xeon 7500 and 6500 processors, memory speed is 978 MHz or lower. See Table 4-8 on page 67 for specifics. Each processor controls eight DIMMs and four memory buffers as shown in Figure 4-8. To make use of all 16 DIMM sockets, both processors must be installed. If only one processor is installed, only eight DIMM sockets can be used.
Figure 4-8 Portion of the HX5 block diagram showing the processors, memory buffers and DIMMs
The physical locations of the 16 memory DIMM sockets are shown in Figure 4-9.
Figure 4-9 Physical locations of the 16 DIMM sockets on the HX5 system board (DIMMs 1 - 8 around microprocessor 1, and DIMMs 9 - 16 around microprocessor 2)
Memory mirroring
Memory mirroring works much like disk mirroring and provides high server availability in the event of a memory DIMM failure. It is roughly equivalent to RAID-1 in disk arrays, in that memory is divided into two halves and one half mirrors the other. If 8 GB is installed, the operating system sees 4 GB after memory mirroring is enabled on the system. Mirroring is handled at the hardware level through UEFI, and no operating system support is required. When memory mirroring is enabled, the memory of a DIMM quadrant (a grouping of four DIMMs, such as DIMMs 1-4, 5-8, and so on) of a CPU is duplicated onto the memory of the other DIMM quadrant of that CPU.
Note: Memory must be identical across the two CPUs to enable memory mirroring.
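Because one DIMM quadrant duplicates the other, the usable capacity is always half of the installed capacity. A trivial illustration:

```python
def usable_after_mirroring(installed_gb: float) -> float:
    # One DIMM quadrant is duplicated onto the other, so the operating
    # system sees half of the installed memory capacity.
    return installed_gb / 2

print(usable_after_mirroring(8))   # 4.0 (8 GB installed -> 4 GB visible)
print(usable_after_mirroring(128)) # 64.0
```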
Chipkill
Chipkill memory technology, an advanced form of error checking and correcting (ECC) from IBM, is available for the HX5 blade. Chipkill protects the memory in the system from any single memory chip failure. It also protects against multi-bit errors from any portion of a single memory chip. The HX5 does not support redundant bit steering, because the integrated memory controller of the Intel Xeon 7500 processors does not support the feature.
Lock step
HX5 memory operates in lock step mode. Lock step is a memory protection feature that involves the pairing of two memory DIMMs. The paired DIMMs perform the same operations, and the results are compared. If any discrepancies exist between the results, a memory error is signaled. Lock step mode gives a maximum of 64 GB of usable memory with one CPU installed, and 128 GB of usable memory with two CPUs installed (using 8 GB DIMMs). Memory must be installed in pairs of two identical DIMMs per processor. Although the DIMM pairs installed can be of different sizes, the pairs must be of the same speed.
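Conceptually, the lock step comparison behaves like the following sketch (a pure illustration; the real comparison happens in the memory controller hardware, not in software):

```python
def lockstep_read(word_a: int, word_b: int) -> int:
    # Paired DIMMs perform the same operation; any discrepancy between
    # the two results is signaled as a memory error instead of being
    # returned silently as bad data.
    if word_a != word_b:
        raise RuntimeError("lock-step mismatch: memory error signaled")
    return word_a

print(lockstep_read(0xCAFE, 0xCAFE))  # 51966 (results agree, read succeeds)
```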
Memory sparing
With memory sparing, when a DIMM fails, inactive memory is automatically brought online until the failing DIMM is replaced. Sparing provides a degree of redundancy in the memory subsystem, but not to the extent that mirroring does. In contrast to mirroring, sparing leaves more memory available to the operating system. In sparing mode, the trigger for fail-over is a preset threshold of correctable errors. When this threshold is reached, the content is copied to its spare. The failed DIMM or rank is then taken offline, and the spare counterpart is activated for use. The HX5 supports DIMM sparing, and this feature is enabled in UEFI. Rank sparing is not supported on the HX5. With DIMM sparing, a DIMM pair within a quadrant can be used as a spare for the other DIMM pair within the quadrant. The DIMM pairs must be identical for memory sparing to be enabled. The size of the DIMMs used for sparing is subtracted from the usable capacity presented to the operating system.
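The fail-over trigger described above amounts to counting correctable errors against a threshold. The following sketch models that behavior (the class name and threshold value are illustrative, not the UEFI implementation):

```python
class SparedDimmPair:
    """Illustrative model of DIMM sparing: correctable errors accumulate
    until a preset threshold is reached, then content is copied to the
    spare pair and the failing pair is taken offline."""

    def __init__(self, threshold: int = 8):  # threshold value is made up
        self.threshold = threshold
        self.errors = 0
        self.spare_active = False

    def correctable_error(self) -> None:
        self.errors += 1
        if self.errors >= self.threshold and not self.spare_active:
            # Copy contents to the spare pair, offline the failing pair.
            self.spare_active = True

pair = SparedDimmPair(threshold=3)
for _ in range(3):
    pair.correctable_error()
print(pair.spare_active)  # True: fail-over has occurred
```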
Table 4-11 DIMM installation for the HX5
The table shows which DIMM sockets to populate for each configuration: with one processor installed, 2, 4, 6, or 8 DIMMs are installed in pairs spread across the four memory buffers of processor 1; with two processors installed, 2 to 16 DIMMs are installed in pairs balanced across the buffers of both processors.
In a two-node (four-socket) configuration with two HX5 servers, follow the memory installation sequence in both nodes. Memory must be populated so that each processor in the configuration has a balanced amount. For best performance, use the following general guidelines:
Install as many DIMMs as possible. You get the best performance by installing DIMMs in every socket. Each processor must have identical amounts of RAM.
Spread the memory DIMMs across all memory buffers. In other words, install one DIMM on each memory buffer before installing a second DIMM on any buffer. See Table 4-11 for DIMM placement.
Install memory DIMMs in order of DIMM size, largest DIMMs first, then the next largest, and so on. Placement must follow the DIMM socket installation shown in Table 4-11.
To maximize memory performance, select a processor with the highest memory bus speed (as listed in Table 4-8 on page 67). The lower of the processor's memory bus speed and the DIMM speed determines how fast the memory bus can operate, and every memory bus operates at this speed.
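The "largest pairs first, spread across buffers" guideline can be sketched as a round-robin assignment. This is illustrative only; the authoritative socket-by-socket order is given in Table 4-11:

```python
def placement_order(pair_sizes_gb, buffers=4):
    """Assign identical-DIMM pairs to a processor's memory buffers,
    largest pairs first, one pair per buffer before doubling up."""
    order = []
    for i, size in enumerate(sorted(pair_sizes_gb, reverse=True)):
        order.append((i % buffers, size))  # (buffer index, pair size in GB)
    return order

print(placement_order([4, 8, 8, 4]))  # [(0, 8), (1, 8), (2, 4), (3, 4)]
```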
4.11 Storage
The storage system on the HX5 blade is based on the SSD Expansion Card for IBM BladeCenter HX5, which contains an LSI 1064E SAS controller and two 1.8-inch micro SATA drive connectors. The SSD Expansion Card allows two 1.8-inch solid state drives (SSDs) to be attached. If two SSDs are installed, the HX5 supports RAID-0 or RAID-1 capability. Installation of SSDs in the HX5 requires the SSD Expansion Card for IBM BladeCenter HX5; only one SSD Expansion Card is needed for either one or two SSDs. Ordering details are listed in Table 4-12.
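The usable capacity of the two-SSD array follows directly from the RAID level: RAID-0 stripes across both drives, while RAID-1 mirrors one onto the other. A small sketch (function name is illustrative):

```python
def usable_capacity_gb(drive_gb: int, drives: int, raid_level: int) -> int:
    # RAID-0 stripes across all drives; RAID-1 mirrors, halving capacity.
    if raid_level == 0:
        return drive_gb * drives
    if raid_level == 1:
        return drive_gb * drives // 2
    raise ValueError("the HX5 SSD configuration supports RAID-0 and RAID-1")

print(usable_capacity_gb(50, 2, 0))  # 100 GB striped
print(usable_capacity_gb(50, 2, 1))  # 50 GB mirrored
```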
Table 4-12 SSD Expansion Card for IBM BladeCenter HX5

Part number   Feature code   Description
46M6908       5765           SSD Expansion Card for IBM BladeCenter HX5
Figure 4-10 SSD Expansion Card for the HX5 (left: top view with one SSD installed in drive bay 0; right: underside view showing the PCIe connector and LSI 1064E controller)
The SSD Expansion Card can be installed in the HX5 in combination with a CIOv I/O expansion card and CFFh I/O expansion card, as shown in Figure 4-11 on page 73.
Figure 4-11 Placement of SSD expansion card in combination with a CIOv card and CFFh card
Figure 4-12 Connecting a SAS Connectivity Card and external SAS solution
4.12.1 CIOv
The CIOv I/O expansion connector provides I/O connections through the midplane of the chassis to modules located in bays 3 and 4 of a supported BladeCenter chassis. The CIOv slot is a second-generation PCI Express 2.0 x8 slot. A maximum of one CIOv I/O expansion card is supported per HX5. A CIOv I/O expansion card can be installed on a blade server at the same time as a CFFh I/O expansion card is installed in the blade.
Table 4-14 lists the CIOv expansion cards that are supported in the HX5.
Table 4-14 Supported CIOv expansion cards

Part number   Feature code   Description
44X1945       1462           QLogic 8Gb Fibre Channel Expansion Card (CIOv)
46M6065       3594           QLogic 4Gb Fibre Channel Expansion Card (CIOv)
46M6140       3598           Emulex 8Gb Fibre Channel Expansion Card (CIOv)
43W4068       1041           SAS Connectivity Card (CIOv)
44W4475       1039           Ethernet Expansion Card (CIOv)
See the IBM ServerProven compatibility Web site for the latest information about the expansion cards supported by the HX5:
http://ibm.com/servers/eserver/serverproven/compat/us/
CIOv expansion cards are installed in the CIOv slot in the HX5 2-socket, as shown in Figure 4-13.
Figure 4-13 The HX5 type 7872 showing the CIOv I/O Expansion card position
4.12.2 CFFh
The CFFh I/O expansion connector provides I/O connections to high-speed switch modules that are located in bays 7, 8, 9, and 10 of a BladeCenter H or BladeCenter HT chassis, or to switch bay 2 in a BladeCenter S chassis. The CFFh slot is a second-generation PCI Express x16 (PCIe 2.0 x16) slot. A maximum of one CFFh I/O expansion card is supported per blade server. A CFFh I/O expansion card can be installed on a blade server at the same time as a CIOv I/O expansion card is installed in the server. Table 4-15 on page 76 lists the supported CFFh I/O expansion cards.
Table 4-15 Supported CFFh expansion cards

Part number   Feature code   Description
39Y9306       2968           QLogic Ethernet and 4Gb Fibre Channel Expansion Carda
44X1940       5485           QLogic Ethernet and 8Gb Fibre Channel Expansion Card
46M6001       0056           2-Port 40Gb InfiniBand Expansion Card
42C1830       3592           QLogic 2-port 10Gb Converged Network Adapter
44W4465       5479           Broadcom 4-port 10Gb Ethernet CFFh Expansion Carda
44W4466       5489           Broadcom 2-Port 10Gb Ethernet CFFh Expansion Carda
46M6164       0098           Broadcom 10Gb 4-port Ethernet Expansion Card
46M6168       0099           Broadcom 10Gb 2-port Ethernet Expansion Card
49Y4235       5755           Emulex Virtual Fabric Adapter
44W4479       5476           2/4 Port Ethernet Expansion Card
See the IBM ServerProven compatibility Web site for the latest information about the expansion cards supported by the HX5: http://ibm.com/servers/eserver/serverproven/compat/us/ CFFh expansion cards are installed in the CFFh slot in the HX5, as shown in Figure 4-14.
Figure 4-14 The HX5 type 7872 showing the CFFh I/O Expansion card position
A CFFh I/O expansion card requires that a supported high speed I/O module or a Multi-switch Interconnect Module is installed in bay 7, 8, 9, or 10 of the BladeCenter H or BladeCenter HT chassis.
In a BladeCenter S chassis, the CFFh I/O Expansion Card requires a supported switch module in bay 2. When used in a BladeCenter S chassis, a maximum of two ports are routed from the CFFh I/O Expansion Card to the switch module in bay 2. See the IBM BladeCenter Interoperability Guide for the latest information about the switch modules that are supported with each CFFh I/O expansion card: http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5073016
4.13.1 UEFI
The HX5 2-socket uses an integrated Unified Extensible Firmware Interface (UEFI) next-generation BIOS. Capabilities include:
Human-readable event logs; no more beep codes
A complete setup solution, by allowing adapter configuration functions to be moved into UEFI
Complete out-of-band coverage by the Advanced Settings Utility to simplify remote setup
Using all of the features of UEFI requires a UEFI-aware operating system and adapters. UEFI is fully backward compatible with BIOS. For more information about UEFI, see the IBM white paper, Introducing UEFI-Compliant Firmware on IBM System x and BladeCenter Servers, available from:
http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5083207
Part number 41Y8278 (feature code 1776)
As shown in Figure 4-15, the flash drive plugs into the Hypervisor Interposer, which in turn is attached to the system board near the processors. The Hypervisor Interposer is included as standard with the HX5.
Figure 4-16 shows the AMM status page with two HX5 blades in slots 6 and 7 with a scalability card attached to the two blades. When these blades are in scaled mode, the power buttons, video buttons, and media tray buttons all synchronize.
Figure 4-17 on page 81 shows the two HX5 blades from the AMM Web interface in an unpartitioned state. As the Available actions menu indicates, these two blades can be selected and a new partition can be created, enabling the scaling between these blades.
Figure 4-18 on page 82 shows the result of creating a partition using the two HX5 blades. The mode is shown as Partition, and the two blades are scaled together as a dual-node, 4-socket system. The menu shows the available options that can be performed on the current partition:
Power Off Partition: powers off all blades in the partition (in this example, blade 6 and blade 7).
Power On Partition: powers on all blades in the partition.
Remove Partition: removes the partitioning of the selected partitions. The nodes in those partitions are then listed in the Unassigned Nodes section.
Toggle Stand alone / Partition Mode: keeps the selected partition in place, but moves the nodes into either a set of stand-alone blades or a single partitioned system. The mode in Figure 4-18 on page 82 shows the current state, which can be either Partition or Stand-alone.
Figure 4-19 shows the two HX5 blades configured in Stand-alone mode. In this mode, the blades boot separately and can run two separate operating systems.
The flexible partitioning of HX5 systems through the AMM can also be scripted by using the AMM's command-line interface. This approach allows for scheduled partitioning or for mass configuration of partitions. Example use cases for flexible partitioning are as follows:

A single hardware configuration is designed at the rack level. It includes scalability cards for all of the HX5 blades, but the solution using the rack design might or might not require scaling.

A single hardware configuration can be designed and reused multiple times for many separate workloads, with scaling turned on or off on one or more of the HX5 blades, depending on the needs and requirements of each solution.

Repurposing of hardware. Suppose a given software application is currently unable to take advantage of anything larger than a single-node system, but a future version of the software is developed to take advantage of dual-node capabilities. The HX5 blades are initially set up in stand-alone mode; when the software is upgraded, the blades are repurposed to run in a dual-node configuration.
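As a sketch of what such scripting might look like, the following Python fragment generates a command sequence for partitioning pairs of HX5 bays. The "partition create" command and its arguments are hypothetical placeholders, not actual AMM CLI syntax; consult the AMM command-line interface reference for the real commands.

```python
# Sketch: generating a command script to partition pairs of HX5 bays.
# The "partition create" command shown is a hypothetical placeholder;
# the real AMM CLI syntax is documented in the AMM command reference.

def partition_script(bay_pairs, scaled=True):
    """Return the list of CLI commands for the given bay pairs.

    bay_pairs -- list of (bay_a, bay_b) tuples joined by a scalability card
    scaled    -- True for dual-node partitions, False for stand-alone blades
    """
    mode = "partition" if scaled else "standalone"
    return ["partition create -bays %d,%d -mode %s" % (a, b, mode)
            for a, b in bay_pairs]

# One rack design, reused for two different workloads:
print(partition_script([(6, 7)], scaled=True))    # scaled dual-node system
print(partition_script([(6, 7)], scaled=False))   # two stand-alone blades
```

Generating the script rather than typing commands by hand is what makes the rack-level reuse scenario above practical: the same function covers every HX5 pair in the rack, with scaling toggled per workload.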
Chapter 5.
Systems management
The eX5 hardware platform includes an entire family of systems management software applications and utilities to help with the management and maintenance of the systems. This chapter provides an overview of these applications. We cover two specific systems management areas: embedded firmware and external applications. The embedded firmware includes the Unified Extensible Firmware Interface, the Integrated Management Module, and the embedded diagnostics. The external applications include ServerGuide, UXSPI, Bootable Media Creator, DSA, IBM Systems Director, Active Energy Manager, BladeCenter Open Fabric Manager, VM Control, Network Control, Storage Control, and TPM for OSD.

The chapter contains the following topics:
5.1, Supported versions on page 86
5.2, Embedded firmware on page 86
5.3, UpdateXpress on page 90
5.4, Deployment tools on page 90
5.5, Configuration utilities on page 93
5.6, IBM Dynamic Systems Analysis on page 94
5.7, IBM Systems Director on page 95

The IBM Systems systems management home page is:
http://www.ibm.com/systems/management/
For further information, including links and user guides, see the IBM ToolsCenter Web page: http://www.ibm.com/support/docview.wss?uid=psg1TOOL-CENTER
No cryptic event logs
No beep codes; these errors are covered completely by light path diagnostics
No limits on the number of adapter cards, and no 1801 resource errors
Enables the OS to reduce fatal errors through new cache-writeback-retry capabilities
Enables multinode power capping
Ability to move adapter configuration into the main UEFI configuration (all configuration can be accessed through the F1 setup menu)
Ability to run in 64-bit native mode
Complete support for BIOS-enabled operating systems

Using all of the features that UEFI offers requires a UEFI-aware operating system and adapters; however, UEFI is fully backward compatible with BIOS. The following operating systems are UEFI aware:
Microsoft Windows Server 2008 x64
SUSE Linux Enterprise Server 11
Red Hat Enterprise Linux 6

Figure 5-1 shows the UEFI boot panel.
Figure 5-2 shows the initial setup panel for UEFI, which is reached by pressing the F1 key during the boot process. From this panel, you can view system information, change system settings, change the date and time, change the boot options, view the event log, or add and remove system passwords.
The System Summary panel, shown in Figure 5-3, is where you can find the model and serial number information, as well as information about the processor speeds.
Figure 5-4 shows the System Settings menu. From here, you can drill down into the sub-menus to perform additional configuration of the eX5 system.
For more information about UEFI, see the IBM white paper, Transitioning to UEFI and IMM, which is available from the following Web address: http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5079769
Is used with IBM Systems Director to provide secure alerts and status, helping to reduce unplanned outages.
Uses standards-based alerting, which enables upward integration into a wide variety of enterprise management systems.

For more information about the IMM, see the IBM white paper, Transitioning to UEFI and IMM, available at the following Web address:
http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5079769
5.3 UpdateXpress
IBM UpdateXpress can help reduce the total cost of ownership of IBM eX5 hardware by providing an effective and simple way to update device drivers, server firmware, and the firmware of supported options. UpdateXpress is available for download at no charge.

UpdateXpress consists of the UpdateXpress System Pack Installer (UXSPI) and the UpdateXpress System Packs (UXSPs). The UXSPI can automatically download the appropriate UXSPs from the IBM Web site, or it can be pointed to a previously downloaded UXSP. The UpdateXpress System Pack Installer runs under an operating system to update both the firmware and the device drivers of the system. The currently supported operating systems and distributions are as follows:
Microsoft Windows Server 2000
Microsoft Windows Server 2003
Microsoft Windows Server 2008
SUSE Linux Enterprise Server 11
SUSE Linux Enterprise Server 10
SUSE Linux Enterprise Server 9
Red Hat Enterprise Linux 5
Red Hat Enterprise Linux 4
Red Hat Enterprise Linux 3

For additional information, including downloading the tools and updates, go to the following Web address:
http://www.ibm.com/support/docview.wss?uid=psg1SERV-XPRESS

The Web site also has a link to the IBM UpdateXpress System Pack Installer User's Guide.
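As an illustration of how UXSPI might be driven from a script, the fragment below assembles an update invocation. The executable name and the flags used here are placeholders invented for this sketch, not documented UXSPI syntax; see the UpdateXpress System Pack Installer User's Guide for the real options.

```python
# Sketch: assembling a UXSPI update invocation. The executable name and
# the --firmware/--drivers/--local flags are illustrative placeholders;
# check the UXSPI User's Guide for the actual command-line options.

def build_uxspi_command(uxsp_dir=None, firmware=True, drivers=True):
    """Return the argument list for an update run.

    uxsp_dir -- path to a previously downloaded UXSP; when None, the
                installer fetches the appropriate UXSP from the IBM Web site
    """
    cmd = ["uxspi", "update"]
    if firmware:
        cmd.append("--firmware")
    if drivers:
        cmd.append("--drivers")
    if uxsp_dir:
        # point at a previously downloaded UXSP instead of downloading
        cmd += ["--local", uxsp_dir]
    return cmd

print(build_uxspi_command())                      # download from IBM
print(build_uxspi_command(uxsp_dir="/opt/uxsp"))  # use a local UXSP
```

Building the argument list as data, rather than a shell string, makes it straightforward to hand off to subprocess-style execution across the supported Windows and Linux systems.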
Table 5-2 Deployment tool usage

Update firmware on a system prior to installing an operating system: Bootable Media Creator
Update firmware on a system that already has an operating system installed: UpdateXpress System Pack Installer
Install a Windows operating system: ServerGuide
Install a Linux operating system: IBM ServerGuide Scripting Toolkit
Install a Windows or Linux operating system in an unattended mode: IBM ServerGuide Scripting Toolkit
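Table 5-2 reduces to a simple lookup. The following Python sketch encodes it with task keys chosen for this example:

```python
# Table 5-2 as a lookup: which deployment tool fits which task.
# The dictionary keys are short labels invented for this sketch.
DEPLOYMENT_TOOLS = {
    "update firmware, no OS installed":    "Bootable Media Creator",
    "update firmware, OS installed":       "UpdateXpress System Pack Installer",
    "install Windows":                     "ServerGuide",
    "install Linux":                       "IBM ServerGuide Scripting Toolkit",
    "install Windows or Linux unattended": "IBM ServerGuide Scripting Toolkit",
}

print(DEPLOYMENT_TOOLS["install Windows"])  # ServerGuide
```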
5.4.2 ServerGuide
IBM ServerGuide is an installation assistant that can simplify the process of installing a Windows operating system and configuring an eX5 server. ServerGuide can also help with the installation of the latest device drivers and other system components.
ServerGuide can accelerate and simplify the installation of eX5 servers in the following ways:
Assists with installing Windows-based operating systems and provides updated device drivers based on the detected hardware.
Reduces the rebooting that is required during hardware configuration and Windows operating system installation, allowing you to get your eX5 server up and running sooner.
Provides a consistent server installation using IBM best practices for installing and configuring an eX5 server.
Provides access to additional firmware and device drivers that might not be applied at install time, such as for adapter cards that are added to the system later.

ServerGuide can be downloaded from the IBM ServerGuide Web site:
http://www.ibm.com/support/docview.wss?uid=psg1SERV-GUIDE
Microsoft Windows Server 2008, Standard, Enterprise, Datacenter, and Web Editions
Microsoft Windows Server 2008 x64, Standard, Enterprise, Datacenter, and Web Editions
Microsoft Windows Server 2008, Standard, Enterprise, and Datacenter Editions without Hyper-V
Microsoft Windows Server 2008 x64, Standard, Enterprise, and Datacenter Editions without Hyper-V
Microsoft Windows Server 2008 R2 x64, Standard, Enterprise, Datacenter, and Web Editions

To download the Scripting Toolkit or the IBM ServerGuide Scripting Toolkit User's Reference, go to the IBM ServerGuide Scripting Toolkit Web site:
http://www.ibm.com/support/docview.wss?uid=psg1SERV-TOOLKIT
Modify the iSCSI boot settings. To modify the iSCSI settings through ASU, you must first manually configure the iSCSI settings through the server setup utility.
Remotely modify all of the settings through an Ethernet connection.

Download the latest version and the Advanced Settings Utility User's Guide from the Advanced Settings Utility Web site:
http://www.ibm.com/support/docview.wss?uid=psg1TOOL-ASU
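A scripted use of ASU might look like the following Python sketch, which builds "asu set" command strings. The setting names and the remote-connection flag shown here are placeholders for illustration; list the real setting names on the target system and check the ASU User's Guide for the actual remote-access options.

```python
# Sketch: building Advanced Settings Utility (ASU) command strings.
# The setting names and the --host flag below are illustrative
# placeholders; consult the ASU User's Guide for the real syntax.

import shlex

def asu_set(setting, value, host=None):
    """Build an 'asu set' command; host selects a remote system over Ethernet."""
    cmd = ["asu", "set", setting, value]
    if host:
        cmd += ["--host", host]   # remote-connection flag is an assumption
    return " ".join(shlex.quote(part) for part in cmd)

# Hypothetical iSCSI boot settings, modified after the initial manual
# configuration through the server setup utility:
print(asu_set("iSCSI.InitiatorName", "iqn.example:host1"))
print(asu_set("iSCSI.TargetIP", "192.0.2.10", host="10.0.0.5"))
```

Quoting each argument with shlex.quote keeps the generated command safe to paste into a shell even if a value contains spaces.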
and installed on a system where persistent use of DSA is required. The DSA Portable Edition can be downloaded and run without modifying any system files or configurations. The following operating systems and distributions are supported:
Microsoft Windows Server 2008
Microsoft Windows Server 2003 or 2003 R2
Red Hat Enterprise Linux 3
Red Hat Enterprise Linux 4
Red Hat Enterprise Linux 5
SUSE Linux Enterprise Server 9
SUSE Linux Enterprise Server 10
SUSE Linux Enterprise Server 11
VMware 3.5
VMware 4.0

Download the latest version of the software and the Dynamic System Analysis Installation and User's Guide from the IBM Dynamic System Analysis (DSA) Web address:
http://www.ibm.com/support/docview.wss?uid=psg1SERV-DSA
Network-sensitive image replication
Image replication between two separate TPMfOSD servers can be accomplished through a scheduled replication where the bandwidth is controlled, or a series of command-line export utilities can be used to produce a differences file that is then sent to the subordinate TPMfOSD server.

Redeployment
This feature provides the ability to create a hidden partition on the server that can be used to perform a full restoration of the reference image through a boot option on the server.

For additional information about TPMfOSD, see the following publications:
Architecting a Highly Efficient Image Management System with Tivoli Provisioning Manager for OS Deployment, REDP-4294
Vista Deployment Using Tivoli Provisioning Manager for OS Deployment, REDP-4295
Deploying Linux Systems with Tivoli Provisioning Manager for OS Deployment, REDP-4323
Tivoli Provisioning Manager for OS Deployment in a Retail Environment, REDP-4372
Implementing an Image Management System with Tivoli Provisioning Manager for OS Deployment: Case Studies and Business Benefits, REDP-4513
Deployment Guide Series: Tivoli Provisioning Manager for OS Deployment V5.1, SG24-7397
QPI      Quick Path Interconnect
RAID     redundant array of independent disks
RAM      random access memory
RAS      reliability, availability, and serviceability
RDMA     Remote Direct Memory Access
RHEL     Red Hat Enterprise Linux
RISC     reduced instruction set computer
ROC      RAID-on-card
RPM      revolutions per minute
RSA      Remote Supervisor Adapter
RTS      request to send
SAN      storage area network
SAS      Serial Attached SCSI
SATA     Serial ATA
SCM      Storage Configuration Manager
SCSI     Small Computer System Interface
SDRAM    synchronous dynamic RAM
SED      self-encrypting drive
SEL      System Event Log
SFP      small form-factor pluggable
SIO      Storage and I/O
SLC      Single Level Cell
SMI      synchronous memory interface
SMP      symmetric multiprocessing
SNMP     Simple Network Management Protocol
SOA      Service Oriented Architecture
SPORE    ServerProven Opportunity Request for Evaluation
SR       short range
SSCT     Standalone Solution Configuration Tool
SSD      solid state drive
SSIC     System Storage Interoperation Center
SSL      Secure Sockets Layer
STG      Server & Technology Group
TB       terabyte
TCG      Trusted Computing Group
TCO      total cost of ownership
TCP      Transmission Control Protocol
TOE      TCP offload engine
TPM      Trusted Platform Module
TSM      Technical Support Management
UDP      user datagram protocol
UE       Unrecoverable Error
UEFI     Unified Extensible Firmware Interface
URL      Uniform Resource Locator
USB      universal serial bus
VLAN     virtual LAN
VLP      very low profile
VM       virtual machine
VMFS     virtual machine file system
VPD      vital product data
VT       Virtualization Technology
WWPN     worldwide port name
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this paper.
IBM Redbooks
For information about ordering these publications, see How to get Redbooks on page 102. Note that some of the documents referenced here may be available in softcopy only.

Architecting a Highly Efficient Image Management System with Tivoli Provisioning Manager for OS Deployment, REDP-4294
Deploying Linux Systems with Tivoli Provisioning Manager for OS Deployment, REDP-4323
Deployment Guide Series: Tivoli Provisioning Manager for OS Deployment V5.1, SG24-7397
Emulex 10Gb Virtual Fabric Adapter for IBM System x, TIPS0762
High availability virtualization on the IBM System x3850 X5, TIPS0771
Implementing an IBM System x iDataPlex Solution, SG24-7629
Implementing an Image Management System with Tivoli Provisioning Manager for OS Deployment: Case Studies and Business Benefits, REDP-4513
Implementing IBM Systems Director 6.1, SG24-7694
Implementing IBM Systems Director Active Energy Manager 4.1.1, SG24-7780
ServeRAID Adapter Quick Reference, TIPS0054
ServeRAID M1015 SAS/SATA Controller for System x, TIPS0740
ServeRAID M5015 and M5014 SAS/SATA Controllers for IBM System x, TIPS0738
ServeRAID-BR10il SAS/SATA Controller v2 for IBM System x, TIPS0741
Tivoli Provisioning Manager for OS Deployment in a Retail Environment, REDP-4372
Vista Deployment Using Tivoli Provisioning Manager for OS Deployment, REDP-4295
Other publications
These publications are also relevant as further information sources:

Problem Determination and Service Guide - IBM System x3850 X5 and x3950 X5
http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5083418

Installation and User's Guide - IBM System x3850 X5 and x3950 X5
http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5083417

Rack Installation Instructions - IBM System x3850 X5 and x3950 X5
http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5083419
Online resources
These Web sites are also relevant as further information sources:

IBM eX5 home page
http://ibm.com/systems/ex5

IBM System x3850 X5 home page
http://ibm.com/systems/x/hardware/enterprise/x3850x5

IBM U.S. Announcement letter for the x3850 X5 and x3950 X5 (March 30, 2010)
http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=ca&&htmlfid=897/ENUS110-022

IBM BladeCenter HX5 home page
http://ibm.com/systems/bladecenter/hardware/servers/hx5

IBM U.S. Announcement letter for the HX5 (March 30, 2010)
http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=ca&&htmlfid=897/ENUS110-068

IBM System x Configuration and Options Guide
http://ibm.com/support/docview.wss?uid=psg1SCOD-3ZVQ5W

IBM ServerProven
http://ibm.com/systems/info/x86servers/serverproven/compat/us/
Back cover
High-performance servers based on the new Intel Xeon 7500 family of processors
Detailed technical information about each server and option
Scalability, partitioning, and systems management details
High-end workloads drive ever-increasing and ever-changing constraints. In addition to requiring greater memory capacity, these workloads challenge you to do more with less and to find new ways to simplify deployment and ownership. And although higher system availability and comprehensive systems management have always been critical, they have become even more important in recent years.

Difficult challenges such as these create new opportunities for innovation. The IBM eX5 portfolio delivers this innovation. This portfolio of high-end computing introduces the fifth generation of IBM X-Architecture technology. It is the culmination of more than a decade of x86 innovation and firsts that have changed the expectations of the industry. With this latest generation, eX5 is again leading the way as the shift toward virtualization, platform management, and energy efficiency accelerates.

This IBM Redpaper publication introduces the new IBM eX5 portfolio and describes the technical detail behind each server.