
Blade server

From Wikipedia, the free encyclopedia


IBM HS20 blade server. Two bays for 2.5" (6.4 cm) SCSI hard drives appear in the upper left area of the image.

A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Whereas a standard rack-mount server can function with (at least) a power cord and network cable, blade servers have many components removed to save space and minimize power consumption, while still having all the functional components to be considered a computer. A blade enclosure, which can hold multiple blade servers, provides services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form the blade system. (Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system altogether.)

In a standard server-rack configuration, 1U (one rack unit, 19" [48 cm] wide and 1.75" [4.45 cm] tall) defines the minimum possible size of any equipment. The principal benefit and justification of blade computing is that it lifts this restriction so as to reduce size requirements. The most common computer rack form factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation; as of 2009, densities of up to 128 discrete servers per rack are achievable with blade systems.
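The density gain described above is simple arithmetic. A minimal sketch follows; the enclosure height and blades-per-enclosure figures are illustrative assumptions, not vendor specifications:

```python
# Rack density: standard 1U servers vs. blade servers sharing enclosures.
# Enclosure height and blades-per-enclosure are illustrative assumptions.

RACK_UNITS = 42  # common full-height rack

def rack_mount_capacity(rack_units=RACK_UNITS, server_height_u=1):
    """Discrete servers that fit when each occupies its own rack units."""
    return rack_units // server_height_u

def blade_capacity(rack_units=RACK_UNITS, enclosure_height_u=10,
                   blades_per_enclosure=32):
    """Blades that fit when servers share enclosures."""
    enclosures = rack_units // enclosure_height_u
    return enclosures * blades_per_enclosure

print(rack_mount_capacity())  # 42 discrete 1U servers
print(blade_capacity())       # 4 enclosures x 32 blades = 128 blades
```

With these assumed figures, the same 42U rack that holds 42 discrete 1U servers holds 128 blades, matching the density cited above.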

Contents

1 Blade enclosure
  1.1 Power
  1.2 Cooling
  1.3 Networking
2 Storage
3 Other blades
4 Uses
5 History
6 Blade Models
7 See also
8 External links
9 References

Blade enclosure


The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them between the blade computers, the overall utilization becomes more efficient. The specifics of which services are provided may vary by vendor.

HP BladeSystem c7000 enclosure (populated with 16 blades), with two 3U UPS units below.

Power
Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than computers require. Converting this power requires one or more power supply units (PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers have redundant power supplies, again adding to the bulk and heat output of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures.[1][2] This setup reduces the number of PSUs required to provide a resilient power supply. The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable UPS units, including units targeted specifically towards blade servers (such as the BladeUPS).
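The PSU savings from a shared, redundant supply can be sketched as follows; the wattages and the N+1 redundancy scheme are illustrative assumptions, not vendor data:

```python
# N+1 redundant PSU sizing for a shared blade enclosure.
# All wattages below are illustrative assumptions.
import math

def psus_required(total_load_w, psu_capacity_w, redundancy=1):
    """PSUs needed to carry the load, plus `redundancy` spares so a
    single supply failure does not take down the enclosure."""
    needed = math.ceil(total_load_w / psu_capacity_w)
    return needed + redundancy

# 16 blades at ~400 W each, fed by shared 2400 W supplies:
print(psus_required(16 * 400, 2400))  # 3 to carry the load + 1 spare = 4

# versus 16 standalone servers each carrying its own redundant pair:
print(16 * 2)  # 32 individual PSUs
```

Under these assumptions, the shared enclosure needs 4 supplies where 16 standalone dual-PSU servers would need 32.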

Cooling
During operation, electrical and mechanical components produce heat, which a system must displace to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat using fans. A frequently underestimated problem when designing high-performance computer systems is the conflict between the amount of heat a system generates and the ability of its fans to remove that heat.

The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade enclosures feature variable-speed fans and control logic, or even liquid cooling systems,[3][4] that intelligently adjust to meet the system's cooling requirements. At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling once racks are populated to more than 50% of capacity. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers, because the same rack that holds only 42 1U rack-mount servers can hold up to 128 blade servers.[5]
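The point about total rack heat load can be made numerically: each blade may draw less power than a standalone server, yet the denser rack dissipates more heat overall. The per-server wattages below are illustrative assumptions:

```python
# Total rack heat load: blades draw less power each, but higher density
# can still raise the rack total. Per-server wattages are assumptions.

def rack_heat_w(servers, watts_each):
    """Approximate heat load of a rack, treating power drawn as heat emitted."""
    return servers * watts_each

print(rack_heat_w(42, 350))   # 14700 W for a rack of 42 1U servers
print(rack_heat_w(128, 250))  # 32000 W for a densely bladed rack
```

Even with each blade assumed to draw 100 W less than a 1U server, the fully bladed rack in this sketch needs more than twice the cooling capacity.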

Networking
Manufacturers of computers increasingly ship their products with high-speed, integrated network interfaces, and most are expandable to allow for the addition of connections that are faster, more resilient and run over different media (copper and fiber). These may require extra engineering effort in the design and manufacture of the blade, consume space both in the installation and in the capacity for installation (empty expansion slots), and hence result in more complexity. High-speed network topologies require expensive, high-speed integrated circuits and media, while most computers do not utilize all the available bandwidth.

The blade enclosure provides one or more network buses to which the blades connect, and either presents these ports individually in a single location (versus one in each computer chassis) or aggregates them into fewer ports, reducing the cost of connecting the individual devices. Available ports may be present in the chassis itself, or in networking blades.[6][7] Functionally, a blade chassis can have two types of networking modules: switching or pass-through.
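The cabling reduction from chassis-level aggregation can be sketched as follows; the NIC, module and uplink counts are illustrative assumptions, not a particular vendor's configuration:

```python
# Cable-count reduction from aggregating blade networking at the chassis.
# NIC, switch-module and uplink counts are illustrative assumptions.

def discrete_cables(servers, nics_per_server=2):
    """Cables needed when every standalone server is wired individually."""
    return servers * nics_per_server

def aggregated_uplinks(enclosures, modules_per_enclosure=2,
                       uplinks_per_module=4):
    """Uplink cables needed when switching modules in each enclosure
    aggregate the blades' internal connections."""
    return enclosures * modules_per_enclosure * uplinks_per_module

print(discrete_cables(16))    # 32 cables for 16 standalone dual-NIC servers
print(aggregated_uplinks(1))  # 8 uplinks from one enclosure's two modules
```

In this sketch, a 16-blade enclosure with two switching modules presents 8 uplinks where 16 individually cabled servers would need 32 connections.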

Storage

While computers typically use hard disks to store operating systems, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, eSATA, SCSI, SAS, DAS, FC and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade; the Intel Modular Server System is one example of such an implementation. This frees board space for extra memory or additional CPUs. Depending on the vendor, a blade server may include or exclude internal storage devices.

Other blades


Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage, SAN and fibre-channel access can slot into the enclosure to provide these services to all members of the enclosure. Systems administrators can use storage blades where a requirement exists for additional local storage.[8][9][10]

Uses
Blade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users add more processing power, memory and I/O bandwidth to blade servers, they deal with larger and more diverse workloads.

Although blade server technology in theory allows for open, cross-vendor solutions, the state of the technology as of 2009 means users encounter fewer problems when using blades, racks and blade management tools all from the same vendor. Eventual standardization of the technology might result in more choices for consumers;[11][12] as of 2009, increasing numbers of third-party software vendors have started to enter this growing field.[13]

Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling, and power-supply technology. Very large computing tasks may still require server farms of blade servers, which, because of blade servers' high power density, can suffer even more acutely from the HVAC problems that affect large conventional server farms.

History

Developers placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s, soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process-control industry as an alternative to minicomputer control systems. Early models stored programs in EPROM and were limited to a single function with a small real-time executive.

The VMEbus architecture (ca. 1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards providing I/O, memory, or additional computing. The PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure, called CompactPCI, for the then-emerging Peripheral Component Interconnect (PCI) bus. Common to these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one board in charge, one master board coordinating the operation of the entire system.

PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification, adopted in September 2001 (PICMG specifications), provided the first open architecture for a multi-server chassis. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability, dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, AdvancedTCA suppliers claim that low operating expenses and total cost of ownership can make AdvancedTCA-based solutions a cost-effective alternative for many building blocks of the next-generation telecom network.
The first commercialized blade-server architecture[citation needed] was invented by Christopher Hipp and David Kirkeby, and their US patent 6,411,506 was assigned to Houston-based RLX Technologies.[14] RLX, which consisted mostly of former Compaq Computer Corp. employees, including Hipp and Kirkeby, shipped the first commercial blade server in 2001[citation needed] and was acquired by Hewlett-Packard (HP) in 2005.[15] In February 2006, Blade.org was established to increase the number of blade platform solutions available to customers and to accelerate the process of bringing them to market. It is a collaborative organization and developer community focused on accelerating the development and adoption of IBM blade server platforms.

The name blade server appeared when a card included the processor, memory, I/O and nonvolatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. In addition to the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management, and networking, due to the pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these on a per-box basis.

The research firm IDC identified the major players in the blade market as HP, IBM and Dell.[16] Other companies selling blade servers include AVADirect, Sun, Egenera, Supermicro, Hitachi, Fujitsu-Siemens, Rackable (Hybrid Blade), Verari Systems and Intel (by way of reselling the IBM Blade chassis).

Blade Models


Though many independent professional computer manufacturers such as Supermicro offer blade solutions, the blade-server market continues to be dominated by large public IT companies such as HP, which as of Q1 2010 held a 52.4% market share, with IBM second at 35.1%.[17] Other main competitors include Sun Microsystems, Dell and Cisco. HP's current line consists of two chassis models: the c3000, which holds up to 8 half-height ProLiant blades (also available in tower form), and the c7000 (10U), which holds up to 16 half-height ProLiant blades. Dell's latest solution, the M1000e, is a 10U modular enclosure that holds up to sixteen half-height blade servers of the PowerEdge line.

See also


Server computer
Blade PC
Multibus

External links


Wikimedia Commons has media related to: Blade servers

Comprehensive source for Blade Information - Blade Server Technology overview
BladeCenter blade servers - x86 processor-based servers
Using Blade Servers to Increase Energy Efficiency - Energy Efficiency Blade Server Consolidation & Virtualization

References
1. ^ HP BladeSystem p-Class Infrastructure
2. ^ Sun Blade Modular System
3. ^ Sun Power and Cooling
4. ^ HP Thermal Logic technology
5. ^ HP BL2x220c
6. ^ Sun Independent I/O
7. ^ HP Virtual Connect
8. ^ IBM BladeCenter HS21
9. ^ HP storage blade
10. ^ Verari Storage Blade
11. ^ TechSpot. http://www.techspot.com/news/26376-intel-endorses-industrystandard-blade-design.html
12. ^ CNET. http://news.cnet.com/2100-1010_3-5072603.html
13. ^ The Register. http://www.theregister.co.uk/2009/04/07/ssi_blade_specs/
14. ^ US patent 6411506, Christopher Hipp & David Kirkeby, "High density web server chassis system and method", issued 2002-06-25, assigned to RLX Technologies
15. ^ "HP Will Acquire RLX To Bolster Blades". www.informationweek.com. October 3, 2005. http://www.informationweek.com/news/global-cio/showArticle.jhtml?articleID=171202558. Retrieved 2009-07-24.
16. ^ IDC press release
17. ^ http://www.idc.com/getdoc.jsp?containerId=prUS22360110

Retrieved from "http://en.wikipedia.org/wiki/Blade_server"
Categories: Server hardware | 2001 introductions
