V100R005C10
Product Description
Issue 01
Date 2015-11-11
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Purpose
This document describes the basic information about the product to help users learn about
FusionCompute.
Intended Audience
This document is intended for:
- Technical support engineers
- Maintenance engineers
Symbol Conventions
The symbols that may be found in this document are defined as follows:
Symbol Description
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Issue 01 (2015-11-11)
This is the first official release.
Contents
1 Overview
2 Product Architecture
2.1 Software Components
2.2 Interfaces and Protocols
3 Deployment Scheme
3.1 System Requirements
3.2 Software Deployment Scheme
4 Product Functions
4.1 Virtual Computing
4.2 Virtual Network
4.3 Virtual Storage
4.4 Compatibility
4.5 Availability
4.6 Security
5 System Principle
5.1 Communication Principles
5.2 Time Synchronization Mechanism
6 Reliability
6.1 Software Reliability
6.2 Architecture Reliability
7 Technical Specifications
A Glossary
A.1 A-E
A.2 F-J
A.3 K-O
A.4 P-T
A.5 U-Z
1 Overview
The FusionCompute provides high system security and reliability and reduces operational
costs. It helps carriers and enterprises build secure, green, and energy-saving data centers.
Figure 1-1 shows the FusionCompute position in the Huawei cloud computing solution.
[Figure 1-1: FusionSphere in the Huawei cloud computing solution. An API layer exposes FusionSphere, which comprises FusionCompute (virtual resource scheduling), eBackup, UltraVR, and FusionSphere SOI, running on the hardware infrastructure and characterized by availability, security, and scalability.]
Cloud Facilities
Cloud facilities refer to the auxiliary facilities and space required by the cloud data center,
including the power supply, fire-fighting, wiring, and cooling systems.
Huawei has been devoted to continuously enhancing the competitiveness of the data center
based on the concept called SAFE, which focuses on smartness, availability, flexibility, and
efficiency.
FusionSphere
FusionSphere virtualizes hardware resources using the virtualization software deployed on
physical servers, so that one physical server can be used as multiple virtual servers.
FusionSphere consolidates existing workloads onto VMs on a subset of servers, so that new
applications and solutions can be deployed on the servers whose original workloads have been
migrated to other VMs.
- Hardware Infrastructure Layer
The hardware infrastructure consists of servers, storage devices, network devices, and
security devices. These resources allow customers to build systems of different scales,
expand capacity based on actual needs, and run applications ranging from entry level to
enterprise level. The variety of supported devices gives customers multiple, flexible
choices.
- FusionManager
FusionManager monitors and manages the hardware and software of the cloud computing
system. It provides automatic resource provisioning and automatic operation and
maintenance (O&M) for the infrastructure. Additionally, it provides a web user interface
(UI) for administrators to operate and manage the resources in the system.
- FusionStorage
FusionStorage is distributed storage software that integrates storage and computing
capabilities. It can be deployed on general-purpose x86 servers to consolidate the local
disks on all the servers into a virtual storage resource pool that provides the block
storage function.
- FusionSphere SOI
FusionSphere System Operation Insight (SOI) collects and displays VM performance
indicators in the FusionSphere cloud system, models and analyzes the collected data,
predicts future performance changes based on the collected data, and provides
suggestions on system performance management.
- eBackup
The VM backup scheme uses the Huawei eBackup backup software together with the
snapshot backup function and the Changed Block Tracking (CBT) backup function of
FusionCompute to back up VM data.
- UltraVR
UltraVR is disaster recovery (DR) management software. Using the asynchronous remote
replication feature of the underlying storage system, UltraVR provides data protection
and DR of critical data for Huawei virtual machines (VMs).
FusionCube
The FusionCube is an integrated cloud infrastructure platform developed for large- and
medium-sized enterprises to improve core business operating efficiency.
1.2 Features
Unified Virtualization Platform
FusionCompute uses virtualization management software to create high-performance,
operable, and manageable virtual machines (VMs) over computing resources. FusionCompute
provides the following functions:
- Allocates VM resources on demand.
- Supports various operating systems (OSs).
- Isolates VMs to ensure quality of service (QoS).
Big Cluster
A cluster supports up to 128 hosts and 3000 VMs.
Rights Management
FusionCompute provides rights management and allows users to manage system resources
based on their roles and rights.
Comprehensive O&M
FusionCompute provides various operation and maintenance (O&M) tools to control and
manage services, improving O&M efficiency. FusionCompute provides the following
operation and maintenance tools:
- Black box
The black box enables carriers or enterprises to rapidly locate faults based on logs and
program heaps. It reduces fault location time and improves O&M efficiency.
- Automatic health check
The automatic health check helps FusionCompute automatically detect system faults in a
timely manner and generate alarms, ensuring timely O&M of VMs.
- Web interfaces
FusionCompute provides web interfaces, through which users can monitor and manage
all hardware resources, virtual resources, and service provisioning.
Cloud Security
FusionCompute complies with local information security laws and regulations. It adopts
various security measures and policies to provide end-to-end protection to user access,
management and maintenance, data, networks, and virtualization.
2 Product Architecture
[Figure 2-1: FusionCompute architecture. FusionManager connects to the Virtualization Resource Management (VRM) node over REST interfaces, and the VRM manages the CNA nodes over SOAP interfaces.]
3 Deployment Scheme
PC Requirements
A PC is required for FusionCompute software installation and initial configuration. Table 3-1
lists the PC requirements.
Table 3-1 PC requirements
- Memory: ≥ 2 GB
- Hard disk: The partition for installing the operating system (OS) has more than 1 GB free
space. Except the partition for the OS, at least one partition has more than 2 GB free space.
Host Requirements
Table 3-2 lists the host requirements.
NOTE
If the servers have been used before, restore them to factory settings before configuring the BIOS.
- Hard disk or USB disk drive: When hard disks are used, ensure that the hard disk space is at
least 16 GB. If the VRM VM uses the local storage to create disks, the hard disk space must
be at least 96 GB. When the USB disk drive is used, ensure that its space is at least 4 GB.
- Redundant array of independent disks (RAID):
  - Configure hard disks 1 and 2 as RAID 1 for installing the host OS to improve storage
    reliability. When setting the boot device in the host BIOS, set the first boot device to a
    RAID 1 disk.
  - If the host has multiple disks, you are advised to set up RAID 5 with all the disks except
    disks 1 and 2.
  - If the customer has special requirements on RAID, follow the requirements.
  - If FusionStorage is deployed, some hard disks on hosts are used by FusionStorage, and
    other hard disks need to be added as local storage. In this case, the disks to be added as
    local storage must be configured as RAID.
  NOTE
  The RAID cards on certain servers must be configured with RAID; otherwise, the host OS
  cannot be installed. For details about specific requirements for RAID cards, see the product
  documentation delivered with the server.
- Boot device: Set the first boot device to a hard disk or a USB flash drive.
  If the first boot device is a hard disk, you are advised to select a RAID 1 disk. If no RAID 1
  disk is available, you are advised to retain the default boot sequence unless the customer has
  special requirements for the boot sequence.
  NOTICE
  Ensure that the first boot device of the host is the location where the host OS is installed.
  Otherwise, the host may fail to start or may start another OS.
- NIC Pre-boot execution environment (PXE):
  - An onboard NIC driver is used if another OS has been previously installed on the server.
  - PXE is enabled for NIC1 and disabled for other NICs.
  NOTE
  - If both common NICs and iNICs are deployed on a host, set PXE to Enabled for the first
    iNIC and Disabled for other NICs. Otherwise, the PXE function may become unavailable.
  - After host installation is complete in the system, disable PXE for all the NICs to prevent
    mis-installation caused by a PXE server on the management network when any host restarts.
- BIOS clock: Set the BIOS clock to the Coordinated Universal Time (UTC) that corresponds
  to the local time.
  For example, if the local time is Beijing time (UTC+8) 2013-10-20 08:15:20, set the BIOS
  clock to 2013-10-20 00:15:20.
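The conversion in this example can be checked with a short sketch using Python's standard datetime module (illustrative only; the helper is not part of the installation tooling):

```python
from datetime import datetime, timezone, timedelta

# Illustrative helper: convert a local wall-clock time to the UTC value
# to set in the BIOS, given the local UTC offset in hours.
def bios_clock_utc(local_time: datetime, utc_offset_hours: int) -> datetime:
    local_zone = timezone(timedelta(hours=utc_offset_hours))
    return local_time.replace(tzinfo=local_zone).astimezone(timezone.utc)

beijing = datetime(2013, 10, 20, 8, 15, 20)  # Beijing time, UTC+8
print(bios_clock_utc(beijing, 8).strftime("%Y-%m-%d %H:%M:%S"))
# 2013-10-20 00:15:20
```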
Table 3-4 lists the requirements for advanced CPU configurations of the host BIOS.
Table 3-4 Requirements for advanced CPU configurations of the host BIOS
NOTE
The configuration item name may vary depending on different servers or BIOS versions. Therefore,
check the configuration item information before configuration. If a server does not support a technology
or function, do not configure the function.
NOTE
If the VRM node is deployed on a VM, set the space of the system disk on the VRM VM to 80 GB.
If the VRM VM uses local storage on the host, you are advised to set the disk using RAID 1 on the host
(for example, the disk where the host OS is installed) to a data store and create the VRM VM using the
data store to improve VRM VM reliability.
– If multiple IP SAN devices are used, configure the cascading between the IP SAN
devices after software installation.
– If NAS devices are used, configure the shared directories (data stores) and a list of
hosts that can access the shared directories.
– The operating system (OS) compatibility of some non-Huawei SAN devices varies
depending on LUN spaces. For example, if the storage space of a LUN on a certain
SAN device is greater than 2 TB, certain OSs can identify only 2 TB storage space
on the LUN. Therefore, read the storage device product documentation to
understand the Novell SUSE Linux OS compatibility of the non-Huawei SAN
devices before you use the devices.
– If SAN devices are used, use Internet Small Computer Systems Interface (iSCSI) to
connect hosts and storage devices. The iSCSI connection does not require
additional switches, thereby reducing costs.
- Local storage resources
If local storage resources are used, only the free space of the disk on which the host OS
is installed and other bare disks can be used as data stores.
NOTE
Local storage resources are provided only to the host on which the local disks are located. Note the
following points if local storage resources are used:
- If the system uses only non-virtualized local storage resources, VM migration cannot be
performed.
- Configure local storage resources based on the host computing capability to prevent excess
storage resources when all computing resources on a host are used up.
You are advised to deploy shared storage for service VMs.
Network Requirements
The Simple Network Management Protocol (SNMP) and Secure Shell (SSH) are required for
the switch to enhance security. For SNMP, V3 is recommended.
The Spanning Tree Protocol (STP) is disabled on the switch. Otherwise, an alarm falsely
reporting that the host is faulty may be generated.
Table 3-6 lists the requirements for communication between network planes in the system.
- Service plane: Specifies the plane used by user VMs on which services are deployed.
Communication between VRMs and hosts on the service plane must be normal if VRM nodes
are required to assign IP addresses to service plane NICs.
[Figure: Deployment at a site. The management nodes run the VRM (active and standby) on hosts; clusters 1 to n each contain multiple computing-node hosts, which share storage resources.]
NOTE
Storage resources can be provided by storage area network (SAN) devices, network attached storage
(NAS) devices, or local storage devices.
Deployment Scheme
Table 3-8 describes the FusionCompute management modes deployment schemes.
4 Product Functions
VM Resource Management
VM Resource Management allows administrators to create VMs using a VM template or in a
custom manner, and to manage cluster resources. This feature provides the following functions:
- Automatic resource scheduling, including load balancing mode and dynamic energy-saving mode
- VM life cycle management, including creating, deleting, starting, restarting, hibernating, and waking up VMs
- Storage resource management, including managing common disks and shared disks
- VM security management, including using custom VLANs
- Online VM QoS adjustment, including setting CPU QoS and memory QoS
- VM life cycle management
The VM life cycle management function allows users to adjust the VM status based on
service load. Users can perform the following operations on a VM:
– Create, delete, start, stop, restart, or query a VM.
After receiving a VM creation request, FusionCompute selects proper physical
resources to create the VM based on the specified requirements. The requirements
include the VM specifications (such as the number of vCPUs, memory size, and
system disk size), image specifications, and network specifications.
After the VM is created, FusionCompute monitors the VM running status and its
attributes.
Users can stop, restart, and delete VMs as required.
– Hibernate or wake up a VM.
Users can hibernate idle VMs when the service load is light and wake up the
hibernated VMs when the service load is heavy. This improves system resource
utilization.
- VM template management
VM templates can be customized for creating VMs.
- CPU quality of service (QoS)
The CPU QoS ensures optimal allocation of computing resources for VMs and prevents
resource contention between VMs due to different service requirements. It effectively
increases resource utilization and reduces costs.
During VM creation, the CPU QoS is specified based on the service to be deployed.
After the VMs are created, the system dynamically binds the vCPUs to physical CPUs
based on the CPU QoS. In this way, a pool of physical CPUs that are bound to different
vCPUs with the same CPU QoS is created on a server.
The CPU QoS determines the VM computing power. The system ensures the VM CPU
QoS by setting the minimum computing capability and the computing capability upper
limit for VMs.
CPU QoS contains the following parameters:
– CPU quota
CPU quota defines the proportion in which CPU resources are allocated to each VM
when multiple VMs compete for physical CPU resources.
This section uses a host (physical server) with a single-core, 2.8 GHz CPU as an
example to describe how CPU quota works. Three VMs (A, B, and C) run on the
host, and their quotas are set to 1000, 2000, and 4000, respectively. When the CPU
workloads of the VMs are heavy, the system allocates CPU resources to the VMs in
proportion to their CPU quotas: VM A with a quota of 1000 obtains a computing
capability of 400 MHz, VM B with a quota of 2000 obtains 800 MHz, and VM C
with a quota of 4000 obtains 1600 MHz.
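The proportional allocation in this example can be expressed as a short sketch (the function name is an illustrative assumption, not FusionCompute code):

```python
# Illustrative sketch: allocate a host's CPU frequency to competing VMs
# in proportion to their CPU quotas.
def allocate_mhz(total_mhz: int, quotas: dict) -> dict:
    total_quota = sum(quotas.values())
    return {vm: total_mhz * q // total_quota for vm, q in quotas.items()}

print(allocate_mhz(2800, {"A": 1000, "B": 2000, "C": 4000}))
# {'A': 400, 'B': 800, 'C': 1600}
```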
- Adjusting the number of vCPUs for VMs in the running or stopped state
Users can add or delete vCPUs for VMs based on the service load, irrespective of
whether the VMs are running or stopped. This allows computing resources to be adjusted
in a timely manner.
- Adjusting the memory size for VMs in the running or stopped state
Users can increase or decrease the memory size for VMs based on the service load,
irrespective of whether the VMs are running or stopped. This allows memory resources
to be adjusted in a timely manner.
- Adding or deleting NICs for VMs in the running or stopped state
Users can add or delete virtual NICs for running or stopped VMs to meet service
requirements for NICs.
- Attaching virtual disks to VMs in the running or stopped state
Users can expand the storage capacity for VMs, irrespective of whether the VMs are
running or stopped.
NOTE
When a VM uses virtualized storage and is in the running or stopped state, users can expand the
VM storage capacity by enlarging the capacity of existing disks on the VM.
VM Live Migration
FusionCompute supports VM migration between the hosts that share the same data stores
without interrupting services. This reduces the service interruption time caused by server
maintenance and saves energy for data centers.
The QoS function does not support traffic control among VMs on the same host.
DVS
Each host connects to a distributed virtual switch (DVS), which functions as a physical
switch. The DVS connects to VMs through a virtual port in the downstream direction and to
the host NIC in the upstream direction. The DVS implements network communication
between hosts and VMs.
In addition, the DVS ensures unchanged network configuration for VMs when the VMs are
migrated across hosts.
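As a rough illustration of why the network configuration survives migration, the DVS can be modeled as a port-group table shared by every host; the class and port-group names below are hypothetical, not FusionCompute code:

```python
# Rough model (assumed simplification): a distributed virtual switch is a
# port-group table shared by all hosts, so a migrated VM keeps the same
# port group and therefore the same VLAN.
class DVS:
    def __init__(self, port_groups: dict):
        self.port_groups = dict(port_groups)  # port group name -> VLAN ID
        self.vm_ports = {}                    # VM name -> port group name

    def connect(self, vm: str, port_group: str) -> None:
        self.vm_ports[vm] = port_group

    def vlan_of(self, vm: str) -> int:
        return self.port_groups[self.vm_ports[vm]]

dvs = DVS({"pg-web": 100, "pg-db": 200})
dvs.connect("vm1", "pg-web")
print(dvs.vlan_of("vm1"))  # 100, the same on any host sharing this DVS definition
```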
Dual-stack is defined in RFC 4213. Both the IPv4 and IPv6 protocols run on terminal
devices and network nodes. Nodes that have both IPv4 and IPv6 stacks deployed are called
dual-stack nodes. They can receive and send both IPv4 and IPv6 packets, communicating
with IPv4 nodes over the IPv4 network and with IPv6 nodes over the IPv6 network.
The port on a dual-stack device can have only one IPv4 or IPv6 address configured, or have
both IPv4 and IPv6 addresses configured.
To assign an IPv6 address to a VM, you can deploy a third-party DHCPv6 server, use a
hardware gateway to implement stateless automatic assignment, or use static IP address
injection.
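The dual-stack behaviour described above can be demonstrated with a generic sockets sketch (not FusionCompute code): on platforms that support dual-stack sockets, an IPv6 socket with IPV6_V6ONLY cleared accepts both IPv6 clients and IPv4 clients, the latter appearing as IPv4-mapped addresses (::ffff:a.b.c.d).

```python
import socket

# Generic dual-stack sketch: one IPv6 listening socket serves both families.
def dual_stack_listener(port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("::", port))  # "::" listens on all IPv6 (and mapped IPv4) addresses
    s.listen(5)
    return s
```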
- Logical unit numbers (LUNs) on storage area network (SAN) storage, including Internet
Small Computer Systems Interface (iSCSI) and Fibre Channel (FC) SAN storage
- File systems on network attached storage (NAS) devices
- FusionStorage storage resource pools
- Local hard disks
- Local RAM disks
Data stores support the following file system formats:
- Virtual Image Management System (VIMS)
The VIMS is a high-performance file system designed for storing VM files. The VIMS
data can be stored on any Small Computer System Interface (SCSI)-based local or
shared storage device, such as FC, Fibre Channel over Ethernet (FCoE), and iSCSI SAN
devices.
- Network File System (NFS)
The NFS runs on NAS devices. FusionSphere supports the NFS v3 protocol. It can
connect to the custom NFS disks on the NFS server and attach these disks to meet
storage requirements.
- EXT4
FusionSphere supports virtualization of server local disks.
VM Snapshot
Users can save the static data of a VM at a specific moment as a snapshot. The data includes
information about all disks attached to the VM at the snapshot-taking moment. The snapshot
can be used to restore the VM to the state it was in when the snapshot was taken. This
function applies to data backup and disaster recovery systems (such as eBackup and UltraVR)
to improve system security and availability.
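The snapshot semantics described above, a static copy of all attached disks that can later be restored, can be modeled with a small sketch; the class below is a toy illustration, not FusionCompute code:

```python
import copy

# Toy model of snapshot semantics: a snapshot is a static copy of all
# disks attached to the VM at the moment the snapshot is taken.
class VMDisks:
    def __init__(self):
        self.disks = {}        # disk name -> list of data blocks
        self._snapshots = {}   # snapshot name -> saved copy of the disks

    def take_snapshot(self, name: str) -> None:
        self._snapshots[name] = copy.deepcopy(self.disks)

    def restore(self, name: str) -> None:
        # Roll every disk back to its state at snapshot time.
        self.disks = copy.deepcopy(self._snapshots[name])

vm = VMDisks()
vm.disks["disk1"] = ["blockA"]
vm.take_snapshot("before-upgrade")
vm.disks["disk1"].append("blockB")   # later writes
vm.restore("before-upgrade")
print(vm.disks)  # {'disk1': ['blockA']}
```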
4.4 Compatibility
For details about the compatibility of servers, I/O devices, storage devices, and operating
systems (OSs), log in to the compatibility check assistant.
Table 4-1 lists the OSs that support the PVSCSI.
4.5 Availability
VM Live Migration
In FusionCompute, this feature enables VMs to be migrated from one host to any other host
across computing clusters without interrupting services. If the migration fails, the VM on the
destination server is destroyed, and the user can still use the VM on the source server.
Migrating VMs from a physical server that requires maintenance to another physical server
reduces the service interruption time and saves energy for the data center.
VM Fault-based Migration
If a VM becomes faulty, FusionCompute automatically restarts the VM. In the process of
creating VMs, the user can enable the high availability (HA) function to automatically restart
a VM when the VM is faulty. The system periodically checks the status of VMs. When the
system detects that the physical server on which a VM runs is faulty, the system will migrate
the VM to another server and restart the VM so that the VM can be restored in a timely
manner. Because the restarted VM is recreated and loads the operating system like a
physical server, data that was not saved when the VM encountered the error is lost.
The system can detect errors on the hardware and system software that cause VM failures.
4.6 Security
Virtual Network Access Control
VM network access can be controlled by dividing the network with virtual local area
network (VLAN) IDs configured on the network interface cards (NICs) of VMs. The system
also supports the Dynamic Host Configuration Protocol (DHCP) quarantine function to
improve system security.
- The port group to which a VM NIC belongs can be dynamically modified, thereby
dynamically changing the NIC VLAN ID.
- The DHCP function can be enabled or disabled on the port group to which a VM NIC
belongs.
- When the NIC VLAN ID is dynamically changed, the NIC VLAN can be changed by
binding a new VLAN to the NIC without adding a NIC.
- The system also provides the network security function to protect VMs from invalid
DHCP servers.
5 System Principle
This section uses IP storage area network (SAN) devices as an example to describe how storage devices
in the FusionCompute system communicate with other planes.
[Figures: Communication networking between the VRM, hosts, and an IP SAN device. The VRM and each host connect to the management, service, and BMC planes through their network ports; the SAN device connects to the storage plane through ports A1-A4 and B1-B4. Three networking schemes are shown: storage over host port eth1; storage over host ports eth2 and eth3; and a bonded scheme in which eth0 and eth1 form bond0 for the management plane.]
NOTE
Management plane:
- Network port eth0 on the host, and network port eth0 on the active and standby
Virtualization Resource Management (VRM) nodes: The network port eth0 on each node is
assigned to the management plane VLAN, which then becomes the default VLAN of the
management plane.
- BMC network ports on the VRM and host: The switch port connected to the BMC network
port on each node is assigned to the BMC plane VLAN, which then becomes the default
VLAN of the BMC plane.
NOTE
The BMC network ports can be assigned to an independent BMC plane or to the same VLAN as the
management network ports. The specific assignment depends on the actual network plan.
Storage plane:
- Storage network ports A1, A2, A3, A4, B1, B2, B3, and B4 on the SAN storage devices:
The storage plane is divided into four VLANs. A1 and B1 are assigned to VLAN 1; A2 and
B2 to VLAN 2; A3 and B3 to VLAN 3; A4 and B4 to VLAN 4.
- Storage network port eth1 on the host: eth1 is assigned to VLAN 1, VLAN 2, VLAN 3, and
VLAN 4.
Service plane:
- Service network port eth0 on the host: The service plane is divided into multiple VLANs to
isolate VMs. All data packets from different VLANs are forwarded over the service network
ports on the CNA. The data packets are marked with VLAN tags and sent to the service
network port of the switch on the access layer.
Management plane:
- Network port eth0 on the host, and network port eth0 on the active and standby VRM
nodes: The network port eth0 on each node is assigned to the management plane VLAN,
which then becomes the default VLAN of the management plane.
- BMC network ports on the VRM and host: The switch port connected to the BMC network
port on each node is assigned to the BMC plane VLAN, which then becomes the default
VLAN of the BMC plane.
NOTE
The BMC network ports can be assigned to an independent BMC plane or to the same VLAN as the
management network ports. The specific assignment depends on the actual network plan.
Storage plane:
- Storage network ports A1, A2, A3, A4, B1, B2, B3, and B4 on the SAN storage devices:
The storage plane is divided into four VLANs. A1 and B1 are assigned to VLAN 1; A2 and
B2 to VLAN 2; A3 and B3 to VLAN 3; A4 and B4 to VLAN 4.
- Storage network ports eth2 and eth3 on the host: eth2 is assigned to VLAN 1 and VLAN 2;
eth3 is assigned to VLAN 3 and VLAN 4. The network port eth2 can communicate with
ports A1, A2, B1, and B2 over the layer 2 network, and eth3 can communicate with ports
A3, A4, B3, and B4. This allows computing resources to access storage resources through
multiple paths. (Each computing server has eight Internet Small Computer Systems
Interface (iSCSI) links to the same storage device.) Therefore, storage network reliability is
ensured.
Service plane:
- Service network port eth1 on the host: The service plane is divided into multiple VLANs to
isolate VMs. All data packets from different VLANs are forwarded over the service network
ports on the CNA. The data packets are marked with VLAN tags and sent to the service
network port of the switch on the access layer.
Management plane:
- Network ports eth0 and eth1 (bond0) on the host: The network ports eth0 and eth1 on each
node are assigned to the management plane VLAN, which then becomes the default VLAN
of the management plane.
- Network port eth0 on the active and standby VRM nodes.
Storage plane:
- Storage network ports A1, A2, A3, A4, B1, B2, B3, and B4 on the SAN storage devices:
The storage plane is divided into four VLANs. A1 and B1 are assigned to VLAN 1; A2 and
B2 to VLAN 2; A3 and B3 to VLAN 3; A4 and B4 to VLAN 4.
- Storage network ports eth2 and eth3 on the host: eth2 is assigned to VLAN 1 and VLAN 2;
eth3 is assigned to VLAN 3 and VLAN 4. The network port eth2 can communicate with
ports A1, A2, B1, and B2 over the layer 2 network, and eth3 can communicate with ports
A3, A4, B3, and B4. This allows computing resources to access storage resources through
multiple paths. (Each computing server has eight iSCSI links to the same storage device.)
Therefore, storage network reliability is ensured.
6 Reliability
VM Live Migration
The live migration feature allows users to migrate VMs from one physical server to another
physical server without interrupting services. The VM manager provides quick recovery of
memory data and memory sharing technologies to ensure that the VM data remains
unchanged before and after the live migration. The VM live migration applies to the following
scenarios:
- Before performing operation and maintenance (O&M) operations on a physical server,
system maintenance engineers can relocate VMs from this physical server to another
physical server. This minimizes the risk of service interruption during the O&M process.
- Before upgrading a physical server, system maintenance engineers can relocate VMs
from this physical server to other physical servers. This minimizes the risk of service
interruption during the upgrade process. After the upgrade is complete, system
maintenance engineers can relocate the VMs back to the original physical server.
- System maintenance engineers can relocate VMs from a lightly loaded server to other
servers and then power off the server. This helps reduce service operation costs.
Table 6-1 describes the types of VM live migration.
VM Load Balancing
In load balancing mode, the system dynamically allocates load based on the current load
status of each physical server node to implement load balancing in a cluster.
Snapshot
The snapshot feature enables the FusionCompute to restore a damaged VM by using its
snapshots.
A snapshot is a set of system files and directories of a VM kept in storage as they were some
time in the past.
l When a VM is faulty, a user can quickly create a VM based on a backed-up VM
snapshot.
l A user can also restore a VM to the state it was in when a snapshot was created.
VM Isolation
The isolation feature ensures that all VMs running on the same physical server are
independent of each other. Therefore, a faulty VM does not affect other VMs.
VM isolation is implemented based on virtualization software. Each VM has independent
memory space, network address space, CPU stack register, and disk storage space.
VM OS Fault Detection
If a VM becomes faulty, the system automatically restarts the faulty VM from the physical
server where the VM is located or from another physical server, depending on the preset
policy. Users can also configure the system to ignore such faults. The system can detect and
handle internal VM OS errors, such as the blue screen of death (BSOD) on Windows VMs
and the panic state on Linux VMs.
Black Box
The black box embedded in the FusionCompute collects information about the system. If a
fault occurs, the black box collects and stores the last information about the system. This
facilitates fault location.
The black box stores the following information:
l Storage kernel logs
l System snapshots
l Screen output information before the system exits
l Diagnosis information from the diagnosis tool
The active and standby nodes check the status of each other using the heartbeat messages sent
over the management plane. The active node is automatically determined based on the
heartbeat messages.
l Only the active node provides services. The standby node only provides basic functions
and periodically synchronizes data with the active node.
l If the active node is faulty, the standby node takes over services from the active node and
changes to the active state. The original active node changes to the idle state.
Active node faults include network interruption, an abnormal state, and faulty service
processes on the active node.
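The heartbeat-driven failover described above might be sketched roughly as follows. The timeout value, class name, and method names are illustrative assumptions, not taken from the product:

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds; illustrative value only

class Node:
    """Minimal sketch of heartbeat-based active/standby failover."""

    def __init__(self, name, role):
        self.name = name
        self.role = role            # "active" or "standby"
        self.last_heartbeat = time.monotonic()

    def receive_heartbeat(self):
        # Called whenever a heartbeat arrives over the management plane.
        self.last_heartbeat = time.monotonic()

    def peer_alive(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_heartbeat) < HEARTBEAT_TIMEOUT

    def check_failover(self, now=None):
        # A standby node promotes itself when the active peer stops sending
        # heartbeats (network interruption, abnormal state, or a faulty
        # service process on the active node).
        if self.role == "standby" and not self.peer_alive(now):
            self.role = "active"
        return self.role

standby = Node("vrm02", "standby")
# Simulate a missed heartbeat window by pretending time advanced past the timeout.
print(standby.check_failover(now=standby.last_heartbeat + 5.0))  # active
```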
Traffic Control
The traffic control mechanism helps the management node provide highly available
concurrent services without collapsing under excessive traffic. Traffic control is enabled at
the access point, so that excessive load on the front end is prevented and system stability is
enhanced. To prevent service failures due to excessive traffic, this function is also enabled for
each key internal process in the system, such as traffic control on image downloading,
authentication, VM services (including VM migration, VM high availability, VM creation,
hibernation, waking up, and stopping), and operation and maintenance (O&M).
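Traffic control of this kind is commonly built on a token bucket. The following is a minimal sketch under that assumption; the rates, class name, and rejection behavior are illustrative, not the product's actual implementation:

```python
class TokenBucket:
    """Sketch of a token-bucket limiter placed in front of an access point."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # shed the request instead of overloading the system

tb = TokenBucket(rate=2, capacity=2)
results = [tb.allow(now=0.0) for _ in range(3)]
print(results)  # [True, True, False]: the third burst request is shed
```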
Fault Detection
The system provides fault detection and alarm functions, and a tool for displaying faults in
web browsers. When a cluster is running, users can monitor cluster management and load
balancing by using a data visualization tool to detect faults, including load balancing
problems, abnormal processes, and hardware performance deterioration trends. Users can
view historical records to obtain daily, weekly, and even annual hardware resource
consumption.
The FusionCompute periodically checks consistency of all VM data and disk file data on
management nodes. If data inconsistency is found, the system generates an audit log. The
maintenance engineers can clear the data inconsistency based on the log.
7 Technical Specifications
Management Capability
Host Specifications
VM Capacity
Disk Capacity
Snapshot Capacity
Network Capacity
Maximum virtual switch ports per DVS:
l Standard mode: 2048 x number of hosts managed by the DVS
l VMDq-enabled mode: 250 x number of hosts managed by the DVS
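A quick worked example of these limits; the per-host multipliers come from the specification above, while the host count is an arbitrary example:

```python
def max_dvs_ports(hosts, vmdq_enabled=False):
    """Maximum virtual switch ports per DVS for a given number of hosts."""
    per_host = 250 if vmdq_enabled else 2048
    return per_host * hosts

print(max_dvs_ports(10))                     # 20480 ports in standard mode
print(max_dvs_ports(10, vmdq_enabled=True))  # 2500 ports with VMDq enabled
```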
VM Specifications
A Glossary
A.1 A-E
A.2 F-J
A.3 K-O
A.4 P-T
A.5 U-Z
A.1 A-E
A
active directory A directory service created by Microsoft for Windows domain networks. It is included
in most Windows Server operating systems, such as Windows Standard Server,
Windows Enterprise Server, and Windows Datacenter Server.
AD See active directory
B
bare VM A VM that has an identity but does not occupy any CPU, memory, storage, or network
resource in the system.
Baseboard Management Controller A dedicated microcontroller embedded in the main board of a computer (especially a
server).
BMC See Baseboard Management Controller
C
CBT See Changed Block Tracking
Changed Block Tracking An incremental data backup function. With this function enabled, the system uses a
bitmap to keep track of VM storage blocks as they change following the last backup.
Therefore, the system only backs up the data blocks that have changed since the last
backup.
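The bitmap-based tracking described in this entry can be sketched as follows; the block granularity, class name, and structures are illustrative assumptions, not the product's format:

```python
class ChangedBlockTracker:
    """Sketch of changed-block tracking with one flag per storage block."""

    def __init__(self, num_blocks):
        self.changed = [False] * num_blocks  # the bitmap

    def record_write(self, block):
        # Called on every guest write to mark the block as dirty.
        self.changed[block] = True

    def incremental_backup(self, disk):
        """Back up only blocks written since the last backup, then reset."""
        delta = {i: disk[i] for i, dirty in enumerate(self.changed) if dirty}
        self.changed = [False] * len(self.changed)
        return delta

disk = ["a", "b", "c", "d"]
cbt = ChangedBlockTracker(len(disk))
disk[1] = "B"; cbt.record_write(1)
disk[3] = "D"; cbt.record_write(3)
print(cbt.incremental_backup(disk))  # {1: 'B', 3: 'D'}
```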
CNA See Computing Node Agent
Computing Node Agent This is deployed on a computing node and used to manage the VMs and VM
mounting on the computing node.
D
disk The logical storage disk of a VM, which can either be a system disk or user disk.
distributed virtual switch A virtual switch (created on a physical server) that uses software to implement data
switching between VMs on the same or different servers.
Distributed Virtual Switch Management A module used to manage distributed virtual switches (DVSs). Deployed in the same
cluster with the Virtual Resource Management (VRM) node, the DVSM creates,
deletes, maintains, and presents DVSs in the system. Each cluster has a DVSM
module.
Dom0 See Domain 0
Domain Domain includes Dom0 and DomU.
Domain 0 A modified Linux kernel and the only VM that operates on the Xen Hypervisor. Dom0
can access physical I/O resources and interwork with other VMs operating on the
system. Dom0 must be started before other domains.
Domain U Paravirtualized VMs operating on the Xen Hypervisor are called Domain U PV
Guests, which support operating systems whose kernels have been modified, such as
Linux, Solaris, FreeBSD, and other UNIX operating systems. Fully virtualized VMs
are called Domain U HVM Guests, which support operating systems whose kernels
do not need to be modified, for example, Windows.
DomU See Domain U
DPM See Dynamic Power Management
DRS See Dynamic Resource Scheduler
DVS See distributed virtual switch
DVSM See Distributed Virtual Switch Management
Dynamic Power Management A module that intelligently powers on or off idle physical servers based on the system
load on the network.
Dynamic Resource Scheduler A module that uses intelligent scheduling algorithms to flexibly schedule resources
and dynamically balance system load to improve user experience.
E
Elastic Load Balancer A component that provides load balancing services for tenants. End users can apply
for an ELB and associate their hosts with the ELB. The ELB evenly distributes service
requests to the associated hosts based on customized load balancing policies. The ELB
helps improve service stability and reliability.
Elastic Service Controller A point from which to control VM resources and virtual block storage resources. It
provides an open ECi interface.
elastic virtual switch A virtual switch that implements data switching, virtual local area network (VLAN)
isolation, Dynamic Host Configuration Protocol (DHCP) isolation, bandwidth
limiting, and priority setting.
ELB See Elastic Load Balancer
Equipment Serial Number This uniquely identifies a set of equipment.
ESN See Equipment Serial Number
EVS See elastic virtual switch
A.2 F-J
F
FC SAN See Fibre Channel storage area network
Fibre Channel storage area network A type of storage area network (SAN) that uses Fibre Channel connections between
servers and storage devices. FC SAN devices provide high performance but at high
cost, and are gradually being replaced by IP SAN devices.
full clone Full copy of the consolidated sum of delta disks and base disk of a virtual machine.
Each full clone is entirely separated from the parent VM and can have different system
disks or software from the parent VM. Full clones apply to common office automation
scenarios.
hierarchical storage A storage mechanism that stores the most-frequently accessed IP SAN data on a solid-
state drive (SSD) to speed up access, stores the less-frequently accessed data on a
Serial Attached SCSI (SAS) drive, and stores the seldom accessed data on a Serial
Advanced Technology Attachment (SATA) drive.
Host A physical server that runs virtual software. VMs can be created on a host.
Hypervisor The software layer on a virtual server, which manages the VMs on the server and
helps VMs share the hardware resources of the virtual server. The Xen Hypervisor is a
software layer between the hardware and operating system, which performs CPU
scheduling and partitioning between VMs. The Xen Hypervisor controls VM
migration between hardware devices and other VM-related operations (because the
VMs share a processing environment). The Xen Hypervisor does not process
networks, storage devices, videos, or other I/O resources.
Image An exact copy of all running software on a server used for quick installation of the
VM operating system and software.
iNIC Intelligent Network Interface Card
Integrated Storage Management This centrally manages multiple storage systems.
IP storage area network A type of storage area network (SAN) that uses IP channels between servers and
storage devices. IP SAN device performance is not as good as FC SAN device
performance, but the use of IP SAN devices is not restricted by transmission distances.
With IP bandwidth improvements, IP SAN devices will gradually replace FC SAN
devices.
IP SAN See IP storage area network
iSCSI Internet Small Computer Systems Interface
ISM See Integrated Storage Management
A.3 K-O
L
LB Load Balancer
linked clone A duplicate of a virtual machine that uses the same base disk as the original and a
chain of delta disks to keep track of the differences between the original and the clone.
This reduces the need for disk space and allows multiple VMs to use the same
software installation. Linked clones apply to scenarios that require VMs using the
same software installation, for example, call centers. System disks on linked clones
can have slight differences from the system disk on the parent VM. The differences
are stored on delta disks of the linked clones.
linked cloning A technology used to generate a quick copy of VMs by creating a delta disk instead of
copying an entire virtual hard disk.
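The delta-disk mechanism behind linked cloning can be sketched as a read path that falls through the clone's delta chain to the shared base disk. The structures below are illustrative, not the product's on-disk format:

```python
def read_block(block, delta_chain, base_disk):
    """Search the newest delta first; fall back to the shared base disk."""
    for delta in delta_chain:            # ordered newest delta first
        if block in delta:
            return delta[block]
    return base_disk[block]

base = {0: "os", 1: "apps"}              # base disk shared by every linked clone
clone_delta = [{1: "apps+patch"}]        # only this clone's own changes
print(read_block(0, clone_delta, base))  # 'os' (unchanged, served from base)
print(read_block(1, clone_delta, base))  # 'apps+patch' (served from delta)
```

Writes go only to the clone's newest delta disk, which is why many clones can share one software installation on the base disk.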
linked snapshot A snapshot taken only for the memory or storage changes of a VM. A VM can be
restored using multiple relevant linked snapshots. A snapshot can be taken for the
memory or storage resource.
Live Migration Also known as hot migration, this is a method of migrating virtual machines (VMs)
without interrupting services.
local storage Storage space provided by a computing node agent (CNA).
logical cluster A logical group consisting of servers with the same attributes, such as CPU, storage,
and distributed virtual switch (DVS), in a physical cluster. The VM HA function takes
effect only for servers in the same logical cluster.
LUN Logical Unit Number
A.4 P-T
P
placeorder VM When the host-based replication disaster recovery (DR) is used, and you add VMs to a
protection group, UltraVR automatically creates placeorder VMs at the DR site based
on the VM specifications and resource mappings of the VMs added to the protection
group. Then UltraVR synchronizes data of the protected VMs with the placeorder
VMs. When executing the recovery plan, UltraVR can start the placeorder VMs at the
DR site to quickly restore services. UltraVR can also use a placeorder VM to clone
and start a VM to test the recovery plan. This process exerts no adverse impact on data
synchronization between the protected VM and the placeorder VM. After the recovery
plan is executed, UltraVR will stop and delete the clone.
POE See Provisioning Orchestration Engine
port group A group of ports with the same attributes on a distributed virtual switch (DVS) or
virtual software switch (VSS). In a hypervisor, all DVS settings, such as virtual local
area network (VLAN) and network flow control, are configured on a port group basis.
PortGroup See port group
Pre-boot Execution Environment A technology that enables computers to boot from the network; it is the successor of
Remote Initial Program Load (RPL). The PXE works in client/server mode. The PXE
client resides in the ROM of a network card. When the computer boots, the BIOS
loads the PXE client into memory. The PXE client obtains an IP address from the
DHCP server and downloads the operating system from the remote server through
TFTP.
Provisioning Orchestration Engine This exposes the unified service provisioning interface and synchronizes services
between components in the SingleCLOUD system.
PXE See Pre-boot Execution Environment
Software Client Software running on a common PC to process the virtual desktop protocol.
Storage Area Network A network dedicated to transporting data for storage and retrieval.
storage cold migration A storage migration mode that allows data migration on a disk only after all VMs on
the disk are stopped.
storage resource pool A collection of storage resources. For example, an IP storage area network (IP SAN)
functions as a storage resource pool for a cluster.
Storage Thin Provisioning The act of using virtualization technology to give the appearance of more physical
storage resources than are actually available. It allows storage space to be easily
allocated to users on an on-demand and auto-scale basis. This optimizes utilization of
available storage resources.
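Thin provisioning as described in this entry can be sketched as allocate-on-first-write; the sizes, 1 MiB extent granularity, and class name below are illustrative assumptions:

```python
class ThinLUN:
    """Sketch of a thin LUN: large advertised size, on-demand backing."""

    def __init__(self, virtual_mb):
        self.virtual_mb = virtual_mb  # size reported to the host
        self.allocated = set()        # physically backed 1 MiB extents

    def write(self, extent):
        # Physical space is consumed only on the first write to an extent.
        self.allocated.add(extent)

    def physical_mb(self):
        return len(self.allocated)

lun = ThinLUN(virtual_mb=1024)            # the host sees a 1 GiB LUN
for extent in range(10):
    lun.write(extent)
print(lun.virtual_mb, lun.physical_mb())  # 1024 advertised, 10 actually backed
```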
T
TC See Thin Client
Thin Client A terminal with lower processing power than a Thick Client that processes the virtual
desktop protocol, serves as the client of the remote desktop, and provides an access
method for users.
Thin LUN A logical storage unit created in the thin pool. The thin LUN is accessible to the host.
Thin Pool A thin pool is implemented based on storage thin provisioning. It allows storage space
to be dynamically allocated to users on demand. This optimizes utilization of available
storage resources.
Tools A virtualized driver package for VMs. Tools improves VM performance and enables
VM hardware monitoring and advanced VM functions, such as migration, snapshot
taking, and online CPU adjustment.
A.5 U-Z
U
Unified Virtualization Platform Virtual management software that divides computing resources into multiple VM
resources.
UVP See Unified Virtualization Platform
V
vCPU See Virtual CPU
VDS See Virtual Distributed Switch
vFW See Virtual Firewall
VIMS See Virtual Image Management System
Virtual CPU A hyper-thread on a server with multiple physical CPUs, each of which has multiple
physical cores with multiple hyper-threads on each core.
Virtual Disk A file in the host file system. For a customer operating system, it functions as a
physical disk drive. The file can be configured on the host or a remote file system.
After configuring a VM with a virtual disk, you can install a new operating system to
the disk file without having to repartition a physical disk or restart the host. Virtual
disks on the VMware Workstation can be mapped to the partitions on the host.
Virtual Distributed Switch A virtual switch (created on a physical server) that uses software to implement data
switching between VMs on the same or different servers.
Virtual Firewall A network firewall service or appliance running in a virtualized environment to
provide the usual packet filtering and monitoring functions like a physical network
firewall.
Virtual Image Management System A high-performance cluster file system that enables the FusionManager to connect to
storage resources through a unified interface, which allows multiple VMs to access an
integrated storage pool to improve resource utilization efficiency. The VIMS, as the
basis for virtualizing multiple storage servers, provides services such as live migration,
dynamic resource scheduling, and high availability for storage devices.
Virtual Local Area Network An end-to-end logical network across different network segments and networks,
constructed using network management software based on a switched LAN. Network
resources and users are logically divided according to certain principles, and a
physical LAN is logically divided into multiple broadcast domains (VLANs). Hosts on
the same VLAN can communicate with each other directly, whereas hosts on different
VLANs cannot. This efficiently suppresses broadcast packets.
Virtual Machine One or multiple computer systems virtualized from a physical server.
Virtual Machine Dispatch A function that controls the scheduling policy configuration and the control points of
policy dispatching.
Virtual Machine Hibernate An operation that can be performed on a running VM. After the hibernated VM is
started, it restores all programs running before the hibernation.
Virtual Machine IP Address The IP address assigned to a VM, corresponding to the IP address of a physical
machine. The VM can communicate with other devices on the network through this IP
address.
Virtual Memory Virtualized memory for a VM allocated based on the physical memory. Even if the
memory is not physically contiguous, it is contiguous for the VM. The VM can
randomly save and obtain data in its virtual memory without affecting the memory
accessibility of other VMs on the same physical machine.
Virtual Network Card A network card for VMs that corresponds to that of a physical machine. Multiple
virtual network cards can be created for a single VM. The virtual network card can
connect to the physical network card in bridge mode for data transmission.
Virtual Resource Management Huawei-developed virtualization management software, which comprises Huawei
infrastructure products and the Unified Virtualization Platform (UVP).
Virtual Server A server on which the operating system and applications run based on virtualization
technologies rather than directly on a physical server. When using resources of the
physical server, the virtual server appears the same as a physical server to users. Both
partitions and VMs are considered virtual servers.
Virtual Service Gateway A virtual appliance that provides layer 3 or layer 4 services on a virtual network. It can
contain one or multiple service instances, including vRouter, vFirewall, vDHCP, NAT,
and VPN. The FusionManager supports VSG service implementation through a
vFirewall or a system VM.
Virtual Software Switch This is deployed on a computing node and performs the virtual network switching
function for the VMs on the node.
VLAN See Virtual Local Area Network
VM See Virtual Machine
VM High Availability With this, the O&M system continuously monitors all physical hosts and
automatically migrates all VMs off a faulty host.
VM Migration A technology used to migrate VMs to another hardware resource for VM operations.
VM Specifications A set of pre-defined VM attributes for creating VMs with unified specifications.
VM template A template used to create VMs that have the same specifications. A VM template is a
VM in essence. A VM and a VM template can convert to each other as required. After
a VM is converted to a VM template, only its isTemplate attribute is changed to true.
VMD See Virtual Machine Dispatch
VRM See Virtual Resource Management
VSG See Virtual Service Gateway
VSS See Virtual Software Switch