Huawei Cloud Computing HCIE interview notes – explanation of terms


1. Memory reuse:

When a physical server's physical memory is fixed, memory reuse lets that memory be shared over time among VMs by combining several techniques (memory ballooning, memory sharing, and memory swapping).

The VMM (virtual machine monitor, also known as the hypervisor) dynamically schedules memory between VMs to improve memory utilization. Memory reuse virtualizes the physical memory into more memory for virtual machines, so the total memory specification of all VMs can exceed the host's physical memory, ultimately increasing the host's VM density. Memory reuse is enabled and disabled at the cluster level; you cannot choose which individual technique is applied — the virtualization layer decides. Commonly used memory reuse techniques: memory ballooning, memory swapping, and memory sharing.

Memory ballooning (memory bubble): the VMM monitors the memory usage of VMs, reclaims memory from VMs that are using little of it, and grants it to VMs under memory pressure.

Memory swapping (memory replacement): the VMM monitors VM memory usage and swaps rarely used data (cold data, data with a low heat factor) out of VM memory to disk storage. When that data is needed again, it is read back from disk into memory.

Memory sharing: the VMM monitors VM memory usage. When multiple VMs have memory pages with identical content, only one copy of the page is kept, and the duplicate pages are freed for use by other VMs.
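A minimal Python sketch (illustrative only, not Huawei code; all names are hypothetical) of the ballooning policy described above — reclaim idle memory from lightly used VMs and grant it to VMs under pressure:

```python
# Illustrative sketch of a VMM-style balloon policy (hypothetical thresholds).

def balloon_rebalance(vms, low=0.3, high=0.9):
    """vms: {name: {"allocated": MB, "used": MB}}.
    Inflate balloons in VMs using less than `low` of their allocation and
    grant the reclaimed pages to VMs using more than `high`."""
    reclaimed = 0
    for vm in vms.values():
        if vm["used"] < low * vm["allocated"]:
            spare = vm["allocated"] - vm["used"]
            give = spare // 2               # reclaim half of the idle memory
            vm["allocated"] -= give
            reclaimed += give
    pressured = [v for v in vms.values() if v["used"] > high * v["allocated"]]
    for vm in pressured:                    # hand reclaimed pages to busy VMs
        vm["allocated"] += reclaimed // len(pressured)
    return reclaimed

vms = {
    "idle-vm": {"allocated": 4096, "used": 512},
    "busy-vm": {"allocated": 4096, "used": 4000},
}
moved = balloon_rebalance(vms)   # 1792 MB moves from idle-vm to busy-vm
```

The real VMM works at page granularity with guest cooperation (a balloon driver in Tools); this only shows the reclaim-and-grant decision.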

Memory reuse is disabled by default.

*Restrictions (tested)

Enable in cluster configuration

Once enabled, all three techniques are enabled together

Conflicts with hardware passthrough and NUMA

1. Memory reuse is mutually exclusive with SR-IOV passthrough, GPU passthrough, and NVMe SSD passthrough.

2. If physical memory is not expanded in time, VM services may fail to run normally.

3. It affects host performance. Memory swapping has the greatest impact on host performance, because it moves data between memory and physical disk.

4. Intelligent NICs require exclusive use of memory space, so memory reuse cannot be enabled on hosts that use them.

*When will memory reuse be turned on?

Enable it when physical memory is insufficient for the required allocations.

*Value of memory reuse:

1. Increases VM density. 2. Extends the buffer period before physical memory must be expanded.

*Memory reuse dependency

1. If any host in the cluster uses an iNIC, the cluster memory reuse function cannot be enabled.

2. On each compute node, the sum of the reserved memory of all running virtual machines cannot exceed the physical memory actually available to virtual machines.

3. Enabling host memory reuse together with guest NUMA, or host CPU resource isolation mode together with guest NUMA, causes the guest NUMA function to fail.

*How to configure memory reuse?

Memory reuse is turned on/off for CNA hosts through Cluster Resource Control on the VRM management interface.

*What is the ratio of memory reuse?

By intelligently combining the three techniques above, Huawei's virtualization platform can raise the memory reuse ratio to 150%.
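The overcommit arithmetic behind that ratio can be sketched in one line (illustrative only; the function name is hypothetical):

```python
# Sketch of the memory overcommit arithmetic implied by the reuse ratio.

def provisionable_vm_memory(physical_gb, reuse_ratio=1.5):
    """Total VM memory a host can present at a given reuse ratio."""
    return physical_gb * reuse_ratio

# A 256 GB host at the 150% ratio can present 384 GB to its VMs.
total = provisionable_vm_memory(256)
```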

*Usage scenarios? (tested)

1. Increasing virtual machine density when memory space is limited;

2. Alleviating a memory shortage when the budget is limited;

3. Extending the buffer time available before a memory expansion.

*What are the storage requirements of memory replacement? (tested)

The swap partition must have sufficient space.

What component performs memory management? (tested)

The VMM (reportedly the expected answer)

2. Storage thin provisioning:

Virtual storage thin provisioning optimizes storage utilization by flexibly allocating storage space on demand. It can present users with more virtual storage space than physically exists: only virtual storage space that has been written to is actually backed by physical storage, while unwritten space occupies no physical resources, improving storage utilization.

 Storage independence

Virtual storage thin provisioning is independent of the operating system and hardware; any datastore managed by the virtual image management system can provide the thin provisioning function.

 Capacity monitoring

Provides early warning for datastore capacity. A threshold can be set; when used capacity exceeds it, an alarm is generated.

 Space reclamation

Provides virtual disk space monitoring and reclamation. When the space allocated to users is large but actual usage is small, the allocated-but-unused space can be reclaimed through the disk space reclamation function. Reclamation is currently supported for FusionStorage-format virtual machine disks.

*Usage scenarios? (tested)

Overcommitting storage (presenting more virtual capacity than physically exists).

*Use restrictions? (tested)

Only virtualized data storage is supported.



3. Virtual machine hot migration:

Virtual machines can migrate freely between hosts that share the same storage, without interrupting user services during migration. This avoids service interruption during server maintenance and reduces data center power consumption.

*Under what circumstances does the system perform live (hot) migration?

DRS, DPM, resource scheduling rules, manual migration.

Does OpenStack support automatic migration? (tested)


4. Link clone:

A linked clone virtual machine is a target virtual machine created directly from a source virtual machine (the linked clone template). The source virtual machine must exist when the clone is created.

The advantage is that the common part shared by multiple linked clones (the part inherited from the source VM, usually the system disk, called the parent volume) can share the same memory space and the same disk space. With the same host resources, linked clones can therefore support more virtual machines, run more services, or run more virtual desktops, reducing enterprise IT cost. Likewise, to update VM software (upgrades, virus-definition updates, etc.), you only need to operate on the source virtual machine, and the target VMs cloned from it are updated at the same time.

Advantages: 1. Fast batch provisioning of virtual machines. 2. Low storage cost. 3. Batch operation and maintenance, reducing maintenance cost.

Disadvantages: 1. Boot storms — batch startup or antivirus scans cause I/O storms (mitigations: planned, staggered startup; SSD storage).
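A short sketch (hypothetical structure, not FC's implementation) of how linked clones share a read-only parent volume while keeping per-clone changes in a delta:

```python
# Illustrative sketch: linked clones share a read-only parent volume;
# each clone stores only its own changed blocks in a private delta.

class LinkedClone:
    def __init__(self, parent):
        self.parent = parent      # shared, read-only base (e.g. system disk)
        self.delta = {}           # this clone's changed blocks only

    def read(self, block):
        # Changed blocks come from the delta, everything else from the parent.
        return self.delta.get(block, self.parent.get(block))

    def write(self, block, data):
        self.delta[block] = data  # writes never touch the shared parent

parent = {0: "bootloader", 1: "os-files"}
a, b = LinkedClone(parent), LinkedClone(parent)
a.write(1, "patched-os-files")    # clone A diverges; clone B is unaffected
```

Because both clones hold only deltas, N clones cost roughly one parent volume plus N small deltas, which is where the storage saving comes from.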

5. Distributed resource scheduling DRS:

Resource scheduling refers to the intelligent scheduling of these virtualized resources according to different loads, so as to achieve the load balance of various resources of the system, and effectively improve the utilization of data center resources while ensuring the high reliability, high availability and good user experience of the whole system.

DRS threshold configuration: Cluster – Configuration – Computing Resource Scheduling Configuration – the automation level, threshold, and scheduling baseline can be set.

Rule group setting: Cluster – configuration – rule group – cluster resource control – add

*Reason for failure of DRS:

1. Virtual machine binding hardware device

2. Binding of virtual machine and host

3. The virtual machine's memory is greater than 8 GB

4. The DRS is turned off

5. Mutually exclusive or clustered virtual machine rules affect migration

6. New imbalances after migration
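The balancing decision DRS automates can be sketched as follows (illustrative only; the function, threshold, and candidate-selection rule are hypothetical simplifications):

```python
# Sketch of a DRS-style decision: if the load spread between hosts exceeds
# a threshold, suggest moving a VM from the busiest to the idlest host.

def drs_suggest(hosts, threshold=0.2):
    """hosts: {name: {"load": fraction, "vms": [...]}}.
    Returns a suggested (vm, src, dst) migration, or None if balanced."""
    busiest = max(hosts, key=lambda h: hosts[h]["load"])
    idlest = min(hosts, key=lambda h: hosts[h]["load"])
    if hosts[busiest]["load"] - hosts[idlest]["load"] <= threshold:
        return None                    # within the configured baseline
    vm = hosts[busiest]["vms"][0]      # real DRS picks the best candidate
    return (vm, busiest, idlest)

hosts = {
    "cna01": {"load": 0.85, "vms": ["vm-a", "vm-b"]},
    "cna02": {"load": 0.30, "vms": ["vm-c"]},
}
move = drs_suggest(hosts)   # suggests migrating vm-a from cna01 to cna02
```

The failure reasons listed above (bound hardware, VM-host binding, rule groups, post-migration imbalance) act as filters on which candidate migrations a real scheduler may actually propose.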

6. Dynamic energy saving dispatching DPM:

The system monitors the running status of compute servers and virtual machines in a cluster. If the cluster's workload drops, the system consolidates the workload onto a few compute servers and automatically powers off the rest; if the workload rises, the system automatically wakes compute servers up and spreads the workload across them.

Power-on: VRM uses the IPMI protocol, via the BMC card on the CNA physical node, to complete the power-on action.

Power-off: VRM performs a safe shutdown of the CNA node through the VNA.

7. Resource scheduling strategy:

Clustered virtual machines: the listed virtual machines must run on the same host. A virtual machine can only be added to one clustered virtual machine rule.

Mutually exclusive virtual machines: the listed virtual machines must run on different hosts. A virtual machine can only be added to one mutually exclusive virtual machine rule.

Virtual machine to host: associate a virtual group with a host group and set association rules to specify whether the members of the selected virtual group can run on the members of a specific host group. (prohibited, should, should not, must)

8. Dynamic adjustment of resources:

Dynamic adjustment of virtual machine resources is supported: users can adjust resource usage according to business load. Adjustable resources include the number of vCPUs, memory size, number of NICs, and the number and size of virtual disks.

9. QoS fine resource control:

QoS stands for quality of service. Resources and priorities can be configured according to each virtual machine's workload, to guarantee the operation of high-priority business VMs.

CPU QoS: share (allocated proportionally), quota (upper limit), reservation (guaranteed minimum)

Memory QoS: share, quota (0 means no limit; effective only when memory reuse is enabled), reservation

Disk QoS: IOPS limits (maximum read, write, and total read/write requests per second) and BPS limits (maximum read, write, and total read/write bytes per second); these can also guarantee a disk's minimum read/write capability.

IOPS = I/O operations per second; BPS = bytes per second.

Network QoS: average bandwidth, peak bandwidth, burst size and traffic shaping (peak shaving and valley filling)

Differences between CPU and memory reservation:

CPU reservation takes effect only when resources are contended; memory reservation, once set, is held whether the memory is used or not.
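The share/reservation/quota interplay for CPU can be sketched numerically (illustrative only; the semantics are inferred from the notes above, and all names are hypothetical):

```python
# Sketch: distribute contended CPU by share weight, but never below a VM's
# reservation or above its quota (assumed semantics, not Huawei code).

def allocate_cpu(total_mhz, vms):
    """vms: {name: {"share": w, "reservation": mhz, "quota": mhz}}."""
    weight = sum(v["share"] for v in vms.values())
    out = {}
    for name, v in vms.items():
        proportional = total_mhz * v["share"] / weight
        out[name] = min(max(proportional, v["reservation"]), v["quota"])
    return out

vms = {
    "high-prio": {"share": 2000, "reservation": 1000, "quota": 6000},
    "low-prio":  {"share": 1000, "reservation": 0,    "quota": 2000},
}
alloc = allocate_cpu(6000, vms)
# high-prio gets 4000 MHz (2/3 of 6000); low-prio is capped by quota at 2000.
```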

10. Virtual machine snapshot:

A snapshot is a consistent, recoverable copy of a virtual machine or disk at a point in time.

The virtual machine snapshot function saves the VM's state at a moment in time, like a photograph, so that the VM can later be restored to the state it was in when the snapshot was taken. A VM snapshot saves the information of all the VM's disks, the disk contents, the VM configuration, and memory data. The function is used in data backup and disaster recovery scenarios to improve system security and reliability.

Full copy snapshot

Differential snapshot:

COW: copy-on-write

ROW: redirect-on-write

Snapshot implementation techniques:

COW: a write triggers a copy; reads are essentially unaffected. Only the first write to a block pays the cost, and that write turns into two I/Os.

A snapshot space is created. On a write, the old data at the source location is first copied into the snapshot space, then the new data is written to the source location. Reads go directly to the source space.




ROW: a write is redirected; writes are unaffected, but reads can be affected (severe data fragmentation may develop).

A snapshot space is created. On a write, the new data is written into the snapshot space. On a read, the snapshot space is checked first, then the source space.




Application scenarios: ROW — virtual machine snapshots in FC; COW — the cloud disk service backend connected to V3/V5 storage.
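The two write paths above can be contrasted in a minimal sketch (hypothetical data structures, not any product's on-disk format):

```python
# Illustrative sketch contrasting COW and ROW snapshot write paths.

class CowSnapshot:
    """Copy-on-write: the first write to a block copies the old data into
    the snapshot space, then overwrites the source in place. Reads hit
    the source directly."""
    def __init__(self, source):
        self.source, self.snap = source, {}

    def write(self, block, data):
        if block not in self.snap:          # first write: copy old data out
            self.snap[block] = self.source.get(block)
        self.source[block] = data           # then write new data in place

    def read(self, block):
        return self.source.get(block)       # reads are unaffected

class RowSnapshot:
    """Redirect-on-write: new data goes to the snapshot space; reads must
    check the snapshot space first, then fall back to the source."""
    def __init__(self, source):
        self.source, self.snap = source, {}

    def write(self, block, data):
        self.snap[block] = data             # one I/O, no copy

    def read(self, block):
        return self.snap.get(block, self.source.get(block))

cow = CowSnapshot({0: "old"}); cow.write(0, "new")
row = RowSnapshot({0: "old"}); row.write(0, "new")
```

Note the asymmetry: COW pays two I/Os on the first write but keeps reads simple; ROW writes once but scatters current data across two spaces, which is the fragmentation risk mentioned above.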

Full copy (image separation technology)

11. Storage hot migration:

While a virtual machine is running normally, an administrator can manually migrate its disks to other storage units. Storage hot migration works within one storage device and between different storage devices under storage virtualization management. It lets customers adjust VM storage resources dynamically without service impact, enabling equipment maintenance, storage DRS resource adjustment, and similar operations.

12. EIP, SNAT, and the difference between them


EIP — definition: elastic IP, also known as static IP or public IP.

Function: bind an elastic IP to a virtual machine; when the VM provides services externally, it can be reached and logged into via that IP.

An EIP can be flexibly bound to and unbound from elastic cloud servers, bare metal servers, virtual IPs, elastic load balancers, and other resources.

Generally implemented by a firewall.


SNAT — source address translation: in the hard SDN scheme, the IP addresses of a subnet in the VPC are mapped to a public IP address, mainly to give internal ECSs access to the Internet.

Differences between the two:

EIP implements a 1:1 mapping between public and private addresses; SNAT is 1:n. (Two main reasons to use SNAT: 1. cost, since public IPs are expensive; 2. security, preventing direct access to ECSs from the external network.)
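The 1:1 vs 1:n distinction can be made concrete with a toy NAT table (illustrative addresses from the documentation ranges; the port-allocation scheme is a deliberate simplification):

```python
# Sketch: EIP is a 1:1 public-to-private binding; SNAT lets many private
# addresses share one public address by rewriting (private_ip, port) pairs.

eip_table = {"203.0.113.10": "192.168.1.5"}   # 1:1 — inbound traffic works

snat_public = "203.0.113.20"
snat_sessions = {}                             # (priv_ip, priv_port) -> pub_port

def snat_out(priv_ip, priv_port):
    """Rewrite an outbound flow's source to the shared public IP."""
    pub_port = 10000 + len(snat_sessions)      # naive port allocation
    snat_sessions[(priv_ip, priv_port)] = pub_port
    return (snat_public, pub_port)

# Two different ECSs share one public IP when reaching the Internet.
a = snat_out("192.168.1.6", 40001)
b = snat_out("192.168.1.7", 40002)
```

The session table is also why plain SNAT gives no inbound reachability: without an existing outbound session there is no mapping to rewrite an incoming packet against, which is exactly the security property noted above.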


Differences between the two EIP types:

Principle: a type I EIP routes traffic through network nodes to the network devices (EIP realized by network nodes); a type II EIP is realized through the service firewall (EIP realized by the firewall).

Configuration process:


Type I: 1) In Service OM, create the external network corresponding to the EIP, set its label to Internet type, and create a subnet.

2) Configure equal-cost routes on the network equipment (integrated access switch, core switch, and NGFW) to route EIP traffic to the soft NAT (NAT server) nodes.

3) Assign external network to VDC tenants on SC.

4) Tenants apply for EIP services.

5) Finally, bind the EIP.


Type II: 1) On the AC (the hardware device realizing hard SDN), create an external gateway, select "L3 shared exit", and fill in the planned EIP network name.

2) In Service OM, create an external network whose name is prefixed with that of the external gateway created on the AC above. Set its label to the Internet network type, create a subnet, and fill in the planned EIP network segment.

3) Assign external network to VDC tenants on SC.

4) Tenants apply for EIP services.

5) Finally, bind the EIP.

Can an EIP only be a public IP?

It can be either public or private.

On what devices is EIP implemented?

Network nodes and firewalls

13. IMS: the difference between public image, private image and shared image

(1) Public image: the standard image provided by the cloud platform system, which is visible to all users.

(2) Private image: a personal image created by the user based on an ECS, visible only to that user

(3) Shared image: users share their private image with other users.

IMS (Image Management Service) provides lifecycle management for images. Users can flexibly use public, private, or shared images to provision ECSs and bare metal servers. Users can also create private images from existing cloud servers or from external image files, enabling migration of services onto or across clouds.

14. HA: high availability

Concept: when a physical server or virtual machine fails, the system automatically restarts the virtual machine on another available physical server in the resource pool.

When HA is triggered, the service is interrupted — more precisely, by the time HA is triggered, the service has already been interrupted.

Live migration is planned migration; HA is unplanned.


Handling policies: no action, restart on the original host, HA, power off the virtual machine


HA requires shared storage; the HA function is enabled on the cluster.

*Ha principle or implementation process and implementation mode:

When a virtual machine fails, VRM detects the failure and first checks whether HA is enabled for the VM. If so, it selects a suitable host and sends the VM's description information to that host. The selected host creates a new virtual machine from the description, mounts the volumes, and starts the VM to restore the service.

1. The management node monitors VM status and detects a VM fault (Tools monitoring, I/O monitoring).

2. It checks whether HA is enabled for the VM and selects an available CNA node according to the VM's specification.

3. The management node sends the VM description information to the target CNA node.

4. The CNA node mounts the corresponding disks and starts the virtual machine according to the VM description.
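The HA flow above can be sketched as follows (hypothetical names and a simplified placement rule — the real selection considers much more than free memory):

```python
# Sketch of an HA failover decision: check the HA flag, pick a node that
# fits the VM's specification, "rebuild" the VM there from shared storage.

def ha_failover(vm, hosts):
    """vm: {"name", "ha_enabled", "mem_gb"}; hosts: {name: free_mem_gb}.
    Returns the host the VM is restarted on, or None."""
    if not vm["ha_enabled"]:
        return None                            # HA disabled: do nothing
    # Select an available CNA node that fits the VM's specification.
    candidates = [h for h, free in hosts.items() if free >= vm["mem_gb"]]
    if not candidates:
        return None
    target = max(candidates, key=lambda h: hosts[h])
    # The target node would now mount the VM's volumes from shared storage
    # and start a new VM from the description sent by the management node.
    hosts[target] -= vm["mem_gb"]
    return target

hosts = {"cna01": 8, "cna02": 32}
target = ha_failover({"name": "vm-a", "ha_enabled": True, "mem_gb": 16}, hosts)
```

Shared storage is what makes step "mount the volumes" possible: the disks were never local to the failed host, so only the VM description has to travel.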

*Fault type and scenario:

Virtual machine faults: Windows blue screen, Linux kernel panic

Physical host faults: power loss, shutdown, restart

How are blue screens and kernel panics detected?

Tools is installed in the guest and maintains a heartbeat with the VMM layer. If no heartbeat is received within the cycle, a fault is suspected; the system then checks whether the VM still has I/O. If not, a blue screen or kernel panic is assumed. This information is collected and reported to VRM.

14. Disk type, disk configuration mode, disk mode:

Disk type:

Normal: the disk is used by a single virtual machine

Shared: the disk is used by multiple virtual machines

Disk configuration modes: normal, normal delay-zeroed, and thin

Differences: 1. Performance: normal > normal delay-zeroed > thin

2. Creation speed: thin > normal delay-zeroed > normal

Disk modes: dependent, independent persistent, and independent nonpersistent

Dependent: follows the virtual machine's life cycle. Snapshots include dependent disks, and restoring a snapshot restores them.

Independent persistent: unaffected by snapshots; data is written persistently to the disk. Snapshots do not include independent persistent disks, and snapshot restore does not recover them.

Independent nonpersistent: unaffected by snapshots; data reverts on restart or shutdown.

The differences lie in snapshot behavior: a dependent disk is included in snapshots;

an independent persistent disk is not included in snapshots;

an independent nonpersistent disk is not included in snapshots, and its data reverts on shutdown.

15. Port group:

Definition: a set of ports with the same network attributes (VLAN, QoS, etc.)

Type: normal, relay

Advanced settings: TCP checksum filling, IP–MAC binding, receive traffic shaping, transmit traffic shaping, and broadcast packet suppression

What are port groups used for?

To configure network attributes and policies for a set of ports uniformly

What is the relationship between port group and VLAN?

VLAN is an attribute in the port group

16. Uplink:

Definition: the logical link connecting a DVS to the physical network — the bridge between virtual machines and the physical world.

It is the link between the distributed switch and the host's physical NICs.

Link aggregation modes: active/standby, load sharing

Active/standby: two ports are bonded to improve reliability.

Load sharing: two ports are bonded and both carry traffic (based on round-robin, source/destination IP, destination MAC, or source MAC).

*What is the difference between uplink and port group?


Uplink group: its members are the CNA host's physical ports or bonded ports — the link by which traffic from inside the CNA host enters the physical network.

Port group ports: the ports through which VMs connect to the network.

They connect different objects:

an uplink is the interface by which the DVS connects to the upstream physical ToR switch;

a port group's ports connect to VMs.


17. Availability zone (AZ):

Definition: a resource-isolated area in which any computing resource can access any storage resource.

1. Server virtualization scenario:

It is a partition of physical resources — a resource pool for users. The clusters in the partition must be associated with the same DVS, use the same storage, and use the same virtualization type.

2. Data center scenario:

Administrators manually partition computing and storage resources so that different resources are used when accessing different virtualization environments, avoiding the impact of different virtualization types on the virtual machines.


18. NUMA:

The CPUs on a server are divided into multiple nodes, each with its own memory module. Accessing memory within a node gives the best performance; cross-node access incurs extra latency.

Host NUMA: enabled in the host BIOS.

Guest NUMA: enabled in the cluster settings of the FC interface.

19. FusionSphere SOI:

A performance monitoring and analysis system: it collects and analyzes virtual machine performance in the cloud system, makes forecasts through modeling, and feeds optimization suggestions back to the administrator.


20. FusionCare:

A tool for health checks and information collection.