Building hospital data centers with the network at the core
Publication Date: 2025-06-03

With the steady adoption of hospital information systems and the continuous growth of hospital business, hospital data has entered the era of "big data" alongside the development of IT technology. Like those in other industries, modern hospital data centers are built from standardized components: infrastructure, servers, and storage. These are tied together into a data center by the network, which serves as the supporting platform for the data center's operation.

Virtualized computing in the cloud data center

The direct effect of server virtualization is higher application density: the number of logical servers (virtual machines) in the same physical space is far greater than the number of physical servers that space could hold. As a result, the overall volume of services each physical server handles grows, and with it the server's external network throughput.

Virtualized computing has gradually become the main supporting technology for cloud computing services, especially in areas such as computing-power leasing and scheduling.

In a cloud computing data center, where computing resources are concentrated at large scale, virtualization abstracts the various x86-based servers into the computing resources of the data center as a whole, forming a resource pool that can be allocated at a chosen granularity.
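Allocation "at a certain granularity" can be pictured as a pool that aggregates host capacity and hands it out in fixed-size units. The sketch below is illustrative only; the class and method names (`ResourcePool`, `Host`, `allocate`) and the granule sizes are assumptions, not any real scheduler's API.

```python
# Hypothetical sketch: physical x86 hosts abstracted into one pool, with
# VM capacity handed out in fixed granules (here 2 vCPUs / 4 GB per unit).
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    vcpus_free: int
    mem_gb_free: int

class ResourcePool:
    """Aggregates hosts and allocates capacity in fixed-size granules."""

    def __init__(self, hosts, vcpu_granule=2, mem_granule_gb=4):
        self.hosts = hosts
        self.vcpu_granule = vcpu_granule
        self.mem_granule_gb = mem_granule_gb

    def total_free(self):
        return (sum(h.vcpus_free for h in self.hosts),
                sum(h.mem_gb_free for h in self.hosts))

    def allocate(self, vcpu_granules, mem_granules):
        """First-fit placement of one VM; returns the chosen host or None."""
        need_vcpus = vcpu_granules * self.vcpu_granule
        need_mem = mem_granules * self.mem_granule_gb
        for h in self.hosts:
            if h.vcpus_free >= need_vcpus and h.mem_gb_free >= need_mem:
                h.vcpus_free -= need_vcpus
                h.mem_gb_free -= need_mem
                return h.name
        return None

pool = ResourcePool([Host("host-a", 32, 128), Host("host-b", 16, 64)])
print(pool.allocate(2, 2))   # 4 vCPUs + 8 GB placed on the first fitting host
print(pool.total_free())     # remaining pool-wide capacity
```

Real cloud schedulers add affinity, overcommit, and live migration on top, but the pooling-and-granularity idea is the same.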


Network design for server virtualization

Because a single physical server may host multiple virtual systems, those systems need to communicate over a network. Unlike ordinary physical systems, which are interconnected through physical network equipment, the network interfaces of virtual systems are themselves virtual and therefore cannot be connected directly to physical network devices.

Virtualization is widely used in data centers, and one popular approach to data center network construction is vSwitch (virtual switch) technology. The vSwitch, one of the earliest network virtualization technologies, has been implemented in software products such as Linux Bridge and the VMware vSwitch. A vSwitch is a VEB (virtual Ethernet bridge): the virtual bridge runs entirely on the server (end-station) and does not rely on cooperation from external switches. Mirroring how the virtual machines themselves are implemented, the network devices are also virtualized and bound to the virtual machines, so that virtual machine network interfaces can be interconnected directly through virtual network devices such as virtual switches inside the server, without traversing the physical network.

Data center network design

Like an ordinary server, each virtual machine has its own virtual NIC (vNIC), and each vNIC has its own MAC address and IP address. A virtual switch (vSwitch) acts as a virtual Layer 2 switch: it connects the virtual NICs to the physical NIC and forwards the virtual machines' packets out through the physical network port. Where needed, a vSwitch can also provide Layer 2 forwarding, security control, port mirroring, and other functions.
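The Layer 2 forwarding a vSwitch performs can be sketched as MAC learning plus flooding. This toy model (port names and MAC addresses are made up for illustration) shows the core behavior any VEB implements:

```python
# Toy model of a vSwitch's Layer 2 forwarding: learn which port (vNIC or
# physical uplink) each source MAC lives on, forward known unicast to the
# learned port, and flood unknown destinations to all other ports.

class VSwitch:
    def __init__(self, ports):
        self.ports = set(ports)      # vNIC ports plus physical uplink(s)
        self.mac_table = {}          # MAC -> port, learned dynamically

    def receive(self, in_port, src_mac, dst_mac):
        """Returns the set of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port          # learn the source MAC
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}       # known unicast
        return self.ports - {in_port}              # flood unknown/broadcast

sw = VSwitch(["vm1", "vm2", "uplink"])
# vm1 -> vm2, destination not yet learned: flooded everywhere except vm1
print(sorted(sw.receive("vm1", "aa:aa", "bb:bb")))   # ['uplink', 'vm2']
# vm2 replies; vm1's MAC was learned, so the frame goes only to vm1
print(sorted(sw.receive("vm2", "bb:bb", "aa:aa")))   # ['vm1']
```

Traffic between two VMs on the same host thus never leaves the server, which is exactly the property the section above describes.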

Data center network equipment selection

Many issues should be considered when selecting data center equipment; they fall into the following areas.

(1) Cloud-based network

A CLOS architecture provides non-blocking switching. The CLOS multi-stage, multi-plane switching architecture fully separates the forwarding and control planes: independent switch fabric boards and independent main control boards can be configured so that full line-rate, non-blocking forwarding between ports is guaranteed, with room for continuous bandwidth upgrades and growing service demands.
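The non-blocking claim rests on the classic three-stage Clos result: with n inputs per ingress-stage switch and m middle-stage planes, the fabric is strictly non-blocking when m ≥ 2n − 1 and rearrangeably non-blocking when m ≥ n. A chassis switch applies the same idea, with line cards as the ingress/egress stages and fabric boards as the middle stage. The function below just encodes that textbook condition; the parameter values are examples.

```python
# Textbook three-stage Clos conditions applied to a chassis switch:
# n = ports feeding the fabric per line card, m = switch fabric planes.
def clos_nonblocking(n_ports_per_linecard, fabric_planes):
    if fabric_planes >= 2 * n_ports_per_linecard - 1:
        return "strictly non-blocking"       # m >= 2n - 1
    if fabric_planes >= n_ports_per_linecard:
        return "rearrangeably non-blocking"  # n <= m < 2n - 1
    return "blocking (oversubscribed)"       # m < n

print(clos_nonblocking(4, 7))  # strictly non-blocking
print(clos_nonblocking(4, 4))  # rearrangeably non-blocking
print(clos_nonblocking(4, 3))  # blocking (oversubscribed)
```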

The service boards and switch fabric boards use a fully orthogonal design: cross-board traffic reaches the fabric boards for switching through orthogonal connectors, achieving "zero" traces on the backplane. This minimizes transmission loss, greatly reduces signal attenuation, and improves the switch's internal transport efficiency for service traffic.

The switch supports 96K VoQ queues and implements fine-grained QoS across the switching fabric. Using the VoQ mechanism and a large ingress buffer, a data center core switch builds independent virtual output queues on the ingress side to apply end-to-end flow control toward each egress, ensuring unified scheduling and orderly forwarding of services and achieving strictly non-blocking switching.
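The point of VoQ is to avoid head-of-line blocking: each ingress port keeps a separate queue per egress port, so a frame stuck behind a congested egress does not hold up frames bound for an idle one. A minimal sketch (class and port names are invented for illustration):

```python
# Sketch of virtual output queuing (VoQ): one queue per egress port at each
# ingress, so congestion on one egress never blocks traffic to another.
from collections import defaultdict, deque

class IngressPort:
    def __init__(self):
        self.voq = defaultdict(deque)    # egress port -> its own queue

    def enqueue(self, egress, frame):
        self.voq[egress].append(frame)

    def schedule(self, ready_egresses):
        """Dequeue one frame for each egress that can accept traffic."""
        sent = {}
        for egress in ready_egresses:
            if self.voq[egress]:
                sent[egress] = self.voq[egress].popleft()
        return sent

port = IngressPort()
port.enqueue("eth1", "frame-A")          # eth1 is congested
port.enqueue("eth2", "frame-B")          # eth2 is idle
# With a single FIFO, frame-B would wait behind frame-A; with VoQ it goes out.
print(port.schedule(ready_egresses=["eth2"]))   # {'eth2': 'frame-B'}
```

Hardware implementations add per-queue credits and fabric arbitration, but the per-egress queue structure is the essential idea.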


(2) Performance meets the development of the network in the next ten years

A single slot supports 2 Tbps of bandwidth, can be smoothly upgraded to 4 Tbps, and supports high-density 40GE and 100GE Ethernet ports, meeting the sustained needs of cloud computing data centers and the demands the next ten years of network development will place on core switches.

It supports industry-leading wire-speed packet forwarding: every board, including the highest-density boards, can forward 64-byte packets at wire speed, comfortably meeting the demanding requirement of large data centers for lossless high-speed forwarding.

For high-performance computing applications, data center core switches support ultra-low-latency technology, with latency as low as 0.5 microseconds, meeting the high-speed transmission requirements of supercomputing center scenarios.

To handle burst traffic in the data center, core switches support an ultra-large distributed buffer design that provides up to 200 milliseconds of buffering per port, meeting the burst-traffic requirements of data center, high-performance computing, and similar networks and ensuring that bursts are not dropped.
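A quick back-of-envelope check shows what "200 ms of buffer per port" means in bytes: buffer size = line rate × absorption time. The port speeds below are assumed examples, not figures from the source.

```python
# Buffer needed to absorb a burst: line_rate (bits/s) * time (s), in bytes.
def buffer_bytes(line_rate_gbps, buffer_ms):
    return line_rate_gbps * 1e9 / 8 * buffer_ms / 1e3

# A 10GE port absorbing a 200 ms burst needs roughly 250 MB of packet buffer;
# at 100GE the same 200 ms implies about 2.5 GB per port.
print(f"10GE:  {buffer_bytes(10, 200) / 1e6:.0f} MB")
print(f"100GE: {buffer_bytes(100, 200) / 1e9:.1f} GB")
```

This is why deep-buffer designs are called out separately from ordinary switching capacity: hundreds of megabytes per port is far beyond on-chip SRAM and requires external memory.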

(3) Data center virtualization allows resources to be allocated on demand

1. Virtual switching unit: virtualization technology combines multiple physical devices into a single logical device with unified operation and management, greatly reducing the number of network nodes and the workload of network operations staff. It increases network reliability, achieving link-failure switchover in 50–200 milliseconds so that critical services are transmitted without interruption. Cross-device link aggregation is supported, making it easier to connect servers and switches with active-active links and increasing the effective bandwidth of network connections.

2. Virtual switching device (VSD): through VSD technology, a data center core switch can offer industry-leading 1:12 device virtualization, partitioning one physical device into multiple virtual devices. Each virtual device has its own configuration management interface and its own hardware resource allocation (such as memory, TCAM, and hardware forwarding tables) and can be restarted independently without affecting the other virtual switches. This maximizes on-demand allocation of network resources, allowing core switch resources to be shared by multiple regions or users at the same time.


3. Multi-link transparent interconnection (TRILL): data center core switches support the TRILL (Transparent Interconnection of Lots of Links) standard protocol defined by the IETF, which enables very large-scale Layer 2 networking in data center scenarios, increases the flexibility of service deployment, and expands the scope of virtual machine migration. TRILL simplifies data center network design, improves scalability and resilience, and lays the foundation for a large-scale, virtualized cloud computing network.

4. Layer 2 generic routing encapsulation (L2-GRE): data center core switches support standards-based L2-GRE technology, which enables Layer 2 data communication between data centers regardless of geographic distance, so that data center resources distributed across different physical locations can be managed and allocated in a unified way.
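L2-over-GRE works by wrapping the original Ethernet frame in a GRE header and an outer IP header, so a routed WAN can carry it between sites as ordinary IP traffic. The dictionary-based sketch below is purely conceptual (no real packet library is used); the addresses are invented, while the two constants are the real registered values: IP protocol 47 for GRE and GRE protocol type 0x6558 for Transparent Ethernet Bridging.

```python
# Conceptual "MAC-in-GRE" encapsulation: the untouched Layer 2 frame rides
# inside GRE inside an outer IP header between the two data center edges.
def encapsulate(inner_frame, tunnel_src_ip, tunnel_dst_ip):
    return {
        "outer_ip": {"src": tunnel_src_ip, "dst": tunnel_dst_ip,
                     "proto": 47},          # IP protocol 47 = GRE
        "gre": {"protocol_type": 0x6558},   # Transparent Ethernet Bridging
        "payload": inner_frame,             # the original L2 frame, intact
    }

frame = {"src_mac": "aa:aa", "dst_mac": "bb:bb", "payload": b"..."}
pkt = encapsulate(frame, "10.0.0.1", "192.168.0.1")
print(pkt["payload"]["dst_mac"])   # bb:bb - the inner frame survives intact
```

Because the inner frame arrives unchanged, hosts in the two sites behave as if they shared one Layer 2 domain, which is what makes cross-site VM migration and unified resource allocation possible.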

5. Virtual Ethernet port aggregation (VEPA): data center core switches support the virtual Ethernet port aggregator defined by the IEEE 802.1Qbg standard, which pulls the traffic generated by server virtual machines out to physical network devices for "hard switching". This both solves the problems that virtual machine traffic cannot be supervised and access-control policies cannot be deployed uniformly, and eliminates the server resources consumed by traditional "soft switching", letting next-generation data center networks better fit virtualized computing environments.

6. Virtual machine awareness and automatic security policy migration: data center core switches support virtual machine awareness and automatic migration of security policies, enabling unified deployment of security controls on virtual machine traffic in large-scale server virtualization environments. By coordinating the data center network management platform, the data center switches, and the virtual machine management platform, the corresponding security policies migrate in step with virtual hosts as they move freely across the network, eliminating the network security gaps of server virtualization environments and reducing network maintenance effort.

7. Unified switching, converging storage and Ethernet: the data center core switch product line for next-generation data centers and cloud computing can provide servers with both Fibre Channel over Ethernet (FCoE) access and Ethernet access, helping users integrate heterogeneous storage and data networks, reduce the number of devices in the network, and truly converge the data center network architecture.

At the same time, data center core switches and 10-Gigabit data center TOR devices can together form an FC/FCoE converged data center network that unifies FC SAN, IP SAN, FCoE SAN, and the service-facing IP network under a single network and management domain, minimizing deployment and cabling costs while protecting users' existing investments.
