A data center is a physical facility that businesses use to house their business-critical applications and information. As data centers evolve, it is important to think long term about how to maintain their reliability and security.

What is a data center?
Data centers are often referred to as a single thing, but in reality they are made up of many technical elements, which can be divided into three categories:
Compute: The memory and processing power to run applications, typically supplied by high-end servers
Storage: Critical enterprise data, held on a variety of media from tape to solid-state drives, with multiple backups
Networking: The interconnections between data center components and the outside world, including routers, switches, application delivery controllers, and more
These are the components IT needs to store and manage the systems most critical to the company's ongoing operations. Reliability, efficiency, security, and continuous evolution are therefore imperatives, and both software and hardware security measures are a must.
In addition to technical equipment, data centers require a large amount of facility infrastructure to keep hardware and software up and running. This includes power subsystems, uninterruptible power supplies (UPS), ventilation and cooling systems, backup generators, and cables connected to external network operators.
Data center architecture
Any large company is likely to have multiple data centers, possibly spread across multiple regions. This gives the organization flexibility in how it backs up its information and protects against natural and man-made disasters such as floods, storms, and terrorist threats. Architecting a data center can require some difficult decisions because the options are almost unlimited. Key considerations include:
Does the business need mirrored data centers?
How much geographic diversity is needed?
If an outage occurs, how quickly must operations recover?
How much room is needed for expansion?
Should the business lease a private data center or use a colocation/hosting service?
What are the bandwidth and power requirements?
Is there a preferred supplier?
What kind of physical security is required?
The answers to these questions help determine how many data centers to build and where. For example, a financial services firm in Manhattan may require continuous operations, since any disruption could cost millions of dollars. The company might decide to build two data centers close by, such as one in New Jersey and one in Connecticut, that mirror each other; either one could then shut down completely without affecting operations, because the company can run everything from the other.
A small professional services firm, by contrast, may not need instant access to information and can keep a primary data center in its offices while backing up data to an alternate site across the country each night. If an outage occurs, it can start a process to recover the information, but it does not have the same urgency as a business that relies on real-time data for competitive advantage.
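The mirrored-site scenario above amounts to a simple failover policy: serve from the primary while it is healthy, switch to the mirror when it is not. The following sketch is purely illustrative; the site names and health flags are hypothetical.

```python
# Illustrative sketch of active-passive failover between two mirrored
# data centers. Site names and health values are hypothetical.

def pick_active_site(sites):
    """Return the first healthy site, in priority order."""
    for name, healthy in sites:
        if healthy:
            return name
    raise RuntimeError("no healthy data center available")

# Normal operation: primary in New Jersey, mirror in Connecticut.
sites = [("nj-dc", True), ("ct-dc", True)]
print(pick_active_site(sites))   # nj-dc

# Primary outage: traffic fails over to the mirror.
sites = [("nj-dc", False), ("ct-dc", True)]
print(pick_active_site(sites))   # ct-dc
```

Real deployments layer health checks, DNS or anycast routing, and data replication on top of this idea, but the priority-ordered decision is the core of it.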
While data centers are often associated with enterprise and network-scale cloud providers, virtually any company can own a data center. For some small and medium-sized businesses, a data center may be located in a room in their office space.
Industry standards
To help IT leaders understand what type of infrastructure to deploy, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published the ANSI/TIA-942 data center standard in 2005. It defines four distinct tiers with design and implementation guidelines: a Tier 1 data center is basically a modified server room, while a Tier 4 data center has the highest levels of system reliability and security.
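As a rough illustration of what the tiers mean in practice, each is commonly associated with an availability target; the figures below are the widely cited Uptime Institute numbers, used here only to show the arithmetic, and the published standards remain authoritative.

```python
# Approximate availability targets commonly cited for each data center tier.
# Figures are illustrative; consult the published standard for exact values.
TIER_AVAILABILITY = {1: 0.99671, 2: 0.99741, 3: 0.99982, 4: 0.99995}

HOURS_PER_YEAR = 24 * 365  # 8760

def max_downtime_hours(tier):
    """Maximum annual downtime implied by a tier's availability target."""
    return (1 - TIER_AVAILABILITY[tier]) * HOURS_PER_YEAR

for tier in sorted(TIER_AVAILABILITY):
    print(f"Tier {tier}: about {max_downtime_hours(tier):.1f} hours of downtime per year")
```

The spread is striking: a Tier 1 facility may be down for more than a day per year in total, while a Tier 4 facility targets under half an hour.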
Data centers are undergoing a major transformation today, and the data centers of the future will be very different from the data centers that most organizations are familiar with.
Businesses are becoming increasingly dynamic and distributed, which means the technology that supports data centers needs to be flexible and scalable as well. As server virtualization has grown in popularity, the volume of traffic moving laterally across the data center (east-west) has come to dwarf traditional client-server traffic moving in and out (north-south). This poses challenges for data center managers, and more are on the horizon.
Here are some of the key technologies that will enable data centers to evolve from static, rigid environments into flexible, agile facilities that can meet the needs of digital enterprises.
Edge computing and microdata centers
Edge computing is an increasingly popular paradigm in which much of the computational work traditionally done in a centralized data center instead happens near the edge of the network, where the data is gathered. That means less latency and lower bandwidth requirements for applications that need near-real-time responses.
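Even before any processing time, physics sets a floor on latency: signals in fiber travel at roughly two-thirds the speed of light. The back-of-the-envelope calculation below uses hypothetical distances to show why moving compute closer to the data helps.

```python
# Back-of-the-envelope propagation delay: why proximity matters for
# near-real-time workloads. The distances are hypothetical examples.
FIBER_KM_PER_S = 200_000  # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km):
    """Minimum round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

print(round_trip_ms(1500))  # distant regional data center: ~15 ms
print(round_trip_ms(15))    # nearby edge site: ~0.15 ms
```

Queuing, routing hops, and processing add considerably more in practice, but the propagation floor alone can rule out a distant data center for latency-sensitive applications.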
A micro data center is a compact unit that collects, processes, analyzes, and stores data physically close to the equipment generating it, putting compute on site for edge computing. Micro data center deployments support many applications, including 5G networks, IoT rollouts, and content delivery networks.
There are many vendors in the micro data center space, some with backgrounds in adjacent areas such as infrastructure as a service (IaaS) or managed services. Micro data centers are often (but not always) sold as pre-assembled appliances, and "micro" covers a fairly wide range of sizes, from a single 19-inch rack to a 40-foot shipping container. Management may be handled by the vendor or outsourced to a managed service provider (MSP).
The role of the cloud
In the past, businesses had the choice of building their own data centers or using a hosting provider or MSP. Going the latter route changed the economics of owning and operating a data center, but the long lead times needed to deploy and manage the technology remained. The rise of IaaS from cloud providers such as Amazon Web Services and Microsoft Azure has given businesses the option of provisioning a virtual data center in the cloud with just a few clicks. In 2019, for the first time, enterprises spent more annually on cloud infrastructure services than on physical data center hardware, and more than half of the servers sold went into cloud providers' data centers.
However, on-premises data centers aren't going away anytime soon. In a 2020 Uptime Institute survey, 58% of respondents said most of their workloads remained in corporate data centers, citing a lack of visibility into public cloud uptime and accountability for it as reasons not to switch.
Many organizations get the best of both worlds with a hybrid cloud approach, in which some workloads are offloaded to the public cloud while others that require more hands-on control or security continue to run in the on-premises data center. According to the Flexera 2020 State of the Cloud report, 87% of the organizations surveyed have adopted a hybrid cloud strategy.
Software-Defined Networking (SDN)
A digital enterprise can only be as agile as its least agile component, and that is often the network. Networks can be made more efficient and flexible by separating the control plane, which determines how packets are routed from one point to another, from the data plane that actually forwards the traffic. Routing can then be optimized in software to adapt to changing network loads.
This architecture is known as software-defined networking (SDN). Applied to the data center, it lets hardware be provisioned and managed through simple, high-level commands, eliminating time-consuming, error-prone manual configuration and making data centers faster to configure.
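The control/data-plane split can be sketched in a few lines: a central controller computes next-hop tables for the whole topology, and the switches merely look them up. This is a conceptual model only; the switch names and topology below are hypothetical, and real SDN controllers speak protocols such as OpenFlow to the hardware.

```python
# Minimal sketch of the control/data-plane split behind SDN.
# The controller computes routes centrally; switches only forward.
# The topology and switch names are hypothetical.
from collections import deque

class Controller:
    """Control plane: computes next-hop tables for every switch."""
    def __init__(self, links):
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, set()).add(b)
            self.adj.setdefault(b, set()).add(a)

    def next_hops(self, dest):
        """BFS outward from dest: maps each switch to the neighbor nearer dest."""
        table, frontier = {dest: dest}, deque([dest])
        while frontier:
            node = frontier.popleft()
            for nbr in self.adj[node]:
                if nbr not in table:
                    table[nbr] = node
                    frontier.append(nbr)
        return table

# Data plane: a switch just consults the table pushed down by the controller.
links = [("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")]
table = Controller(links).next_hops("s3")
print(table["s1"])  # either "s2" or "s4" -- both are one hop closer to s3
```

Because the tables are computed in one place, rerouting around a failed link or rebalancing load becomes a software update pushed to the switches, rather than box-by-box manual reconfiguration.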
Hyperconverged Infrastructure (HCI)
One of the operational challenges of data centers is assembling the right combination of servers, storage, and networking equipment to support demanding applications. Then, once the infrastructure is deployed, IT operations needs to figure out how to scale it quickly without disrupting applications. HCI simplifies this with easy-to-deploy appliances, based on commodity hardware, that combine compute, storage, and networking in a single unit. The architecture scales out simply by adding more nodes.
HCI offers a number of advantages over traditional data center infrastructure, including scalability, easier cloud integration, and simpler configuration and management.
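The scale-out model can be captured in a toy calculation: because every appliance node contributes the same compute and storage, cluster capacity grows linearly with node count. The per-node figures below are hypothetical.

```python
# Toy model of HCI scale-out: each appliance node contributes identical
# compute and storage, so the cluster grows by adding nodes.
# The per-node figures are hypothetical.
NODE = {"cores": 32, "storage_tb": 20}

def cluster_capacity(node_count):
    """Aggregate capacity of a cluster of identical HCI nodes."""
    return {resource: amount * node_count for resource, amount in NODE.items()}

print(cluster_capacity(4))  # {'cores': 128, 'storage_tb': 80}
print(cluster_capacity(5))  # one more node: {'cores': 160, 'storage_tb': 100}
```

Contrast this with a traditional build-out, where compute, storage, and networking are sized and scaled as separate systems, each with its own procurement and integration effort.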
Containers, microservices, and service meshes
Application development is often slowed by the time it takes to provision the infrastructure it runs on, which can significantly hinder an organization's ability to move to a DevOps model. Containers are a method of virtualizing an entire runtime environment, allowing developers to run an application and its dependencies in a self-contained system. Containers are very lightweight and can be created and destroyed quickly, making them ideal for testing how applications behave under specific conditions.
Containerized applications are often split into individual microservices, each encapsulating a small set of functions, which interact to form a complete application. The job of coordinating these individual containers falls to an architectural layer called a service mesh, and while the service mesh does a lot of work to abstract complexity away from developers, it needs its own care and maintenance. Service mesh automation and management should be integrated into comprehensive data center network management, especially as container deployments become more numerous, complex, and strategic.
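The division of labor can be illustrated with a toy model: application code calls services by name, while a mesh layer handles discovery and retries. Everything here, including the service names and handlers, is hypothetical; real meshes such as Istio or Linkerd do this transparently via network proxies rather than in application code.

```python
# Toy model of a service mesh: application code calls services by name,
# while the mesh layer handles discovery and retries.
# The service names and handlers below are hypothetical.

registry = {
    "inventory": lambda item: {"item": item, "stock": 7},
    "pricing": lambda item: {"item": item, "price": 19.99},
}

def mesh_call(service, payload, retries=2):
    """Look up a service by name and invoke it, retrying transient failures."""
    for attempt in range(retries + 1):
        try:
            return registry[service](payload)
        except ConnectionError:  # stand-in for a transient network failure
            if attempt == retries:
                raise

# A "checkout" microservice composes the others without knowing where they run.
order = {**mesh_call("inventory", "widget"), **mesh_call("pricing", "widget")}
print(order)  # {'item': 'widget', 'stock': 7, 'price': 19.99}
```

The point is that discovery, retries, and (in real meshes) encryption and telemetry live in one shared layer instead of being reimplemented inside every microservice.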
Microsegmentation
A traditional data center places all of its security technology at the core, protecting the business with security tools as traffic moves in and out. The rise of east-west traffic within the data center means traffic can bypass firewalls, intrusion prevention systems, and other security appliances, allowing malware to spread quickly. Microsegmentation is a method of creating secure zones within a data center in which groups of resources are isolated from one another, so that if a breach occurs, the damage is contained within that segment. Microsegmentation is typically implemented in software, which makes it very flexible.
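At its core, a microsegmentation policy is a default-deny allowlist of permitted flows between segments. The sketch below illustrates the idea with hypothetical segment names; real implementations enforce such rules in hypervisors, network virtualization layers, or host firewalls.

```python
# Sketch of a microsegmentation policy: only explicitly allowed flows
# between segments are permitted, everything else is denied by default.
# The segment names are hypothetical.

ALLOWED_FLOWS = {
    ("web", "app"),  # web tier may talk to the application tier
    ("app", "db"),   # application tier may talk to the database tier
}

def is_allowed(src_segment, dst_segment):
    """Default-deny check between two segments."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(is_allowed("web", "app"))  # True
print(is_allowed("web", "db"))   # False: web servers cannot reach the
                                 # database directly, limiting lateral movement
```

Because the policy follows workloads rather than network location, a compromised web server still cannot open connections to the database tier, even though both sit inside the same data center.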
Non-Volatile Memory Express (NVMe)
In an increasingly digital world, everything is getting faster, which means data needs to move in and out of data center storage faster. Traditional storage protocols such as Small Computer System Interface (SCSI) and Advanced Technology Attachment (ATA) have been around for decades and are reaching their limits. NVMe is a storage protocol designed to speed the transfer of information between systems and solid-state drives, greatly improving data transfer rates.
NVMe is not limited to directly attached solid-state drives: NVMe over Fabrics (NVMe-oF) allows the creation of ultra-high-speed storage networks with latency that rivals direct-attached storage.
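To make the difference concrete, the calculation below compares how long it takes to read a large dataset at typical sequential throughput over a legacy SATA interface versus NVMe. The throughput figures are rough, representative numbers, not benchmarks of any particular drive.

```python
# Illustrative comparison of sequential read time over two interfaces.
# The throughput figures are rough, representative numbers, not benchmarks.
THROUGHPUT_MB_S = {
    "SATA SSD (AHCI)": 550,        # roughly the SATA 3.0 ceiling
    "NVMe SSD (PCIe 3.0 x4)": 3500,
}

def read_seconds(size_gb, interface):
    """Time to read size_gb at the interface's sequential throughput."""
    return size_gb * 1000 / THROUGHPUT_MB_S[interface]

for name in THROUGHPUT_MB_S:
    print(f"{name}: about {read_seconds(100, name):.0f} s to read 100 GB")
```

Beyond raw throughput, NVMe's deep parallel command queues matter just as much for the small random I/O patterns typical of databases and virtualized workloads.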
GPU computing
Central processing units (CPUs) have powered data center infrastructure for decades, but Moore's Law is running up against physical limits. Meanwhile, new workloads such as analytics, machine learning, and IoT are driving demand for a new computing model that exceeds what CPUs can deliver. Graphics processing units (GPUs), once used only for games, operate fundamentally differently, processing many threads in parallel.
As a result, GPUs are finding a place in the modern data center, which is increasingly tasked with demanding AI and neural network workloads. This will drive a series of shifts in data center architecture, from how servers connect to the network to how they are cooled.
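The parallelism GPUs exploit is data parallelism: the same small operation applied independently to many elements at once. The sketch below mimics that model on the CPU purely for illustration, launching one lightweight worker per element; on a real GPU, thousands of such "kernel" invocations run simultaneously in hardware.

```python
# Conceptual sketch of the data-parallel model GPUs excel at: the same
# operation applied independently to every element. Worker threads here
# stand in for GPU threads purely for illustration.
from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    """Per-element work, independent of every other element."""
    return x * x

data = list(range(8))

# A CPU would typically loop; a GPU launches one thread per element.
with ThreadPoolExecutor(max_workers=8) as pool:
    squares = list(pool.map(kernel, data))

print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Workloads that fit this shape, such as matrix multiplication at the heart of neural network training, are exactly the ones migrating from CPUs to GPUs in the data center.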
Data centers are critical to the success of businesses of almost every size, and that won't change. What is changing fundamentally is how data centers are deployed and the range of technologies that support them. The technologies accelerating this shift are the ones the future will be built on.