Data centers have been virtualizing for years, and the process continues. As one level of virtualization matures, it opens the door to yet another way of virtualizing data center services and workloads.
Here are some of the top trends in data center virtualization:
1. Data Center RAS
Gary Smerdon, president and CEO of TidalScale, points to a resurgence in the importance of data center RAS (reliability, availability, and serviceability). The logic is straightforward. The businesses of today run on data, and the amount of data that needs to be processed is ever-increasing. IT organizations must be able to process this data efficiently and act on the resulting insights to stay competitive. But data is outgrowing the capabilities of traditional IT infrastructure, and in recent years hardware systems have been experiencing higher and higher failure rates.
Google identified uncorrectable memory errors as the number one cause of server failures back in 2009, and today such failures are 50x more likely than they were a decade ago. Other sources of errors include power supplies and fans, networking, and storage. This makes RAS a key trend for data professionals.
“To truly modernize IT infrastructure, hardware systems must be self-optimizing and self-healing,” said Smerdon. “This is the way to achieve maximized uptime and maintain business continuity.”
His company is working with clients that are increasingly turning to software-defined server technology to make self-optimizing and self-healing systems a reality, ushering the IT environment into the future.
2. Virtual Memory at Petabyte Scale
Applications are far hungrier than they used to be. The surge in compute power, coupled with advances in memory technology, is opening up new vistas. This includes the advent of peta-scale pools of virtualized memory, with a CXL (Compute Express Link) backbone to carry the capacity and bandwidth load and memory virtualization software at its heart to pump data at the speed of memory.
“The new interconnect and software will support a massive ecosystem of new CXL-compatible processors, memory chips, PCI cards, servers, and storage systems,” said Frank Berry, Vice President of Marketing, MemVerge.
3. Data-hungry AI Applications Go Mainstream
As AI capabilities evolve, developers are building applications to take advantage of these new capabilities, and the size of data sets is exploding as a result. This trend has exposed the fact that applications are hitting a memory wall, with storage bottlenecks as the symptom that hurts time-to-results.
“When data sets reach hundreds of gigabytes, what worked before doesn’t work anymore as routine tasks such as loading, saving, replicating, and restoring files take minutes to hours,” said Berry. “Real-time apps like fraud detection and social media profiling, as well as long-running apps like video rendering and bioinformatics, often process terabytes of data and need faster access to the data.”
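A rough back-of-the-envelope calculation shows why those routine tasks stretch into minutes. The sketch below uses illustrative bandwidth figures (assumptions for the example, not numbers from Berry or MemVerge) to compare moving a multi-hundred-gigabyte data set over storage versus memory-class media:

```python
# Back-of-the-envelope illustration of the storage bottleneck described above.
# The bandwidth figures are illustrative assumptions, not vendor measurements.

def load_time_seconds(dataset_gb: float, bandwidth_gb_per_s: float) -> float:
    """Time to move a dataset at a given sustained bandwidth."""
    return dataset_gb / bandwidth_gb_per_s

dataset_gb = 500  # a "hundreds of gigabytes" working set

for label, bw in [("NVMe SSD (~3 GB/s sustained)", 3.0),
                  ("memory-class media (~30 GB/s)", 30.0)]:
    seconds = load_time_seconds(dataset_gb, bw)
    print(f"{label}: about {seconds:.0f} s to move {dataset_gb} GB")
```

At those assumed rates, a single load from storage already takes minutes, and the gap widens further once data sets reach terabytes or are loaded repeatedly.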
4. CXL Technology Answers the Call
CXL represents a long-awaited modernization of the data center memory tier, making memory a first-class citizen of the data center virtualization hierarchy. The CXL interconnect supports memory-class bandwidth and latency inside a server and extends them across multiple racks. The before-and-after contrast is stark:
- Before CXL, only a few terabytes of memory inside a single server could be shared. After CXL, memory in multiple servers and in external memory arrays will allow petabytes of memory to be shared.
- Before CXL, memory was siloed by processor. After CXL, memory will be efficiently pooled and shared by CPUs, GPUs, DPUs, and other processors.
- Before CXL, memory was not composable the way compute, storage, and networking are. After CXL, the capacity, performance, QoS, availability, security, and mobility of memory will be provisioned by memory virtualization software.
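To make the pooling and composability idea concrete, here is a minimal, hypothetical sketch of how memory virtualization software might carve a shared memory pool up among processors. The MemoryPool class, its methods, and the capacities are invented for illustration; they do not correspond to MemVerge's, the CXL Consortium's, or any other vendor's actual API.

```python
# Hypothetical model of a software-provisioned, shared memory pool.
# Class and method names are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class MemoryPool:
    capacity_tb: float                      # total pooled (e.g., CXL-attached) memory
    allocations: dict = field(default_factory=dict)

    def provision(self, host: str, size_tb: float) -> None:
        """Carve a slice of the shared pool out to a host (CPU, GPU, or DPU)."""
        used = sum(self.allocations.values())
        if used + size_tb > self.capacity_tb:
            raise MemoryError("pool exhausted")
        self.allocations[host] = self.allocations.get(host, 0) + size_tb

    def release(self, host: str) -> None:
        """Return a host's slice to the pool so another processor can use it."""
        self.allocations.pop(host, None)

# Memory that was once siloed per server is handed out and reclaimed on demand.
pool = MemoryPool(capacity_tb=64.0)
pool.provision("cpu-node-01", 8.0)
pool.provision("gpu-node-07", 16.0)
pool.release("cpu-node-01")
print(pool.allocations)   # {'gpu-node-07': 16.0}
```

The point of the sketch is simply that capacity becomes a schedulable resource: the software, not the server chassis, decides which processor sees how much memory at any moment.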
“Long term, peta-scale memory promises to change the way applications are built and used,” said Berry. “Apps such as chip design, animation, and genomic analytics, which are disassembled and worked on in separate pieces, will be able to load petabytes of data in a single memory space, which will usher in a new era of collaboration.”
5. Data Centers Can Virtualize Memory Now
The entire storage ecosystem is in motion, developing and releasing CXL-compatible hardware and software. A few products designed for the CXL era are available today and are ready to move to CXL when compatible servers start shipping:
- Composable Data Center Infrastructure Systems: Liqid and GigaIO are delivering composable storage systems, including memory, today.
- Memory Virtualization Software: MemVerge is shipping software that virtualizes PMem and DRAM, and provides a suite of memory data services for provisioning the pool of memory.
- Memory Cards that can be shared by different processors: SMART Modular offers a card with PMem that can be used by heterogeneous processors (not just Intel processors).
Further, products are beginning to appear across the data center infrastructure spectrum: CXL-compatible processors, memory chips, memory modules, memory arrays, switches, servers with CXL interconnects inside, clusters with CXL servers and CXL memory arrays, and memory tiering software.
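As a rough illustration of what the memory tiering software mentioned above does, the toy sketch below keeps frequently accessed ("hot") pages in local DRAM and demotes cold pages to a larger CXL-attached tier. The thresholds, page numbers, and function name are arbitrary assumptions, not any product's actual policy.

```python
# Toy illustration of a memory-tiering policy: hot pages stay in DRAM,
# cold pages are demoted to a larger CXL-attached tier.
# Thresholds and access counts are arbitrary assumptions for illustration.

def place_pages(access_counts, hot_threshold=100):
    """Return (dram_pages, cxl_pages) given per-page access counts."""
    dram, cxl = [], []
    for page, count in access_counts.items():
        (dram if count >= hot_threshold else cxl).append(page)
    return dram, cxl

counts = {0: 950, 1: 12, 2: 430, 3: 3}
dram_pages, cxl_pages = place_pages(counts)
print("DRAM:", dram_pages, "CXL:", cxl_pages)   # DRAM: [0, 2] CXL: [1, 3]
```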
6. Hyperscalers Lead the Way
Hyperscale deployments will start in 2023 at the earliest, with a few point solutions for specific apps. Deployment at scale will follow once a critical mass of components, systems, and open-source software has been integrated and tested. Hyperscalers are leading the way among early adopters, focusing on apps with massive data sets where the need is to deliver results as fast as possible. These include:
- Real-time apps: fraud detection, retail recommendation engines, social media profiling
- Long-running apps: genomic research, EDA, animation/VFX, geophysical exploration
- Mainstream business apps with AI that drives large datasets