Using Composable Disaggregated Infrastructure to Get More ROI from Clusters

June 28, 2022

Composable disaggregated infrastructure (CDI) refers to the use of software and low-latency fabrics to pool hardware resources so they can be dynamically combined to meet shifting workload needs.

Technically, disaggregated resources are connected by a PCIe-based fabric and controlled by a management plane (a GUI, a CLI, or an integration with software stacks like SLURM and Bright Cluster Manager) that lets you dynamically provision bare-metal HPC and AI clusters from best-fit hardware.
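To make the management-plane idea concrete, here is a minimal, purely illustrative Python sketch of the concept: a shared pool of disaggregated devices that can be composed into a bare-metal node and released back to the pool. All class, method, and device names here are hypothetical for illustration; they are not the API of Liqid or any other CDI vendor.

```python
# Illustrative simulation of a CDI management plane (hypothetical names,
# not a real vendor API). Devices sit in a shared pool until a node
# "composes" them; releasing a node returns its devices to the pool.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Device:
    dev_id: str
    kind: str                         # e.g. "gpu", "nvme", "fpga"
    attached_to: Optional[str] = None # node name, or None if free in the pool

class ManagementPlane:
    def __init__(self, devices):
        self.pool = {d.dev_id: d for d in devices}

    def free(self, kind):
        """Devices of a given kind still unassigned in the pool."""
        return [d for d in self.pool.values()
                if d.kind == kind and d.attached_to is None]

    def compose(self, node, requests):
        """Attach the requested count of each device kind to a node."""
        for kind, count in requests.items():
            candidates = self.free(kind)
            if len(candidates) < count:
                raise RuntimeError(f"not enough free {kind} devices")
            for d in candidates[:count]:
                d.attached_to = node  # over the PCIe fabric in a real system

    def release(self, node):
        """Return all of a node's devices to the shared pool."""
        for d in self.pool.values():
            if d.attached_to == node:
                d.attached_to = None

# Build a small pool: 4 GPUs and 2 NVMe drives.
plane = ManagementPlane(
    [Device(f"gpu{i}", "gpu") for i in range(4)]
    + [Device(f"nvme{i}", "nvme") for i in range(2)]
)
plane.compose("train-node", {"gpu": 3, "nvme": 1})
print(len(plane.free("gpu")))  # one GPU left free in the pool
```

The point of the sketch is the workflow, not the implementation: resources are assigned and reclaimed in software, without rewiring servers or inserting a virtualization layer.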

CDI uses cloud-native design principles to deliver best-in-class performance and flexibility on-premises. As a result, you get the flexibility of the cloud and the value of virtualization, but with the performance of bare metal.

You can also run diverse projects on a single cluster while still optimizing for each unique workload. Composable infrastructure also works with a range of security options, so you can protect your workloads by leveraging at-rest and in-flight encryption, implementing a zero-trust network, running multilevel security (MLS), and more.

How is CDI different from traditional infrastructure?

Both kinds of solution can manage virtual machines, containers, bare-metal workloads, and many different applications. But with composable infrastructure, compute and storage resources can be deployed, managed, and scaled separately. Thanks to user-friendly GUIs and automation via APIs, it is also much easier to manage than traditional solutions.

This makes the initial design and configuration of the infrastructure less consequential for performance over the lifetime of the system. If, for whatever reason, you aren't getting ideal performance from the original configuration of CPUs, memory, GPUs, and storage devices, you can easily redeploy those individual resources without incurring the performance overhead of virtualization.

What are some use cases for composable infrastructure?

Composable infrastructure works well in HPC and AI/ML clusters, multi-tenant environments with a shared resource pool, and mixed-workload deployments. It's a good option for greenfield projects, or net-new deployments, where you can design in composable disaggregated infrastructure from the ground up. However, it can also be added to brownfield environments, where you want to add a resource pool, such as GPUs, to existing infrastructure without replacing or redesigning the existing cluster.

CDI expansions to existing infrastructure are a fantastic way to extend the useful life of a high-cost deployment or to experiment with new types of compute resources for your workloads. Usually, these expansions use JBODs, JBOFs, or JBOXs to minimize the footprint of the new components while maximizing density.

Additionally, this approach eliminates the need for supporting infrastructure, such as the CPUs and memory required by traditional servers, that would otherwise accompany the new resources.

How does composable disaggregated infrastructure fit into a cluster?

Because it has little to no performance impact compared to a traditional cluster, CDI can be designed into, or added to, a cluster without sacrificing any other key area of design. The most important aspect of cluster design when considering CDI is networking: while other resources scale easily over PCIe, the networking layer must have the capacity to support expansion and reconfiguration.

Silicon Mechanics designs clusters, such as the Miranda CDI Cluster, that combine the most appropriate compute, networking, and storage hardware for client needs with a solution from Liqid that uses a PCIe-based fabric and the Liqid Command Center GUI to provision bare-metal resources. This helps us deliver best-in-class performance and flexibility, and it gives clients much more ROI than traditionally designed clusters.

To learn more about CDI and the Silicon Mechanics Miranda CDI Cluster, watch our on-demand webinar on leveraging CDI for high-performance workloads.

About Silicon Mechanics

Silicon Mechanics, Inc. is one of the world's largest private providers of high-performance computing (HPC), artificial intelligence (AI), and enterprise storage solutions. Since 2001, Silicon Mechanics' clients have relied on its custom-tailored open-source systems and professional services expertise to overcome the world's most complex computing challenges. With thousands of clients across the aerospace and defense, education/research, financial services, government, life sciences/healthcare, and oil and gas sectors, Silicon Mechanics solutions always come with "Expert Included"℠.
