
Using Composable Disaggregated Infrastructure to Get More ROI from Clusters

June 28, 2022

Composable disaggregated infrastructure (CDI) refers to the use of software and low-latency fabrics to pool hardware resources so they can be dynamically combined to meet shifting workload needs.

Technically, disaggregated resources are connected by a PCIe-based fabric and controlled by a management plane (a GUI, a CLI, or integration with other software stacks such as Slurm and Bright Cluster Manager) that lets you dynamically provision bare-metal HPC and AI clusters from best-fit hardware.
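To make the idea concrete, here is a minimal sketch of what a management plane does at its core: it tracks a shared pool of disaggregated devices on the fabric and claims or releases them to compose bare-metal nodes on demand. The class and method names below are purely illustrative assumptions, not any vendor's actual API.

```python
# Illustrative sketch of a CDI management plane's core bookkeeping: a free
# pool of disaggregated devices, from which bare-metal nodes are composed
# on demand. All names here are hypothetical, not a real product API.

class ResourcePool:
    def __init__(self, devices):
        # devices: mapping of device type -> list of device IDs on the fabric
        self.free = {dtype: list(ids) for dtype, ids in devices.items()}

    def compose_node(self, spec):
        """Claim devices matching spec (type -> count) and return a node."""
        claimed = {}
        for dtype, count in spec.items():
            if len(self.free.get(dtype, [])) < count:
                # Roll back partial claims if the pool can't satisfy the spec
                for t, ids in claimed.items():
                    self.free[t] = ids + self.free[t]
                raise RuntimeError(f"not enough free {dtype} devices")
            claimed[dtype] = [self.free[dtype].pop() for _ in range(count)]
        return claimed

    def release_node(self, node):
        """Return a node's devices to the free pool for reuse."""
        for dtype, ids in node.items():
            self.free[dtype].extend(ids)

pool = ResourcePool({"gpu": ["gpu0", "gpu1", "gpu2", "gpu3"],
                     "nvme": ["nvme0", "nvme1"]})
training_node = pool.compose_node({"gpu": 4, "nvme": 1})
pool.release_node(training_node)  # devices return to the pool after the job
```

The key point the sketch captures is that devices are never permanently tied to a server: when a workload finishes, its GPUs and drives go straight back into the shared pool for the next job.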

CDI applies cloud-native design principles to deliver best-in-class performance and flexibility on-premises. As a result, you get the flexibility of the cloud and the value of virtualization, but with the performance of bare metal.

You can also run diverse projects on a single cluster while still optimizing for each unique workload. Composable infrastructure also works with a range of security options, so you can secure your workloads with at-rest and in-flight encryption, implement a zero-trust network, run multilevel security (MLS), and more.

How is CDI infrastructure different from traditional infrastructure?

Both kinds of solution can manage virtual machines, containers, bare-metal workloads, and many different applications. But with composable infrastructure, compute and storage resources can be deployed, managed, and scaled separately. Thanks to user-friendly GUIs and automation via APIs, it is also much easier to manage than traditional solutions.

This makes the initial design and configuration of the infrastructure less consequential to performance over the lifetime of the system. If, for whatever reason, you aren't getting ideal performance from the original configuration of CPUs, memory, GPUs, and storage devices, you can easily redeploy those individual resources without incurring the performance overhead of virtualization.
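Redeployment of this kind amounts to detaching a device from one node's fabric assignment and attaching it to another, with no hypervisor in the data path. The sketch below illustrates that rebalancing step; the function and node representations are hypothetical, chosen only to show the concept.

```python
# Illustrative sketch of reprovisioning in a composable cluster: a GPU is
# detached from one bare-metal node's device map and attached to another
# over the fabric. Names and data shapes are hypothetical.

def move_device(source, target, dtype, device_id):
    """Detach device_id of type dtype from source and attach it to target."""
    if device_id not in source.get(dtype, []):
        raise ValueError(f"{device_id} is not attached to the source node")
    source[dtype].remove(device_id)
    target.setdefault(dtype, []).append(device_id)

# Two nodes composed earlier from a shared device pool
node_a = {"gpu": ["gpu0", "gpu1"], "nvme": ["nvme0"]}
node_b = {"gpu": ["gpu2"]}

# Rebalance: node_b's workload needs a second GPU
move_device(node_a, node_b, "gpu", "gpu1")
```

Because the device simply changes owners on the fabric, neither node has to be rebuilt or virtualized to absorb the change.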

What are some use cases for composable infrastructure?

Composable infrastructure works well in HPC and AI/ML clusters, multi-tenant environments with a shared resource pool, and mixed-workload deployments. It's a good option for greenfield projects or net-new deployments, where you can design in composable disaggregated infrastructure from the ground up. However, it can also be added to brownfield environments, where you want to add a resource pool, such as GPUs, to existing infrastructure without replacing or redesigning the existing cluster or deployment.

CDI expansions to existing infrastructure are a fantastic way to extend the useful life of a high-cost deployment or experiment with new types of compute resources for your workloads. Usually, these expansions can use JBODs, JBOFs, or JBOXs to minimize the footprint of new components while maximizing density.

This approach also eliminates the need for the supporting infrastructure, such as the CPUs and memory, that traditional servers would otherwise require.

How does composable disaggregated infrastructure fit into a cluster?

Because it has little to no performance impact compared to a traditional cluster, CDI can be designed into, or added to, a cluster without sacrificing any other key areas of design. The most important aspect of cluster design when considering CDI is networking. While other resources are easily scaled via PCIe, the networking layer must have the capacity to support expansion and reconfiguration.

Silicon Mechanics designs clusters, such as the Miranda CDI Cluster, that combine the most appropriate compute, networking, and storage hardware for client needs with a solution from Liqid that uses a PCIe-based fabric and the Liqid Command Center GUI to provision bare-metal resources. This helps us deliver best-in-class performance and flexibility, and it gives clients much more ROI than traditionally designed clusters.

To learn more about CDI and the Silicon Mechanics Miranda CDI Cluster, watch our on-demand webinar on leveraging CDI for high-performance workloads.


About Silicon Mechanics

Silicon Mechanics, Inc. is one of the world’s largest private providers of high-performance computing (HPC), artificial intelligence (AI), and enterprise storage solutions. Since 2001, Silicon Mechanics’ clients have relied on its custom-tailored open-source systems and professional services expertise to overcome the world’s most complex computing challenges. With thousands of clients across the aerospace and defense, education/research, financial services, government, life sciences/healthcare, and oil and gas sectors, Silicon Mechanics solutions always come with “Expert Included”℠.


