Before we get started, can everyone reading this article agree that large compute clusters, supercomputers, HPC systems, or whatever else we choose to call them, are complex? Ok, good! Because, while “out of the box” solutions make things somewhat easier, it’s an oversimplification to say that’s all you need – and you end up missing a few key points about the value of complexity to the end user and their organization.
Many, many individual pieces need to work together to create a cluster, and all of them can have a huge impact on the system’s overall performance. From a system architect’s perspective, each component is a decision that must be made, a choice that either improves your performance for a particular workload or hurts it. These decisions can weigh heavily on architects, but often weigh even heavier on the IT infrastructure managers responsible for their company’s budget and resources.
Bad choices in system design have a huge impact on a client organization’s capabilities and, ultimately, revenue. The problem is that it’s hard to know what to tell the system architect about which choices you prefer. Yes, you could design the system yourself, but who has time to bone up on all the issues you need to consider and learn all the technologies out there that might work? Every choice could be a mistake – and your team could suffer for it for years.
From that point of view, it’s easy to see why the ‘supercomputer-in-a-box’ idea is appealing to most people on some level. A major technology provider with tons of experience, resources, and engineering brainpower figures out the best possible system for HPC, AI, or data analytics and you buy it.
Maybe it won’t be perfectly tuned to your workload, but at least you know what you can expect, and you don’t have to worry over the little decisions. It’s comfortable, predictable, and relatively safe. It’s also expensive, restrictive, and commonplace. Commonplace is important because, if everyone buys the same solution, no one has a competitive advantage. And who doesn’t want a competitive advantage?
Instead, you could get the best of both worlds by working with a solution provider that has experience with the latest technology as well as a solid foundation in proven platforms, and have that provider design something unique to your needs.
The result is that you get a one-of-a-kind solution tuned to your workload that also stretches your budget further than the preconfigured solution could. What’s even better is that, today, that knowledgeable system architect you’ve hired will be aware of how you can take advantage of the best parts of the preconfigured solution.
Specifically, your system architect can give you the predictability, the simplicity, and the safety of that pre-configured supercomputer-in-a-box you keep seeing advertised everywhere and combine them with the best elements of a custom solution, namely lower costs, fine-tuned performance, and competitive advantage.
How? Building blocks.
Let me explain. System designers at reputable firms will have access to all the individual pieces that make up these pre-configured solutions. For instance, my team builds custom AI solutions based on the NVIDIA HGX™ server platform, which features the A100 GPU. When optimized to a specific workload, these custom solutions provide unmatched performance, and they can be tailored to be easy to use, seamlessly scalable, and more. Out-of-the-box solutions can’t give you that.
Generally, no two solutions are alike. This is great for customers that either need a unique solution for a unique problem or don’t have the budget for a pre-configured solution large enough to meet their needs. These customers are willing to introduce variables to their system design in order to reach their goals. Not everyone is like that, and rightfully so, which is where the out-of-the-box options are most valuable.
Architects like our team at Silicon Mechanics want to reduce the number of variables in system designs to lower the perceived risk for our customers. We believe that building a strong solution for any workload requires balance between network, storage, and compute. So, we’re developing network, storage, and compute building blocks that are each unique, tested, and high-performance, but have their own, specific purpose in a larger system design.
Take storage for instance. A common trend nowadays is the faster the better, but that is only somewhat true. Not all workloads take advantage of that speed to the fullest, so using the fastest NVMe drives available, or even persistent memory devices, may be an unnecessary expense for you. Instead, we configure building blocks for slower flash storage or even *gasp* spinning disks.
We often suggest a tiered storage structure, which combines varying levels of high-bandwidth and high-capacity storage to provide top-tier performance at a better price. A pre-configured solution may only have the fastest, most expensive storage option available. The decision is no longer which drives to use or what form factor of storage server to choose, but instead what level of storage performance your workload needs and how much data you will have.
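To make the tiering idea concrete, here is a toy Python sketch of a placement policy that assigns datasets to a tier based on how often they are accessed, then totals the cost. Every tier name, price, threshold, and bandwidth figure below is made up for illustration — this is a back-of-the-envelope model, not any real product’s pricing or API.

```python
from dataclasses import dataclass


@dataclass
class Tier:
    """One storage tier in a hypothetical tiered design."""
    name: str
    cost_per_tb: float    # illustrative $/TB, not real pricing
    bandwidth_gbps: float # illustrative per-tier throughput


# Three illustrative tiers, fastest and most expensive first.
TIERS = [
    Tier("nvme", cost_per_tb=200.0, bandwidth_gbps=25.0),
    Tier("sata_flash", cost_per_tb=80.0, bandwidth_gbps=5.0),
    Tier("spinning_disk", cost_per_tb=20.0, bandwidth_gbps=1.5),
]


def place(accesses_per_day: int) -> Tier:
    """Pick a tier from how often a dataset is read each day."""
    if accesses_per_day >= 100:   # hot: read constantly
        return TIERS[0]
    if accesses_per_day >= 10:    # warm: read occasionally
        return TIERS[1]
    return TIERS[2]               # cold: archival


def blended_cost(datasets: list[tuple[int, float]]) -> float:
    """Total cost in $ for a list of (accesses_per_day, size_tb) pairs."""
    return sum(place(a).cost_per_tb * tb for a, tb in datasets)


# Example: 10 TB of hot data plus 100 TB of cold data.
# Tiered: 10 * $200 + 100 * $20 = $4,000.
# All-NVMe for the same 110 TB would be 110 * $200 = $22,000.
print(blended_cost([(500, 10.0), (1, 100.0)]))
```

The point the arithmetic makes is the one in the paragraph above: when most of your capacity is cold, paying NVMe prices for all of it buys bandwidth the workload never uses.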
With the building blocks approach, the key is staying flexible. By configuring and testing building blocks at the storage, compute, or networking level, we gain some of the predictability of a pre-configured solution but can still tune the overall system to match your specific goals. This means as a customer, you can start small, investing only in what you need now, because you can always expand the system later. This is like the concept of rightsizing a cluster: you don’t always need a system to rank high on the Top500 list, you just need it to work for your team.
A fully custom solution, on the other hand, may be difficult to scale. And a pre-configured solution may mean you have to start from scratch with the new version, or, even worse, pay a premium for an old design that is no longer competitive with the market.
So as a solution designer who loves building unique solutions from the ground up and tinkering with components to get the most out of a solution, even I can endorse going with the middle ground. There are still great reasons to go with a fully custom solution and we are happy to build that with you. Similarly, the pre-configured solutions may be right for you and can absolutely be successful. It’s about finding a partner that you trust, knowing your options, leveraging your budget, and finding the solution that’s right for your team.
Right now, the Silicon Mechanics team is using this building blocks approach to design scalable, world-class AI and HPC solutions that feature the latest and greatest GPU architectures from NVIDIA. You can think of it as supercomputer performance without a supercomputer footprint. If that interests you, I suggest checking out NVIDIA’s GPU Technology Conference (GTC) where our team will be giving several presentations about how to use building block technology and other best practices for industry-specific solutions.
Learn more about the value of a building block methodology for improving time-to-result by reading this white paper.
Silicon Mechanics, Inc. is one of the world’s largest private providers of high-performance computing (HPC), artificial intelligence (AI), and enterprise storage solutions. Since 2001, Silicon Mechanics’ clients have relied on its custom-tailored open-source systems and professional services expertise to overcome the world’s most complex computing challenges. With thousands of clients across the aerospace and defense, education/research, financial services, government, life sciences/healthcare, and oil and gas sectors, Silicon Mechanics solutions always come with “Expert Included”℠.