Benchmark your AI or HPC workload on the new NVIDIA A100 systems with AMD EPYC processors
Modern compute-intensive workloads in artificial intelligence (AI) training and inference benefit massively from increased GPU performance. To support these users, Silicon Mechanics is offering workload benchmarking sessions on our new NVIDIA HGX-based GPX servers.
These systems pair NVIDIA A100 GPUs with AMD EPYC CPUs, the world's highest-performing x86 server CPUs, to minimize bottlenecks between the compute and the acceleration.
The result is faster time to insight and improved ROI. In fact, this pairing provides the world's fastest GPU memory bandwidth (over 2 TB/s) to run the largest models and datasets.
Our platform offers the performance of leading GPU-accelerated server appliances without the high total cost of ownership (TCO) of thousands of servers or the locked-in design and vendor commitment of the NVIDIA® DGX™ A100.
If you are looking to improve performance for highly parallel workloads, consider benchmarking your workload on an NVIDIA A100-powered GPX system, configured as follows, and see what it can do for you.
Dual AMD EPYC 7532 32-Core 2.4GHz CPUs
NVIDIA® HGX™ A100 - 4x A100 GPUs - 160GB Memory
2TB 3200MHz ECC Memory
2x 3.84TB U.2 NVMe PCIe 4.0 SSDs
4x Mellanox ConnectX-6 VPI 200Gb/s InfiniBand
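Before booking a benchmarking session, it can help to have a rough baseline of your own. The sketch below is a minimal, illustrative timing harness, not Silicon Mechanics' benchmarking methodology; it measures dense matrix-multiply throughput on the CPU with NumPy. A real GPU benchmark on an A100 system would instead use CUDA-aware tools or framework-level profilers, but the same measure-a-warm-loop pattern applies.

```python
import time
import numpy as np

def gemm_gflops(n=1024, repeats=5):
    """Average GFLOP/s for an n x n single-precision matrix multiply.

    Illustrative CPU-side sketch only; GPU benchmarking on A100-class
    hardware would use CUDA-based tools rather than NumPy.
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    np.dot(a, b)  # warm-up pass so one-time setup costs are excluded
    start = time.perf_counter()
    for _ in range(repeats):
        np.dot(a, b)
    elapsed = (time.perf_counter() - start) / repeats
    flops = 2 * n ** 3  # multiplies plus adds in a dense GEMM
    return flops / elapsed / 1e9

if __name__ == "__main__":
    print(f"~{gemm_gflops():.1f} GFLOP/s sustained")
```

Numbers from a harness like this are only meaningful relative to each other (same machine, same problem size), which is exactly why benchmarking your actual workload on the target hardware matters.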
Read Full Report on SPEC.org >
OEM published score(s) for EPYC may vary. SPEC®, SPECrate® and SPEC CPU® are registered trademarks of the Standard Performance Evaluation Corporation. See SPEC.org for more information >
Read More on NVIDIA.com >
Our engineers are not only experts in traditional HPC and AI technologies; they also routinely build complex rack-scale solutions with today's newest innovations, so we can design and build the best solution for your unique needs.
Talk to an engineer and see how we can help solve your computing challenges today.