The NVIDIA H100 Tensor Core GPU enables an order-of-magnitude leap for large-scale AI and HPC, with unprecedented performance, scalability, and security for every data center. With NVIDIA AI Enterprise for streamlined AI development and deployment, and the NVIDIA NVLink Switch System for direct communication between up to 256 GPUs, H100 accelerates everything from exascale workloads, using a dedicated Transformer Engine for trillion-parameter language models, down to right-sized Multi-Instance GPU (MIG) partitions.
Systems with NVIDIA H100 GPUs support PCIe Gen5, providing 128GB/s of bidirectional throughput, and HBM3 memory, which delivers 3TB/sec of memory bandwidth, eliminating bottlenecks for memory- and network-constrained workflows.
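As a rough sanity check on the PCIe Gen5 figure above, the number can be derived from the link parameters. This sketch assumes a x16 link at 32 GT/s per lane with standard 128b/130b encoding; the result is approximate, and the 128GB/s marketing figure rounds up from the raw signaling rate.

```python
# Back-of-the-envelope derivation of PCIe Gen5 x16 throughput.
# Assumptions: 32 GT/s per lane, 16 lanes, 128b/130b line coding.

GT_PER_LANE = 32         # gigatransfers/s per lane (PCIe Gen5)
LANES = 16
ENCODING = 128 / 130     # 128b/130b line-code efficiency

# One transfer carries one bit per lane; divide by 8 for bytes.
per_direction_GBps = GT_PER_LANE * LANES * ENCODING / 8
bidirectional_GBps = 2 * per_direction_GBps

print(f"~{per_direction_GBps:.0f} GB/s per direction")   # ~63 GB/s
print(f"~{bidirectional_GBps:.0f} GB/s bidirectional")   # ~126 GB/s
```

The effective payload rate after encoding comes to roughly 126GB/s bidirectional, which vendors commonly quote as 128GB/s.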
Silicon Mechanics H100 GPU-accelerated servers are available in a variety of form factors, GPU densities, and storage capacities. As with all Silicon Mechanics systems, these servers are highly customizable via the online system configurator, allowing you to get the optimal performance for your AI and HPC workflows.
For large-scale deployments, NVIDIA H100 GPUs are a key component of our GPU-accelerated reference architecture. Learn more about our cluster-scale solutions here.
- H100 is the first GPU to support PCIe Gen5, providing 128GB/s of bidirectional throughput
- H100 is the world’s first GPU with HBM3 memory, providing 3TB/sec of memory bandwidth
- An 8-GPU H100 system provides up to 32 petaFLOPS of FP8 deep learning compute performance
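The 8-GPU FP8 figure in the list above follows from the per-GPU rating. This sketch assumes the H100 SXM datasheet value of roughly 3,958 TFLOPS of FP8 Tensor Core throughput per GPU (with structured sparsity); the exact number varies by form factor and clocks.

```python
# Aggregate FP8 throughput for an 8-GPU H100 system.
# Assumption: ~3,958 TFLOPS FP8 per H100 SXM GPU (with sparsity).

FP8_TFLOPS_PER_GPU = 3958
GPUS = 8

total_petaflops = FP8_TFLOPS_PER_GPU * GPUS / 1000  # TFLOPS -> PFLOPS
print(f"~{total_petaflops:.0f} petaFLOPS FP8")      # ~32 petaFLOPS FP8
```

Eight GPUs at ~3.96 PFLOPS each gives ~31.7 PFLOPS, which is quoted as "up to 32 petaFLOPS."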
Configure & Buy
Our engineers are not only experts in traditional HPC and AI technologies, but also routinely build complex rack-scale solutions with today's newest innovations, so we can design and build the best solution for your unique needs.
Talk to an engineer and see how we can help solve your computing challenges today.