
| Main Specifications | |
| --- | --- |
| Product Series | NVIDIA A100 |
| Core Type | NVIDIA Tensor Core |
| Host Interface | PCI Express 4.0 x16 |
| GPU Architecture | Ampere |
| Detailed Specifications | |
| --- | --- |
| PCIe x16 Interconnect Bandwidth | 64 GB/s (PCIe Gen4) |
| Max Memory Size | 80 GB |
| Max Memory Bandwidth | 1,935 GB/s |
| Peak FP64 | 9.7 TFLOPS |
| Peak FP64 Tensor Core | 19.5 TFLOPS |
| INT8 Tensor Core | 624 TOPS |
| TF32 Tensor Core | 156 TFLOPS |
| FP32 | 19.5 TFLOPS |
| Peak BFLOAT16 Tensor Core | 312 TFLOPS |
| Peak FP16 Tensor Core | 312 TFLOPS |
| Total NVLink Bandwidth | 600 GB/s (via NVLink Bridge, up to 2 GPUs) |
| Multi-Instance GPU (MIG) | Up to 7 instances at 10 GB each |
| Cooling | Passive |
| Dual Slot | Yes |
| Supplementary Power Connectors | 1x 8-pin CPU (EPS12V) |
| Max Graphics Card Power | 300 W |
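The 64 GB/s interconnect figure above is the bidirectional total for a PCIe Gen4 x16 link. As a sanity check, it can be derived from the Gen4 per-lane rate (16 GT/s with 128b/130b encoding); the sketch below is illustrative arithmetic, not a measurement:

```python
# Derive PCIe Gen4 x16 bandwidth from first principles (illustrative).
GT_PER_S = 16e9          # Gen4 raw transfer rate per lane: 16 GT/s
ENCODING = 128 / 130     # 128b/130b line encoding overhead
LANES = 16               # x16 link, as in the spec table

# Per-lane payload rate in bytes/s, then scale to the full link.
per_lane_bytes = GT_PER_S * ENCODING / 8
per_direction_gbps = per_lane_bytes * LANES / 1e9   # ~31.5 GB/s each way
bidirectional_gbps = per_direction_gbps * 2          # ~63 GB/s total

print(f"{per_direction_gbps:.1f} GB/s per direction")
print(f"{bidirectional_gbps:.1f} GB/s bidirectional")
```

The result (~63 GB/s) matches the marketing-rounded "64 GB/s" in the table, confirming the spec quotes the combined send-plus-receive figure rather than one direction.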