What if the power of a data center could sit on your desk? That’s the idea behind NVIDIA DGX™ Spark—a compact, high-performance AI system that is small enough to travel but built to handle serious AI workloads.
Powered by the NVIDIA GB10 Grace Blackwell Superchip, DGX Spark delivers enterprise-class compute in a portable package. With the ability to fine-tune and run 200-billion-parameter models locally, it's ideal for developers, researchers, and innovators working at the edge of AI.
- Harness the power of NVIDIA's Grace Blackwell architecture for next-gen AI applications.
- Supports foundation models with up to 200 billion parameters, right on your desk.
- Cluster two units for scale-out workloads reaching 405 billion parameters (see the connectivity sketch after this list).
- Quiet, efficient, and compact (150 × 150 × 50.5 mm; ~1.2 kg; ~170 W).
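When two units are paired, the first practical question is whether they can see each other. As a minimal sketch of that check using PyTorch's standard distributed API (the file name is hypothetical, and this assumes both systems run the bundled PyTorch build and can reach each other over the ConnectX-7 link):

```python
# allreduce_check.py -- hypothetical file name; a minimal two-unit
# connectivity sketch, not an official DGX Spark procedure.
import torch
import torch.distributed as dist


def main():
    # torchrun supplies RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(0)  # one GB10 GPU per unit

    # Each unit contributes (rank + 1); the all-reduce sums across both.
    rank = dist.get_rank()
    t = torch.tensor([float(rank + 1)], device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: all-reduce sum = {t.item()}")  # expect 3.0 with two units

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with `torchrun --nnodes=2 --nproc_per_node=1 --node_rank=0 --master_addr=<unit-1 address> --master_port=29500 allreduce_check.py` on the first unit (and `--node_rank=1` on the second), a printed sum of 3.0 on both confirms the two systems can communicate.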
DGX Spark ships with NVIDIA DGX OS and a complete AI software stack, including PyTorch, RAPIDS, NGC containers, and optimized libraries. It’s ready for immediate use in generative AI, model fine-tuning, inference, and data science workloads.
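Because the stack comes preinstalled, a first smoke test can be as simple as the sketch below: confirm PyTorch sees the GPU, then run a small matmul on it. The matrix size is arbitrary, chosen only to briefly exercise the hardware, and is not taken from NVIDIA's documentation.

```python
# Quick sanity check: verify the preinstalled PyTorch build sees the GB10
# GPU and can execute a small bf16 matrix multiply on it.
import torch

assert torch.cuda.is_available(), "no CUDA device visible"
print("GPU:", torch.cuda.get_device_name(0))

a = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
c = a @ b  # dispatched to the Blackwell GPU
torch.cuda.synchronize()  # wait for the kernel so any errors surface here
print("matmul OK:", tuple(c.shape))
```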
| Specification | Value |
|---|---|
| Architecture | NVIDIA Grace Blackwell |
| GPU | Blackwell architecture |
| CPU | 20-core Arm (10 Cortex-X925 + 10 Cortex-A725) |
| CUDA Cores | Blackwell generation |
| Tensor Cores | 5th generation |
| RT Cores | 4th generation |
| Tensor Performance | 1,000 AI TOPS |
| System Memory | 128 GB LPDDR5x, unified system memory |
| Memory Interface | 256-bit |
| Memory Bandwidth | 273 GB/s |
| OS | NVIDIA DGX OS |
| Storage | 4 TB NVMe M.2 with self-encryption |
| USB | 4x USB Type-C |
| Ethernet | 1x RJ-45 connector (10 GbE) |
| NIC | ConnectX-7 Smart NIC |
| Wi-Fi | Wi-Fi 7 |
| Bluetooth | BT 5.3 |
| Audio Output | HDMI® multichannel audio output |
| Power Consumption | TBD |
| Display Connectors | 1x HDMI® 2.1a |
| NVENC | 1x |
| NVDEC | 1x |
| System Dimensions | 150 mm L × 150 mm W × 50.5 mm H |
| System Weight | 1.2 kg |
| Warranty | 1-year limited warranty |
Source: pny.com
We don’t just deliver systems—we help determine if DGX Spark is the right fit for your goals. Whether you’re experimenting with generative AI on-prem, building a secure local inference pipeline, or looking to scale development with portable systems, we align our recommendations with your specific needs.
Our expert support team offers proactive service throughout your deployment. From integration and system tuning to lifecycle management, Silicon Mechanics helps keep your AI infrastructure running smoothly—so you can focus on results, not troubleshooting.