NVIDIA GTC

Presentations and More from GTC21

NVIDIA’s Annual GTC Conference is going virtual (and completely free to attend) this year. Running April 12–16, 2021, GTC21 highlights breakthroughs in AI, data center, accelerated computing, healthcare, intelligent networking, game development, and more.

Registration is completely free and opens soon.

Our Speaking Sessions

Join Silicon Mechanics as we take a deep dive into the leading-edge technologies changing the landscape of advanced computing today!

Design Considerations for Setting Up Composable Infrastructure as an Alternative to HPC/AI in the Cloud

Some applications aren’t well-suited for virtualized infrastructure; they require the performance of bare metal. That prevents some organizations from taking advantage of cloud computing, which relies on virtualization. Fortunately, composable infrastructure gives you bare-metal performance with cloud-like flexibility. That means system architects can still use GPUs, fast networking technology, and the other elements that typically come to mind when building a powerful on-premises architecture, while knowing the infrastructure can be re-provisioned for other workload types or outcomes. But you do need to keep some design considerations in mind.

This session will explore what those considerations are. We will dive deep into the performance and organizational impact of system design choices. Attendees will learn how composable infrastructure and GPUs interact, the impact on I/O and latency among other performance factors, and the suitability of composable designs for specific deployment types.

Harnessing the Aerospace/Defense Data Explosion With Well-Designed AI Infrastructure

Matt Ritter, Director of Engineering, Silicon Mechanics
Gary Keen, Engineer, Silicon Mechanics

The proliferation of sensors embedded in modern equipment, vehicles, and facilities has led to an explosion of data. This, in turn, has led organizations in all sectors of aerospace and defense to explore whether AI can work for them. But the impact of one major area of investment – the hardware platform that trains deep neural networks and then processes inference quickly and efficiently – on the success or failure of AI is often underestimated.

This technical deep dive explores key infrastructure considerations for meeting the huge computing, storage, and networking demands AI places on hardware and for extracting maximum value from data. We will discuss how innovative architectural choices can accelerate training and improve inference performance. We will also look at how new technologies can simplify deployment, management, and scaling of AI infrastructure, even up to supercomputer scale, to ensure long-term ROI. The session will also include a demonstration of workloads running on GPU-accelerated computing and storage software.

Design Considerations for Achieving Faster Time-To-Result in Drug Discovery

Matt Ritter, Director of Engineering, Silicon Mechanics
Curtis Elgin, Engineer, Silicon Mechanics

With broader access to advanced computing, the life sciences sector has spawned a plethora of new HPC analysis and simulation software, particularly in the big-data drug discovery space.

To make maximum use of these exciting applications, reduce time to insight (or discovery), and gain first-mover advantage, researchers require powerful and flexible computing and storage platforms. This technical deep dive will outline how design choices can be cost-efficient and still support and accelerate modern HPC applications, up to 10x faster for certain workloads. Today, systems can be tailored to process data faster, scale storage capacity, increase throughput, and remove storage I/O bottlenecks, regardless of the size of the datasets or the heterogeneous nature of the workload.

The session will also include a demonstration of workloads running on optimized, GPU-accelerated computing and software-defined storage (SDS). We will call out techniques that scale deployments for future expansion and for emerging technologies, such as private/hybrid cloud computing, that life sciences teams might want to leverage in the future.

Increasing Financial Services Intelligence with AI-First Infrastructure

Matt Ritter, Director of Engineering, Silicon Mechanics
Andrew O’Neill, Engineer, Silicon Mechanics

With changing customer expectations, new online-first service providers, and a challenging regulatory environment, the cost of doing business as usual can severely restrict financial services firms. Fortunately, AI can provide valuable insight into customer behavior patterns, reduce fraud, speed market prediction and decision-making, and even support compliance testing. Making maximum use of new AI algorithms and gaining first-mover advantage requires computing, storage, and networking optimized for AI, as well as for the compliance and business needs unique to the financial services sector.

This technical deep dive will outline how new technologies such as the NVIDIA HGX A100 can become the foundation of a highly optimized AI infrastructure. Along with storage and connectivity optimized for the large datasets required in training or inference, these platforms can dramatically reduce time to value. The session will include a demonstration of workloads running on AI-optimized, HGX-based infrastructure. Then we will look at how AI-first infrastructure can be extended into other valuable technology areas, such as the cloud.

Expert Included

Our engineers are not only experts in traditional HPC and AI technologies but also routinely build complex rack-scale solutions with today's newest innovations, so we can design and build the best solution for your unique needs.

Talk to an engineer and see how we can help solve your computing challenges today.