Artificial Intelligence has been a key talking point in the high-performance computing and computer science communities for decades now.
At first it was a thought experiment and a point of theoretical discussion, but in recent years it has become a practical area of focus for many scientists, researchers, and engineers, as well as businesses, universities, and government agencies. As the shift from theory to practice accelerated, so too did the excitement around this technology and the scope of its expected impact on our society.
Autonomous vehicles, just-in-time maintenance, and real-time image processing are just a few of the ways you can apply AI at the edge. These and many other use cases are considered realistic deployments of complex deep learning training and inference models.
Until recently, these technologies have relied on large datacenters and inefficient algorithms to build, train, and run these models. For example, a decade ago, IBM’s Watson became famous as one of the first modern AI systems. It required ninety 4U servers to generate 80 TFLOPS of performance. But today, you can beat that performance with just a handful of white-box GPU-accelerated servers.
This growth in system efficiency and performance has created an environment in the datacenter where machine learning and deep learning models can thrive. Simultaneously, the barrier to entry into AI workflows is being lowered, opening the doors to new use cases, algorithms, and benefits from AI.
Still, we haven’t seen the world-altering effects of AI outside of digital experiences like social media and search engines. What is keeping more tangible use cases like autonomous vehicles from wide-scale adoption? Challenges in edge computing.
Historically, datacenter environments have had the distinct advantage of having large amounts of equipment, power, cooling, physical space, data storage, and high-performance networking available to provide ever-increasing levels of performance.
Trying to create similar performance at the edge has been a key challenge because engineers lose the luxuries of large, power-hungry, and loud clusters. Edge devices have environmental, operational, and practical limitations that are not present in the controlled environment of a datacenter.
A well-trained and efficient AI model allows lower-power edge devices to handle inference tasks, but those devices cannot also train and improve their own models. Instead, workflows must rely on communication between edge devices and the datacenter.
Machine learning and deep learning rely on huge volumes of data that must be stored and processed. To use these technologies at the edge requires a tiered processing system. This allows data to be uploaded to the datacenter for processing, where algorithms can be further refined and downloaded to the edge device. That device, in turn, becomes better able to act or output in a desired fashion, without having to wait for the datacenter to execute a decision. This positive feedback loop is key to building effective and powerful AI applications that can perform on the edge.
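The tiered feedback loop described above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: every name here (Datacenter, EdgeDevice, infer, sync) is invented for the example, and the "retraining" step is a placeholder for a real training job over the accumulated data.

```python
class Datacenter:
    """Aggregates data uploaded from edge devices and refines the model."""
    def __init__(self):
        self.model_version = 1
        self.training_data = []

    def receive_upload(self, samples):
        self.training_data.extend(samples)

    def retrain(self):
        # Placeholder for a real training job over the accumulated data.
        if self.training_data:
            self.model_version += 1
        return self.model_version


class EdgeDevice:
    """Runs inference locally; buffers observations for later upload."""
    def __init__(self, datacenter):
        self.datacenter = datacenter
        self.model_version = datacenter.model_version
        self.buffer = []

    def infer(self, observation):
        # Local, low-latency decision -- no round trip to the datacenter.
        self.buffer.append(observation)
        return f"decision(v{self.model_version}, {observation})"

    def sync(self):
        # Periodic, non-time-critical exchange with the datacenter:
        # upload buffered data, then download the refined model.
        self.datacenter.receive_upload(self.buffer)
        self.buffer = []
        self.model_version = self.datacenter.retrain()


dc = Datacenter()
edge = EdgeDevice(dc)
for obs in ["frame1", "frame2", "frame3"]:
    edge.infer(obs)   # real-time inference at the edge
edge.sync()           # deferred upload and model refresh
```

The key design point is that `infer()` never waits on the network, while `sync()` runs on whatever schedule connectivity allows; the edge device keeps acting in real time even when the datacenter is unreachable.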
Multiple forces are converging to allow this model to operate practically.
Processing power for AI has long been a limiting factor. Now, however, thanks to GPU acceleration and major advances in CPU performance and efficiency, that has changed. The state of the art has advanced so far that traditional chip manufacturers are producing processors much faster and more efficient than chips from just a few years ago, making them well suited for edge applications. Some manufacturers are even building processors designed specifically for AI at the edge.
Because of these new capabilities, the overall cost of performance has come down substantially in recent years. This is critical for organizations looking to deploy AI at the edge, since the sheer number of potential edge devices that can run AI far exceeds the number of datacenters.
Another area where historical attempts at AI on the edge have fallen short is in the speed and reliability of communication between the datacenter and the edge. For instance, an autonomous vehicle is responsible for the safety of passengers, pedestrians, and other vehicles. It cannot depend on a slow or unreliable network to operate properly. Fortunately, new 5G wireless wide-area networks will support the workloads driving demand for AI and compute at the edge.
Just because something is possible does not mean it is practical. Organizations looking to implement AI at the edge often face an uphill battle. Without the proper expertise, developing optimized datacenter systems for machine learning training and designing the necessary edge equipment for inference can be unrealistic for most teams.
Ensuring you have a properly balanced, high-ROI cluster for your AI workload is difficult. But the problem becomes even more challenging when planning for the large amounts of data coming in from hundreds or thousands of edge devices.
Then, if you’re able to build that system efficiently, you still need to solve for a completely different technological problem: edge device design. Commercial, off-the-shelf hardware is rarely capable of providing the optimal solution for specific workload needs, let alone the additional environmental and form factor challenges edge clusters face.
For example, what COTS system can operate in extreme temperature or weather conditions? What about safety and security requirements, such as thermal limits or data encryption? If you find you need to design a custom device, how do you ensure it has everything you need while still meeting regulatory requirements?
These are important things to consider as you plan for an edge AI or inference project. Silicon Mechanics and our partner Comark have decades of combined experience designing datacenter and ruggedized edge solutions. Together, we’ve created a consideration guide on preparing for 5G edge computing that covers many of the key decisions organizations must make when deploying AI at the edge.
Get the white paper here.
Silicon Mechanics, Inc. is one of the world’s largest private providers of high-performance computing (HPC), artificial intelligence (AI), and enterprise storage solutions. Since 2001, Silicon Mechanics’ clients have relied on its custom-tailored open-source systems and professional services expertise to overcome the world’s most complex computing challenges. With thousands of clients across the aerospace and defense, education/research, financial services, government, life sciences/healthcare, and oil and gas sectors, Silicon Mechanics solutions always come with “Expert Included” SM.
Our engineers are not only experts in traditional HPC and AI technologies; we also routinely build complex rack-scale solutions with today's newest innovations, so we can design and build the best solution for your unique needs.
Talk to an engineer and see how we can help solve your computing challenges today.