Technical Trends in AI Infrastructure for Developers

This talk discusses some of the trends we have observed over time in the infrastructure space for AI applications and workloads.

As most of you are aware, AI has been evolving at a rapid pace, with adoption moving from research and niche use cases to the majority of mainstream consumer, enterprise, telco, cloud, and government use cases. This broadening of AI use cases has shaped the direction of infrastructure across the spectrum of silicon, software, systems, and solutions. Multiple factors affect the long-term view of how we should think about infrastructure. The key factors are: 1) applications (image classification, object segmentation, NLP, recommendation, speech), 2) neural network model sizes, 3) power, and 4) cooling. These factors have led to a broad range of accelerators addressing this space, from GPUs and FPGAs to custom ASICs. Since the use cases within AI are broad, infrastructure built on these compute devices can vary by deployment model, i.e., far-edge, near-edge, on-prem datacenter, and cloud. Beyond accelerators, it is equally important to address the needs of storage and networking, since at the end of the day an accelerator is just a fast calculator.

To address some of these factors, we will cover the evolving infrastructure landscape and what a future deployment of these AI solutions could look like.

Bhavesh Patel
Dell Technologies