The Five Key Principles for Building a Cloud Platform for AI-native Applications
From the Art of the Possible to the Science of Achieving Outcomes. In the year since the release of NVIDIA's revolutionary H100 Tensor Core GPU, enterprises have accelerated their discovery of new applications of LLMs and AI models to transform research and development, operations, and their employee and customer experiences. Over the next 12 months, enterprises need to move beyond discovery and the Art of the Possible to building a strategy and actionable roadmap that delivers on the Science of Achieving Outcomes. This next phase of the AI revolution requires an evolution of cloud infrastructure into a new, modern platform for AI-native applications. In this talk, learn the five key principles for reimagining cloud infrastructure and building a new global platform for enterprise AI success.