Unprecedented growth in Large Language Models has created demand for massive AI workloads running on clusters of tens of thousands of GPUs. An efficient, scalable network is therefore at the heart of AI infrastructure, allowing these GPUs to interoperate as a unified computing system. But the networking requirements of AI workloads differ significantly from those of standard cloud computing infrastructure. In this presentation, we explain the unique requirements of AI networking and show how an efficient Scheduled Fabric running SONiC can meet these requirements and drive performance for AI infrastructure.