Enfabrica, a company specializing in networking chips optimized for AI and machine learning workloads, has raised $125 million in a Series B funding round. The round valued the company at five times its post-Series A valuation, according to Rochan Sankar, Enfabrica's co-founder and CEO. It was led by Atreides Management, with participation from investors including Sutter Hill Ventures, Nvidia, IAG Capital Partners, Liberty Global Ventures, Valor Equity Partners, Infinitum Partners, and Alumni Ventures. The round brings Enfabrica's total capital raised to $148 million, which the company will put toward research and development, expanded operations, and growth of its engineering, sales, and marketing teams.
Sankar emphasized the significance of Enfabrica’s achievement in securing substantial funding, especially in the challenging funding landscape for chip startups and deep tech ventures. He noted that the demand for networking technologies has surged alongside the proliferation of generative AI and large language models, making Enfabrica’s solutions vital in meeting these industry needs.
Enfabrica was founded in 2020 by Rochan Sankar, formerly a director of engineering at Broadcom, and Shrijeet Mukherjee, who previously led networking platforms and architecture work at Google. The two recognized the AI industry's growing demand for infrastructure capable of supporting parallel, accelerated, and heterogeneous computing, particularly workloads built around GPUs, and saw that the central challenge was scaling AI infrastructure efficiently in terms of both cost and sustainability.
With Sankar as CEO and Mukherjee as chief development officer, Enfabrica assembled a team of engineers from companies such as Cisco, Meta, and Intel to design networking chips that could meet the I/O and memory-movement requirements of parallel workloads, including AI.
Sankar pointed out that conventional networking chips, such as switches, often struggle to handle the data movement demands of modern AI workloads. Large AI models like Meta’s Llama 2 and GPT-4 require massive datasets during training, and network switches can become bottlenecks in these scenarios.
Enfabrica's answer is hardware built for parallelizability: the Accelerated Compute Fabric Switch (ACF-S). The ACF-S delivers multi-terabit-per-second data movement between GPUs, CPUs, AI accelerator chips, memory, and networking devices using standards-based interfaces. It can scale to tens of thousands of nodes and, the company says, reduce GPU compute requirements for large language models like Llama 2 by approximately 50% while maintaining performance.
Sankar also highlighted the benefits of ACF-S for companies engaged in inferencing, as it optimizes hardware utilization by rapidly moving large volumes of data. Moreover, ACF-S is compatible with various AI processors and models, enabling flexibility and avoiding vendor lock-in.
While Enfabrica has secured substantial funding, it faces competition from other networking chip startups venturing into the AI space. For instance, Cisco introduced hardware solutions to support AI networking workloads, while incumbents like Broadcom and Marvell offer high-bandwidth switches.
Enfabrica is well-positioned to capitalize on the attention and investment pouring into AI infrastructure. The Dell'Oro Group projects that AI infrastructure investment will push data center capital expenditures past $500 billion by 2027, while IDC expects AI-tailored hardware to grow at a compound annual rate of 20.5% over the next five years.
Enfabrica, headquartered in Mountain View, currently employs just over 100 individuals across North America, Europe, and India.