Groq, a startup developing chips to run generative AI models faster than conventional processors, said on Monday that it has raised $640 million in a new funding round led by BlackRock. Neuberger Berman, Type One Ventures, Cisco, KDDI and Samsung Catalyst Fund also participated.
The tranche, which brings Groq’s total raised to over $1 billion and values the company at $2.8 billion, is a major win for Groq, which reportedly set out to raise $300 million at a slightly lower ($2.5 billion) valuation. It also more than doubles the company’s previous valuation of roughly $1 billion, set in April 2021 when Groq raised $300 million in a round led by Tiger Global Management and D1 Capital Partners.
Groq also announced Monday that Meta chief AI scientist Yann LeCun will serve as a technical advisor and that Stuart Pann, the former head of Intel’s foundry business and ex-CIO at HP, will join the startup as chief operating officer. LeCun’s appointment is a bit unexpected, given Meta’s investments in its own AI chips — but it undoubtedly gives Groq a powerful ally in a cutthroat space.
Groq, which emerged from stealth in 2016, is creating what it calls an LPU (language processing unit) inference engine. The company claims its LPUs can run existing generative AI models similar in architecture to OpenAI’s ChatGPT and GPT-4o at 10x the speed and one-tenth the energy of conventional hardware.
Groq CEO Jonathan Ross’ claim to fame is helping to invent the tensor processing unit (TPU), Google’s custom AI accelerator chip used to train and run models. Ross teamed up with Douglas Wightman, an entrepreneur and former engineer at Google parent company Alphabet’s X moonshot lab, to co-found Groq close to a decade ago.
Groq provides an LPU-powered developer platform called GroqCloud that offers “open” models like Meta’s Llama 3.1 family, Google’s Gemma, OpenAI’s Whisper and Mistral’s Mixtral, as well as an API that allows customers to use its chips in cloud instances. (Groq also hosts a playground for AI-powered chatbots, GroqChat, that it launched late last year.) As of July, GroqCloud had more than 356,000 developers; Groq says that a portion of the proceeds from the round will be used to scale capacity and add new models and features.
“Many of these developers are at large enterprises,” Stuart Pann, Groq’s COO, told TechCrunch. “By our estimates, over 75% of the Fortune 100 are represented.”
As the generative AI boom continues, Groq faces increasing competition from both rival AI chip upstarts and Nvidia, the formidable incumbent in the AI hardware sector.
Nvidia controls an estimated 70% to 95% of the market for AI chips used to train and deploy generative AI models, and the firm is taking aggressive steps to maintain its dominance.
Nvidia has committed to releasing a new AI chip architecture every year, rather than every other year as was the case historically. And it’s reportedly establishing a new business unit focused on designing bespoke chips for cloud computing firms and others, including AI hardware.
Beyond Nvidia, Groq competes with Amazon, Google and Microsoft, all of which offer — or will soon offer — custom chips for AI workloads in the cloud. Amazon has its Trainium, Inferentia and Graviton processors, available through AWS; Google Cloud customers can use the aforementioned TPUs and, in time, Google’s Axion chip; and Microsoft recently launched Azure instances in preview for its Cobalt 100 CPU, with Maia 100 AI Accelerator instances to come in the next several months.
Groq could consider Arm, Intel, AMD and a growing number of startups as rivals, too, in an AI chip market that could reach $400 billion in annual sales in the next five years, according to some analysts. Arm and AMD in particular have blossoming AI chip businesses, thanks to soaring capital spending by cloud vendors to meet the capacity demand for generative AI.
D-Matrix late last year raised $110 million to commercialize what it’s characterizing as a first-of-its-kind inference compute platform. In June, Etched emerged from stealth with $120 million for a processor custom-built to speed up the dominant generative AI model architecture today, the transformer. SoftBank’s Masayoshi Son is reportedly looking to raise $100 billion for a chip venture to compete with Nvidia. And OpenAI is said to be in talks with investment firms to launch an AI chip-making initiative.
To carve out its niche, Groq is investing heavily in enterprise and government outreach.
In March, Groq acquired Definitive Intelligence, a Palo Alto-based firm offering a range of business-oriented AI solutions, to form a new business unit called Groq Systems. Within Groq Systems’ purview is serving organizations, including U.S. government agencies and sovereign nations, that wish to add Groq’s chips to existing data centers or build new data centers using Groq processors.
More recently, Groq partnered with Carahsoft, a government IT contractor, to sell its solutions to public sector clients through Carahsoft’s reseller partners, and the startup has a letter of intent to install tens of thousands of its LPUs at European firm Earth Wind & Power’s Norway data center.
Groq is also collaborating with Saudi Arabian consulting firm Aramco Digital to install LPUs in future data centers in the Middle East.
At the same time it’s establishing customer relationships, Mountain View, California-based Groq is marching toward the next generation of its chip. Last August, the company announced that it would contract with Samsung’s foundry business to manufacture 4nm LPUs, which are expected to deliver performance and efficiency gains over Groq’s first-gen 13nm chips.
Groq says it plans to deploy more than 108,000 LPUs by the end of Q1 2025.