
Groq is an AI hardware company.
Groq has raised $1.8B across 5 funding rounds.
Key people at Groq.
Groq was founded in 2016 by Jonathan Ross (Co-Founder).
Groq designs and builds specialized hardware and software for high-performance artificial intelligence inference. Its flagship product, the Language Processing Unit (LPU), is an application-specific integrated circuit engineered to accelerate demanding AI workloads, particularly large language models. The LPU's deterministic architecture tightly integrates memory and compute units, enabling exceptionally fast and efficient processing for diverse AI applications.
Jonathan Ross and Douglas Wightman co-founded Groq in 2016. Ross, a key architect of Google's Tensor Processing Unit (TPU), recognized the need for dedicated silicon to efficiently run advanced AI models. This insight, paired with Wightman's entrepreneurial experience, led them to establish Groq to optimize AI inference at the hardware level.
Groq targets developers and enterprises seeking superior speed and cost-efficiency for their AI inference needs. Its LPU technology powers the GroqCloud platform, offering rapid deployment and scalable performance for AI models. The company aims to be a premier AI inference infrastructure provider, democratizing access to high-performance AI processing globally.
Groq is an AI hardware company specializing in Language Processing Units (LPUs), custom chips designed exclusively for fast, low-cost AI inference. It builds inference engines optimized for real-time AI workloads such as large language models (LLMs), chatbots, natural language processing, and predictive analytics, serving developers, enterprises, and public sector users who need higher throughput, lower latency, and better energy efficiency than traditional GPUs provide.[1][3][4][5][7] By focusing solely on inference rather than training, Groq addresses key bottlenecks in AI deployment, such as compute density, memory bandwidth, and unpredictable performance, enabling scalable, affordable intelligence for applications in autonomous vehicles, robotics, and generative AI.[1][3][6] The company has shown strong growth, including a $2.8 billion valuation, global data center deployments, the GroqCloud platform for easy model access, and partnerships such as one with Samsung for 4nm chip production.[4][5][7]
Groq was founded in 2016 by former Google engineers, pioneering the first chip purpose-built for AI inference through a software-first, software-defined hardware approach.[2][5][8] The idea emerged from the limitations of traditional CPUs and GPUs for machine learning workloads, leading to innovations such as the Tensor Streaming Processor (TSP), later rebranded as the LPU amid the post-ChatGPT LLM boom.[1][7] Early traction included developing the LPU architecture on a 14nm process at a density of over 1 TeraOp/s per square millimeter, acquiring Maxeler Technologies in 2022 for its dataflow technology, and selecting Samsung's Texas foundry in 2023 for next-generation 4nm chips.[7] Headquartered in Mountain View, CA, with offices across North America and Europe, Groq has hosted open-source LLMs publicly and expanded via GroqCloud.[5][7]
Groq rides the explosive growth of AI inference demand fueled by LLMs and generative AI, where deployment speed and cost have become the critical bottlenecks now that training is no longer the only constraint.[1][4] Its timing aligns with surging real-world AI adoption, for example in chatbots, edge computing, and public sector data processing, amid GPU shortages and high energy costs, positioning LPUs as a specialized alternative.[3][5][6] Market forces such as the push for U.S. domestic manufacturing and hyperscaler demand favor Groq's efficient, scalable stack; by democratizing fast inference through cloud access and open models, the company accelerates AI from labs to production.[2][7] Recent developments, including a December 2025 Nvidia licensing deal valued at $20 billion, underscore the strategic value of its technology while allowing it to operate independently.[7]
Groq's inference-first LPU positions it to dominate as AI shifts toward ubiquitous, real-time deployment, with LPU v2 on a 4nm process promising even higher density, and with global expansion continuing via data centers and partnerships.[5][7] Trends such as edge AI, multimodal models, and energy-constrained computing will amplify its advantages, and its influence may deepen through integrations in government, enterprise, and developer tools, especially after the Nvidia deal, which validates and funds scaling without a full acquisition.[6][7] As the pioneer keeping "intelligence fast and affordable," Groq fuels the AI economy's next phase, preserving human agency through accessible, efficient compute.[1][5]
Groq's investors include Alexander Davis, 1789 Capital, Altimeter, BlackRock, Cisco, D1 Capital Partners, DTCP, Infinitum Partners, Neuberger Berman, Samsung, Samir Menon, Alumni Ventures.
Groq has raised $1.8B across 5 funding rounds, most recently a $750.0M Other Equity round in September 2025.