World’s fastest frontier AI reasoning model now available on Cerebras Inference Cloud
Delivers production-grade code generation at 30x the speed and 1/10th the cost of closed-source alternatives
PARIS–(BUSINESS WIRE)–Cerebras Systems today announced the launch of Qwen3-235B with full 131K context support on its inference cloud platform. This milestone represents a breakthrough in AI model performance, combining frontier-level intelligence with unprecedented speed at one-tenth the cost of closed-source models, fundamentally transforming enterprise AI deployment.
Frontier Intelligence on Cerebras
Alibaba’s Qwen3-235B delivers model intelligence that rivals frontier models such as Claude 4 Sonnet, Gemini 2.5 Flash, and DeepSeek R1 across a range of science, coding, and general knowledge benchmarks, according to independent tests by Artificial Analysis.
Qwen3-235B uses an efficient mixture-of-experts architecture that delivers exceptional compute efficiency, enabling Cerebras to offer the model at $0.60 per million input tokens and $1.20 per million output tokens—less than one-tenth the cost of comparable closed-source models.
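To put that pricing in concrete terms, the short sketch below estimates the cost of a single request at the listed rates; the token counts in the example are hypothetical.

    # Estimate the cost of one request at Cerebras' published Qwen3-235B rates.
    INPUT_PRICE = 0.60 / 1_000_000   # USD per input token
    OUTPUT_PRICE = 1.20 / 1_000_000  # USD per output token

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the USD cost of a single request."""
        return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

    # Hypothetical example: a 100K-token codebase prompt plus a 2K-token answer.
    print(f"${request_cost(100_000, 2_000):.4f}")  # -> $0.0624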
Cut Reasoning Time from Minutes to Seconds
Reasoning models are notoriously slow, often taking minutes to answer a simple question. By leveraging the Wafer Scale Engine, Cerebras accelerates Qwen3-235B to an unprecedented 1,500 tokens per second, cutting response times from 1–2 minutes to 0.6 seconds and making coding, reasoning, and deep-RAG workflows nearly instantaneous.
Based on Artificial Analysis measurements, Cerebras is the only company globally offering a frontier AI model capable of generating output at over 1,000 tokens per second, setting a new standard for real-time AI performance.
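For developers who want to verify these throughput numbers themselves, here is a minimal sketch that streams a completion and measures tokens per second. It assumes an OpenAI-compatible chat completions endpoint at https://api.cerebras.ai/v1 and the model identifier qwen-3-235b-a22b; confirm both against the Cerebras documentation before use.

    import os
    import time
    from openai import OpenAI  # pip install openai

    # Assumed OpenAI-compatible endpoint; check the Cerebras docs for the exact URL.
    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",
        api_key=os.environ["CEREBRAS_API_KEY"],
    )

    start = time.perf_counter()
    chunks = 0
    stream = client.chat.completions.create(
        model="qwen-3-235b-a22b",  # assumed model identifier
        messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            chunks += 1  # rough proxy: one streamed chunk is roughly one token
    elapsed = time.perf_counter() - start
    print(f"~{chunks / elapsed:.0f} tokens/s over {elapsed:.2f}s")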
131K Context Enables Production-grade Code Generation
Concurrent with this launch, Cerebras has quadrupled its context length support from 32K to 131K tokens—the maximum supported by Qwen3-235B. This expansion directly impacts the model’s ability to reason over large codebases and complex documents. While 32K context is sufficient for simple code generation use cases, 131K context allows the model to process dozens of files and tens of thousands of lines of code simultaneously, enabling production-grade application development.
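As a rough illustration of what the larger window buys, the sketch below greedily packs a repository’s Python files into a given token budget, using the common approximation of four characters per token (actual tokenizer counts will differ).

    from pathlib import Path

    CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizer counts vary

    def files_that_fit(repo: Path, budget_tokens: int) -> list[Path]:
        """Greedily pack source files into a context window of budget_tokens."""
        chosen, used = [], 0
        for path in sorted(repo.rglob("*.py")):
            cost = len(path.read_text(errors="ignore")) // CHARS_PER_TOKEN
            if used + cost > budget_tokens:
                break
            chosen.append(path)
            used += cost
        return chosen

    # A 131K window holds roughly four times the code of a 32K window.
    for budget in (32_000, 131_000):
        print(budget, len(files_that_fit(Path("."), budget)))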
This enhanced context length means Cerebras now directly addresses the enterprise code generation market, which is one of the largest and fastest-growing segments for generative AI.
Strategic Partnership with Cline
To showcase these new capabilities, Cerebras has partnered with Cline, the leading agentic coding assistant for Microsoft VS Code, with over 1.8 million installations. Cline users can now access Cerebras Qwen models directly within the editor—starting with Qwen3-32B at 64K context on the free tier. The rollout will expand to include Qwen3-235B with 131K context, delivering 10–20x faster code generation than alternatives like DeepSeek R1.
“With Cerebras’ inference, developers using Cline are getting a glimpse of the future, as Cline reasons through problems, reads codebases, and writes code in near real-time. Everything happens so fast that developers stay in flow, iterating at the speed of thought. This kind of fast inference isn’t just nice to have — it shows us what’s possible when AI truly keeps pace with developers,” said Saoud Rizwan, CEO of Cline.
Frontier Intelligence at 30x the Speed and 1/10th the Cost
With today’s launch, Cerebras has significantly expanded its inference offering, giving developers who want an open alternative to OpenAI and Anthropic comparable model intelligence and code generation capabilities. Moreover, Cerebras delivers something no other AI provider in the world—closed or open—can: reasoning output at over 1,500 tokens per second, increasing developer productivity by an order of magnitude versus GPU-based solutions. All of this comes at one-tenth the token cost of leading closed-source models.
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building a new class of AI supercomputer from the ground up. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered to build the largest AI supercomputers in the world, and they make placing models on those supercomputers simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit cerebras.ai or follow us on LinkedIn, X, and Threads.
Contacts
Media Contact
[email protected]