Sarvam AI released two large language models on March 6, 2026: Sarvam 30B and Sarvam 105B, positioning them as India's first sovereign LLMs competitive with international models. The Bengaluru-based startup, backed by the IndiaAI Mission with $41M in funding, trained the models using 4,096 NVIDIA H100 GPUs with support for 10 Indian languages.
Models Feature Mixture-of-Experts Architecture and Government Certification
The 30B model uses a mixture-of-experts (MoE) architecture, and both models carry ISO and SOC 2 Type II certifications. Sarvam AI (legal name: Axonwise Private Limited) positioned the release as competing with China's DeepSeek and Europe's Mistral in the sovereign AI space.
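For readers unfamiliar with the term, a mixture-of-experts layer routes each token through only a small subset of "expert" sub-networks, so a large parameter count costs far less compute per token than a dense model. Below is a toy sketch of top-k expert routing in NumPy; the expert count, top-k value, dimensions, and weights are all illustrative and have no relation to Sarvam's actual configuration, which has not been published in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only: 8 experts, top-2 routing, hidden size 16.
NUM_EXPERTS, TOP_K, D = 8, 2, 16

# Each "expert" here is just a single linear map; real experts are MLPs.
expert_weights = rng.standard_normal((NUM_EXPERTS, D, D)) * 0.02
router_weights = rng.standard_normal((D, NUM_EXPERTS)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and gate-mix their outputs."""
    logits = x @ router_weights                     # (tokens, experts)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of the chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = topk[t]
        gates = np.exp(logits[t, chosen])
        gates /= gates.sum()                        # softmax over chosen experts only
        for gate, e in zip(gates, chosen):
            out[t] += gate * (x[t] @ expert_weights[e])
    return out

tokens = rng.standard_normal((4, D))
y = moe_layer(tokens)
print(y.shape)  # → (4, 16)
```

The key point the sketch illustrates: each token touches only TOP_K of NUM_EXPERTS experts, which is why a 30B-parameter MoE model activates far fewer parameters per token than a dense 30B model.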
The models are available for download from sarvam.ai under open-source licensing, making them accessible to researchers and enterprises working on Indian language applications.
Community Reception Highlights Performance and Transparency Concerns
Community testing revealed mixed results on the models' capabilities. One comparison on JEE Advanced mathematics problems showed Sarvam AI correctly answering 3 out of 4 questions compared to GPT's 4 out of 4. However, technical evaluations on Hacker News raised concerns about current performance levels.
One researcher noted the 30B MoE "is not currently good" and described it as "competitive with state of the art 2 years ago," citing hallucinations and a lack of tool-calling abilities. Another commenter observed that "Qwen models at tenth or so of params" outperform it on certain benchmarks.
Transparency issues emerged around technical claims. Users flagged vague statements about "architecture-aware fused kernels" without supporting code or papers, with one noting the announcement "seems AI written" with excessive buzzwords lacking context.
Alignment and Political Concerns Raise Questions About Model Independence
Multiple commenters identified problematic system prompts directing the model to reject terminology around human rights violations, raising questions about governmental influence over model alignment. These concerns are significant given the model's backing by the IndiaAI Mission, a government initiative.
One positive signal emerged: a commenter noted the model does not appear to have been fine-tuned on OpenAI or Anthropic outputs, suggesting "a novel model being built" rather than derivative work based on existing commercial models.
India now has five major homegrown LLM initiatives: Sarvam, Gnani, BharatGen, Fractal, and Tech Mahindra, all focusing on Indian languages, enterprise use cases, and lower-cost AI infrastructure.
Key Takeaways
- Sarvam AI released 30B and 105B parameter models on March 6, 2026, trained on 4,096 NVIDIA H100 GPUs with support for 10 Indian languages
- The company has raised $41M in funding backed by the IndiaAI Mission and holds ISO and SOC 2 Type II certifications
- Community testing shows performance comparable to state-of-the-art models from two years ago, with issues including hallucinations and a lack of tool-calling abilities
- System prompts contain directives to reject human rights violation terminology, raising concerns about governmental influence on model alignment
- India now has five major homegrown LLM initiatives focusing on Indian languages and enterprise applications