deepseek-ai / DeepSeek-Prover-V2
☆1,220 · Updated 4 months ago
Alternatives and similar repositories for DeepSeek-Prover-V2
Users interested in DeepSeek-Prover-V2 are comparing it to the repositories listed below.
- Technical report of Kimina-Prover Preview. ☆346 · Updated 4 months ago
- MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. ☆2,998 · Updated 4 months ago
- An AI agent system for solving International Mathematical Olympiad (IMO) problems using Google's Gemini, OpenAI, and xAI APIs. ☆867 · Updated last month
- MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining ☆1,633 · Updated 5 months ago
- Muon is Scalable for LLM Training ☆1,365 · Updated 3 months ago
- Unleashing the Power of Reinforcement Learning for Math and Code Reasoners ☆731 · Updated 5 months ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention. ☆3,250 · Updated 4 months ago
- Dream 7B, a large diffusion language model ☆1,085 · Updated last week
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,157 · Updated 3 months ago
- OpenAI Frontier Evals ☆948 · Updated last month
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,051 · Updated 4 months ago
- Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… ☆2,966 · Updated last month
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,006 · Updated 7 months ago
- Democratizing Reinforcement Learning for LLMs ☆4,770 · Updated last week
- Pretraining and inference code for a large-scale depth-recurrent language model ☆849 · Updated last month
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct, and reasoning models, developed by ByteDance Seed. ☆683 · Updated 5 months ago
- Hypernetworks that adapt LLMs for specific benchmark tasks using only a textual task description as input ☆919 · Updated 5 months ago