shaochenze / calm
Official implementation of "Continuous Autoregressive Language Models"
☆673 · Updated 2 weeks ago
Alternatives and similar repositories for calm
Users interested in calm are comparing it to the repositories listed below.
- Code for R-Zero: Self-Evolving Reasoning LLM from Zero Data (https://www.arxiv.org/pdf/2508.05004) ☆699 · Updated last week
- dLLM: Simple Diffusion Language Modeling ☆1,397 · Updated last week
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 tr… ☆301 · Updated last month
- Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation (NeurIPS 2025) ☆525 · Updated 2 months ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆354 · Updated 5 months ago
- ☆1,233 · Updated last month
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆242 · Updated last month
- ToolOrchestra is an end-to-end RL training framework for orchestrating tools and agentic workflows. ☆379 · Updated last week
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆226 · Updated last month
- ☆352 · Updated last month
- Latent Collaboration in Multi-Agent Systems ☆615 · Updated this week
- A Reproduction of GDM's Nested Learning Paper ☆463 · Updated 2 weeks ago
- A Scientific Multimodal Foundation Model ☆618 · Updated 2 months ago
- The code repository of the paper: Competition and Attraction Improve Model Fusion ☆167 · Updated 3 months ago
- [NeurIPS 2025] Thinkless: LLM Learns When to Think ☆246 · Updated 2 months ago
- Tina: Tiny Reasoning Models via LoRA ☆310 · Updated 2 months ago
- OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language. ☆603 · Updated last month
- QeRL enables RL for 32B LLMs on a single H100 GPU. ☆467 · Updated 3 weeks ago
- Research code artifacts for Code World Model (CWM) including inference tools, reproducibility, and documentation. ☆769 · Updated 2 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2,048k tokens. ☆276 · Updated last month
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆361 · Updated last week
- Simple & Scalable Pretraining for Neural Architecture Research ☆305 · Updated 2 weeks ago
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆524 · Updated last month
- This is the official Python version of Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play. ☆104 · Updated 2 months ago
- Open-source release accompanying Gao et al. 2025 ☆450 · Updated last week
- ☆437 · Updated 3 weeks ago
- All information and news on the Falcon-H1 series ☆93 · Updated 2 months ago
- Code and implementations for the paper "AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcemen… ☆529 · Updated 3 months ago
- Easy and Efficient dLLM Fine-Tuning ☆168 · Updated this week
- Ring-V2 is a reasoning MoE LLM open-sourced by InclusionAI. ☆83 · Updated last month