dllm-reasoning / d1
Official Implementation for the paper "d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning"
☆55 · Updated last week
Alternatives and similar repositories for d1:
Users interested in d1 are comparing it to the libraries listed below.
- [ICLR2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆154 · Updated last month
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆144 · Updated last month
- Official PyTorch implementation for ICLR2025 paper "Scaling up Masked Diffusion Models on Text" ☆158 · Updated 4 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆164 · Updated 3 months ago
- Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆127 · Updated 2 months ago
- ☆77 · Updated 8 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆91 · Updated 3 weeks ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆214 · Updated last week
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆47 · Updated 2 months ago
- A brief and partial summary of RLHF algorithms. ☆127 · Updated last month
- A general framework for inference-time scaling and steering of diffusion models with arbitrary rewards. ☆127 · Updated 2 months ago
- Automatically collects diffusion NLP papers from arXiv. More paper information can be found in the companion repository "Diffusion-LM-Papers". ☆112 · Updated this week
- AnchorAttention: Improved attention for long-context LLM training ☆206 · Updated 3 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆51 · Updated 5 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆61 · Updated this week
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆190 · Updated last month
- ☆69 · Updated last month
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆82 · Updated 6 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆103 · Updated 7 months ago
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆47 · Updated last month
- ☆89 · Updated 6 months ago
- The official implementation of Self-Exploring Language Models (SELM) ☆63 · Updated 10 months ago
- Normalized Transformer (nGPT) ☆168 · Updated 5 months ago
- Code for the paper: "Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods" ☆19 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆152 · Updated last week
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆35 · Updated 3 months ago
- Official repo of paper LM2 ☆37 · Updated 2 months ago
- SIFT: Grounding LLM Reasoning in Contexts via Stickers ☆56 · Updated last month
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆151 · Updated 5 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆143 · Updated last month