JinjieNi / MegaDLMs
GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 training.
☆296 · Updated last month
Alternatives and similar repositories for MegaDLMs
Users interested in MegaDLMs are comparing it to the repositories listed below.
- TraceRL & TraDo-8B: Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models ☆363 · Updated this week
- Easy and Efficient dLLM Fine-Tuning ☆168 · Updated this week
- The most open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints. ☆484 · Updated last month
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆212 · Updated last month
- ☆352 · Updated last month
- Official implementation for the paper "d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning" ☆384 · Updated 5 months ago
- [ICLR 2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆348 · Updated 6 months ago
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆340 · Updated last week
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆214 · Updated 2 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆736 · Updated 3 weeks ago
- Esoteric Language Models ☆109 · Updated 3 weeks ago
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example ☆385 · Updated 3 weeks ago
- QeRL enables RL for 32B LLMs on a single H100 GPU. ☆466 · Updated 3 weeks ago
- The official GitHub repo for the survey paper "A Survey on Diffusion Language Models". ☆575 · Updated this week
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆226 · Updated last month
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" ☆187 · Updated last month
- PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning ☆177 · Updated last week
- [NeurIPS 2025] Thinkless: LLM Learns When to Think ☆246 · Updated 2 months ago
- [NeurIPS 2025] Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation ☆525 · Updated 2 months ago
- Geometric-Mean Policy Optimization ☆95 · Updated last month
- Official PyTorch implementation for the ICLR 2025 paper "Scaling up Masked Diffusion Models on Text" ☆349 · Updated 11 months ago
- Official implementation of the NeurIPS 2025 paper "Soft Thinking: Unlocking the Reasoning Potential of LLMs in Continuous Concept Space" ☆286 · Updated last week
- Defeating the Training-Inference Mismatch via FP16 ☆163 · Updated last month
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆187 · Updated 5 months ago
- 📖 A repository organizing papers, code, and other resources related to Latent Reasoning. ☆312 · Updated last month
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models. ☆139 · Updated 5 months ago
- ☆107 · Updated 3 months ago
- ☆121 · Updated 3 weeks ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆373 · Updated last month
- Repo for the paper https://arxiv.org/abs/2504.13837 ☆288 · Updated 5 months ago
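Several repositories above (DiffuGPT/DiffuLLaMA, Fast-dLLM, the masked-diffusion scaling work) build on the absorbing-state masked-diffusion objective: sample a noise level `t`, independently replace each token with a `[MASK]` token with probability `t`, and train the model to reconstruct the originals at the masked positions. A minimal sketch of the corruption step, not taken from any of these codebases (`MASK_ID`, `corrupt`, and the toy token ids are illustrative):

```python
import random

MASK_ID = 0  # hypothetical id reserved for the absorbing [MASK] token


def corrupt(tokens, t, rng):
    """Forward process of an absorbing-state (masked) diffusion LM:
    each token is independently replaced by [MASK] with probability t."""
    return [MASK_ID if rng.random() < t else tok for tok in tokens]


# One toy training example: sample a noise level, corrupt the sequence,
# and record which positions the denoiser would be trained to predict.
rng = random.Random(0)
t = rng.random()                 # noise level in (0, 1)
x0 = [5, 9, 2, 7, 3, 8]          # toy token ids (none equal to MASK_ID)
xt = corrupt(x0, t, rng)
targets = [i for i, tok in enumerate(xt) if tok == MASK_ID]
```

At low `t` few positions are masked (an easy denoising step); at `t` near 1 nearly everything is masked, which is what lets such models generate many tokens in parallel at inference time.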