JinjieNi / dlms-are-super-data-learners
The official GitHub repo for "Diffusion Language Models are Super Data Learners".
☆208 · Updated last month
Alternatives and similar repositories for dlms-are-super-data-learners
Users interested in dlms-are-super-data-learners are comparing it to the repositories listed below:
- Official PyTorch implementation and models for paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆113 · Updated last month
- Esoteric Language Models ☆108 · Updated 2 weeks ago
- TraceRL & TraDo-8B: Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models ☆347 · Updated this week
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆80 · Updated 6 months ago
- Geometric-Mean Policy Optimization ☆95 · Updated 3 weeks ago
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆57 · Updated 9 months ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆120 · Updated 6 months ago
- Defeating the Training-Inference Mismatch via FP16 ☆161 · Updated 3 weeks ago
- [ICLR2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆347 · Updated 6 months ago
- ☆105 · Updated 2 months ago
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025. ☆129 · Updated 2 months ago
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆31 · Updated 7 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆109 · Updated 6 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆181 · Updated 5 months ago
- [ICLR2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆99 · Updated 11 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆134 · Updated last month
- Remasking Discrete Diffusion Models with Inference-Time Scaling ☆59 · Updated 9 months ago
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 tr… ☆289 · Updated last month
- Physics of Language Models, Part 4 ☆265 · Updated this week
- ☆89 · Updated last year
- ☆257 · Updated 6 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year
- ☆100 · Updated 9 months ago
- Some preliminary explorations of Mamba's context scaling. ☆217 · Updated last year
- Easy and Efficient dLLM Fine-Tuning ☆139 · Updated last week
- ☆33 · Updated 11 months ago
- ☆76 · Updated 3 weeks ago
- ☆342 · Updated last month
- ☆62 · Updated 5 months ago
- P1: Mastering Physics Olympiads with Reinforcement Learning ☆67 · Updated 3 weeks ago