JinjieNi / dlms-are-super-data-learners
The official GitHub repo for "Diffusion Language Models are Super Data Learners".
☆111 · Updated last month
Alternatives and similar repositories for dlms-are-super-data-learners
Users interested in dlms-are-super-data-learners are comparing it to the repositories listed below.
- Official PyTorch implementation and models for paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆92 · Updated 3 weeks ago
- TraceRL - Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models ☆187 · Updated last week
- ☆85 · Updated last year
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆30 · Updated 4 months ago
- Remasking Discrete Diffusion Models with Inference-Time Scaling ☆43 · Updated 6 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆130 · Updated last week
- ☆85 · Updated 6 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆179 · Updated 3 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆115 · Updated 2 months ago
- Esoteric Language Models ☆99 · Updated last month
- [ICLR2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆91 · Updated 9 months ago
- ☆34 · Updated 8 months ago
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆68 · Updated 3 months ago
- ☆92 · Updated last week
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆34 · Updated 3 weeks ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆96 · Updated 3 months ago
- [ICLR2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆302 · Updated 3 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆102 · Updated 3 months ago
- ☆104 · Updated 11 months ago
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆56 · Updated 6 months ago
- Some preliminary explorations of Mamba's context scaling. ☆217 · Updated last year
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025. ☆105 · Updated last week
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆83 · Updated 10 months ago
- Stick-breaking attention ☆60 · Updated 2 months ago
- Official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) ☆37 · Updated last week
- ☆243 · Updated 3 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆115 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆229 · Updated 4 months ago
- ☆57 · Updated 2 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated 11 months ago