kuleshov-group / bd3lms
[ICLR 2025 Oral] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
☆884 · Updated 4 months ago
Alternatives and similar repositories for bd3lms
Users interested in bd3lms are comparing it to the repositories listed below.
- [NeurIPS 2025] MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,485 · Updated this week
- Dream 7B, a large diffusion language model ☆1,054 · Updated last month
- Official implementation for the paper "d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning" ☆352 · Updated 4 months ago
- [ICLR 2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆334 · Updated 5 months ago
- [ICLR 2025 Spotlight🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆576 · Updated 9 months ago
- [NeurIPS 2024] Simple and Effective Masked Diffusion Language Model ☆555 · Updated last month
- Official PyTorch implementation for the ICLR 2025 paper "Scaling up Masked Diffusion Models on Text" ☆334 · Updated 10 months ago
- The most open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints ☆442 · Updated this week
- Official GitHub repo for the survey paper "A Survey on Diffusion Language Models" ☆450 · Updated this week
- H-Net: Hierarchical Network with Dynamic Chunking ☆778 · Updated last month
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) ☆659 · Updated last year
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,259 · Updated last month
- Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" ☆1,073 · Updated last week
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆676 · Updated 3 weeks ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆779 · Updated 3 months ago
- HART: Efficient Visual Generation with Hybrid Autoregressive Transformer ☆640 · Updated last year
- Code for a 1D tokenizer and generator ☆1,073 · Updated 7 months ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆194 · Updated last month
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆973 · Updated 3 weeks ago
- [NeurIPS 2025] Official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ☆1,570 · Updated last week
- Scaling Diffusion Transformers with Mixture of Experts ☆400 · Updated last year
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) ☆652 · Updated last month
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆878 · Updated last year
- [ICLR 2025 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,412 · Updated 8 months ago
- A curated list of awesome discrete diffusion model resources ☆491 · Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆920 · Updated 7 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆843 · Updated last month
- Muon is Scalable for LLM Training ☆1,354 · Updated 3 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆550 · Updated last week
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,885 · Updated last year