kuleshov-group / bd3lms
Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
☆819 · Updated 2 months ago
Alternatives and similar repositories for bd3lms
Users interested in bd3lms are comparing it to the repositories listed below.
- Dream 7B, a large diffusion language model (☆970, updated 3 weeks ago)
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (☆1,361, updated last week)
- [ICLR 2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models (☆302, updated 3 months ago)
- Official implementation of the paper "d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning" (☆308, updated 2 months ago)
- Official PyTorch implementation for the ICLR 2025 paper "Scaling up Masked Diffusion Models on Text" (☆293, updated 8 months ago)
- [ICLR 2025 Spotlight🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters (☆572, updated 7 months ago)
- [NeurIPS 2024] Simple and Effective Masked Diffusion Language Model (☆497, updated 3 months ago)
- Official implementation of Flow-GRPO: Training Flow Matching Models via Online RL (☆1,273, updated this week)
- H-Net: Hierarchical Network with Dynamic Chunking (☆713, updated last month)
- Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" (☆1,009, updated 5 months ago)
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) (☆633, updated last year)
- Official GitHub repo for the survey paper "A Survey on Diffusion Language Models" (☆245, updated last week)
- SEED-Voken: A Series of Powerful Visual Tokenizers (☆936, updated 2 months ago)
- HART: Efficient Visual Generation with Hybrid Autoregressive Transformer (☆631, updated 11 months ago)
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" (☆453, updated last week)
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper (☆744, updated last month)
- The most open diffusion language model for code generation, releasing pretraining, evaluation, inference, and checkpoints (☆214, updated last week)
- Code for a 1D tokenizer and generator (☆1,027, updated 5 months ago)
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI (☆1,203, updated 2 months ago)
- Long-RL: Scaling RL to Long Sequences (☆605, updated last week)
- Scaling Diffusion Transformers with Mixture of Experts (☆377, updated last year)
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference (☆142, updated last week)
- Muon, an optimizer for the hidden layers of neural networks (☆1,710, updated 2 months ago)
- Pretraining and inference code for a large-scale depth-recurrent language model (☆827, updated last week)
- [ICLR 2025 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think (☆1,306, updated 6 months ago)
- [ICML 2024 Oral] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (☆851, updated 11 months ago)
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training (☆508, updated this week)
- Muon is Scalable for LLM Training (☆1,311, updated last month)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (☆856, updated 5 months ago)
- Implementation of a single layer of MMDiT, proposed in Stable Diffusion 3, in PyTorch (☆437, updated 8 months ago)