d3LLM: Ultra-Fast Diffusion LLM 🚀
☆93 · Feb 4, 2026 · Updated last month
Alternatives and similar repositories for d3LLM
Users interested in d3LLM are comparing it to the libraries listed below.
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆91 · Feb 23, 2026 · Updated last week
- LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding ☆34 · Jan 16, 2026 · Updated last month
- ☆55 · Jun 4, 2025 · Updated 9 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Feb 9, 2026 · Updated 3 weeks ago
- ☆20 · Jun 9, 2025 · Updated 8 months ago
- Learnable Semi-structured Sparsity for Vision Transformers and Diffusion Transformers ☆14 · Feb 7, 2025 · Updated last year
- Source code for paper "Empirical Analysis of Decoding Biases in Masked Diffusion Models" ☆37 · Jan 11, 2026 · Updated last month
- ☆44 · Updated this week
- Official implementation of "Diffusion Language Models Know the Answer Before Decoding" ☆47 · Sep 8, 2025 · Updated 5 months ago
- Official PyTorch implementation of the paper "Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Princ… ☆40 · Jul 18, 2025 · Updated 7 months ago
- The official implementation for the intra-stage fusion technique introduced in https://arxiv.org/abs/2409.13221 ☆31 · Apr 22, 2025 · Updated 10 months ago
- DFlash: Block Diffusion for Flash Speculative Decoding ☆593 · Feb 18, 2026 · Updated 2 weeks ago
- ☆32 · Oct 4, 2025 · Updated 5 months ago
- Dynamic resource changes for multi-dimensional parallelism training ☆30 · Aug 22, 2025 · Updated 6 months ago
- Stable-DiffCoder is a family of lightweight open-source code DLLMs (diffusion large language models) comprising base and instruct models, … ☆75 · Jan 23, 2026 · Updated last month
- [Interspeech 2024] LiteFocus is a tool designed to accelerate diffusion-based TTA models, now implemented with the base model AudioLDM2. ☆34 · Mar 11, 2025 · Updated 11 months ago
- ☆51 · Aug 22, 2025 · Updated 6 months ago
- ☆23 · Sep 26, 2025 · Updated 5 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆59 · Oct 27, 2025 · Updated 4 months ago
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆67 · Oct 31, 2025 · Updated 4 months ago
- Holistic Evaluation of Multimodal LLMs on Spatial Intelligence ☆87 · Feb 25, 2026 · Updated last week
- [Archived] For the latest updates and community contribution, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Jan 16, 2026 · Updated last month
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆130 · May 22, 2025 · Updated 9 months ago
- [ICLR 2026] Official code for TraceRL: Revolutionizing post-training for Diffusion LLMs, powering the SOTA TraDo series. ☆435 · Jan 28, 2026 · Updated last month
- Data recipes and robust infrastructure for training AI agents ☆104 · Updated this week
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow ☆160 · Dec 1, 2025 · Updated 3 months ago
- [ICLR 2026] ParallelBench: Understanding the Tradeoffs of Parallel Decoding in Diffusion LLMs ☆42 · Updated this week
- ☆10 · Jun 24, 2020 · Updated 5 years ago
- Paper reading and discussion notes, covering AI frameworks, distributed systems, cluster management, etc. ☆55 · Nov 11, 2025 · Updated 3 months ago
- Defeating the Training-Inference Mismatch via FP16 ☆183 · Nov 14, 2025 · Updated 3 months ago
- Inferix: A Block-Diffusion based Next-Generation Inference Engine for World Simulation ☆110 · Feb 26, 2026 · Updated last week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆163 · Feb 11, 2026 · Updated 3 weeks ago
- GitHub mirror of the triton-lang/triton repo. ☆146 · Updated this week
- ☆10 · Apr 12, 2025 · Updated 10 months ago
- The official repo for "CodeScaler: Scaling Code LLM Training and Test-Time Inference via Execution-Free Reward Models" ☆29 · Feb 23, 2026 · Updated last week
- [ICLR 2026] Learning to Parallel: Accelerating Diffusion Large Language Models via Learnable Parallel Decoding ☆30 · Jan 27, 2026 · Updated last month
- CONFSEC's ComputeNode component of the OpenPCC standard ☆17 · Dec 15, 2025 · Updated 2 months ago
- ☆48 · Aug 6, 2024 · Updated last year
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆675 · Updated this week