Tencent / WeDLM
WeDLM: The fastest diffusion language model with standard causal attention and native KV cache compatibility, delivering real speedups over vLLM-optimized baselines.
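The tagline's claim rests on KV-cache reuse under causal attention: each decoded step appends one key/value pair to a cache instead of recomputing attention over the whole prefix. Below is a minimal illustrative sketch of that idea in NumPy — not WeDLM's actual code; all names and the identity projections are hypothetical simplifications.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, K, V):
    # q: (d,), K/V: (t, d) — scaled dot-product attention over cached steps
    scores = K @ q / np.sqrt(q.shape[-1])
    return softmax(scores) @ V

def decode_with_kv_cache(xs):
    # xs: (T, d) per-step vectors (identity q/k/v projections for brevity).
    # Each step appends its key/value to the cache rather than recomputing
    # attention over the full prefix from scratch.
    K, V, outs = [], [], []
    for x in xs:
        K.append(x)
        V.append(x)
        outs.append(attend(x, np.stack(K), np.stack(V)))
    return np.stack(outs)

def full_causal_attention(xs):
    # Reference: recompute masked attention over the whole sequence at once
    T, d = xs.shape
    scores = xs @ xs.T / np.sqrt(d)
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf
    return softmax(scores, axis=-1) @ xs
```

Because the attention mask is strictly causal, the incremental cached decode produces the same outputs as the full recomputation — which is why a model with standard causal attention can reuse existing KV-cache infrastructure unchanged.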
616 stars · Feb 9, 2026 · Updated 3 weeks ago

Alternatives and similar repositories for WeDLM

Users interested in WeDLM are comparing it to the libraries listed below.

