Tencent / WeDLM
WeDLM: The fastest diffusion language model with standard causal attention and native KV cache compatibility, delivering real speedups over vLLM-optimized baselines.
☆597 · Updated 3 weeks ago
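To make the claim about standard causal attention and KV cache reuse concrete, below is a minimal PyTorch sketch of the general pattern such models rely on: the prompt's keys and values are cached once, and every diffusion-style refinement pass over a block of draft tokens re-attends to that same cache under a causal mask. The tensor names, shapes, and the single-layer setup are illustrative assumptions, not WeDLM's actual code or API.

```python
import torch

# Illustrative sketch only (assumed shapes/names, not WeDLM's implementation):
# with standard causal attention, the prompt's keys/values are cached once and
# reused by every diffusion-style refinement pass over a block of draft tokens.

torch.manual_seed(0)
d, prefix_len, block_len = 16, 10, 4

# Pretend these came from running one attention layer over the prompt once.
k_cache = torch.randn(prefix_len, d)
v_cache = torch.randn(prefix_len, d)

def attend_block(q, k_new, v_new):
    # Keys/values = frozen prefix cache + the current draft block.
    k = torch.cat([k_cache, k_new], dim=0)
    v = torch.cat([v_cache, v_new], dim=0)
    # Causal mask: block position i sees the whole prefix plus block[: i + 1].
    mask = torch.ones(block_len, prefix_len + block_len, dtype=torch.bool)
    mask = mask.tril(diagonal=prefix_len)
    scores = (q @ k.T) / d ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v

# Diffusion-style decoding: the block is refined over several passes, but the
# prefix cache is never recomputed, which is what makes it KV-cache friendly.
block = torch.randn(block_len, d)
for _ in range(3):
    block = attend_block(block, block, block)
print(block.shape)  # torch.Size([4, 16])
```

Because the mask stays strictly causal, the attention itself is identical to ordinary autoregressive decoding; the only difference is that several block positions are scored in parallel per pass, which is why existing KV-cache machinery carries over unchanged.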
Alternatives and similar repositories for WeDLM
Users interested in WeDLM are comparing it to the libraries listed below.
- ☆1,283 · Updated 2 months ago
- Research code artifacts for Code World Model (CWM) including inference tools, reproducibility, and documentation. ☆833 · Updated last month
- ToolOrchestra is an end-to-end RL training framework for orchestrating tools and agentic workflows. ☆642 · Updated last week
- Code for R-Zero: Self-Evolving Reasoning LLM from Zero Data (https://www.arxiv.org/pdf/2508.05004) ☆755 · Updated this week
- Official implementation of "Continuous Autoregressive Language Models" ☆726 · Updated 2 months ago
- dLLM: Simple Diffusion Language Modeling ☆1,716 · Updated this week
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆625 · Updated last week
- ☆867 · Updated 4 months ago
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆255 · Updated 2 months ago
- ☆724 · Updated 2 months ago
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆558 · Updated 3 months ago
- A Scientific Multimodal Foundation Model ☆706 · Updated this week
- Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B ☆569 · Updated 2 months ago
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 tr… ☆321 · Updated 2 months ago
- Open-source release accompanying Gao et al. 2025 ☆501 · Updated last month
- QeRL enables RL for 32B LLMs on a single H100 GPU. ☆481 · Updated 2 months ago
- Official JAX implementation of End-to-End Test-Time Training for Long Context ☆511 · Updated last week
- Training teachers with reinforcement learning to make LLMs learn how to reason for test-time scaling. ☆358 · Updated 7 months ago
- Fast, Sharp & Reliable Agentic Intelligence ☆492 · Updated last week
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆410 · Updated last month
- Block Diffusion for Ultra-Fast Speculative Decoding ☆459 · Updated this week
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆287 · Updated 3 months ago
- Moonshot's most powerful model ☆795 · Updated last week
- Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation (NeurIPS 2025) ☆541 · Updated 4 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆468 · Updated 8 months ago
- Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model … ☆590 · Updated 3 weeks ago
- Official repository for DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research ☆542 · Updated last week
- OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language. ☆631 · Updated 3 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆140 · Updated 5 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated 2 months ago