NVlabs / QeRL
QeRL enables RL for 32B LLMs on a single H100 GPU.
☆416 · Updated 3 weeks ago
Alternatives and similar repositories for QeRL
Users interested in QeRL are comparing it to the libraries listed below.
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆203 · Updated this week
- Geometric-Mean Policy Optimization ☆89 · Updated 3 weeks ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆130 · Updated 2 months ago
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆128 · Updated last week
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆222 · Updated this week
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated this week
- ☆19 · Updated 8 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆124 · Updated 4 months ago
- Work in progress. ☆74 · Updated 4 months ago
- ☆120 · Updated last month
- Esoteric Language Models ☆104 · Updated last month
- Training teacher models with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆347 · Updated 4 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 7 months ago
- ☆85 · Updated 7 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆102 · Updated 3 weeks ago
- RLP: Reinforcement as a Pretraining Objective ☆198 · Updated last month
- ☆281 · Updated 3 weeks ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 8 months ago
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆29 · Updated 3 weeks ago
- ☆100 · Updated last month
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆72 · Updated 2 weeks ago
- ☆87 · Updated 5 months ago
- LIMI: Less is More for Agency ☆147 · Updated 3 weeks ago
- The official repo for “Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem” [EMNLP25] ☆33 · Updated 2 months ago
- ☆60 · Updated 4 months ago
- [EMNLP'2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆66 · Updated 6 months ago
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. ☆217 · Updated last month
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆229 · Updated this week
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆284 · Updated this week
- Landing repository for the paper "Predicting the Order of Upcoming Tokens Improves Language Modeling" ☆40 · Updated last month