volcengine / verl
veRL: Volcano Engine Reinforcement Learning for LLM
☆1,135 (updated this week)
Alternatives and similar repositories for verl:
Users interested in verl are comparing it to the libraries listed below.
- Scalable toolkit for efficient model alignment (☆693, updated this week)
- Large Reasoning Models (☆801, updated last month)
- ☆868 (updated this week)
- Recipes to scale inference-time compute of open models (☆975, updated last week)
- Scalable RL solution for advanced reasoning of language models (☆981, updated this week)
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, uses approximate and dynamic sparse attention calculation, which r… (☆890, updated last week)
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware (☆694, updated 4 months ago)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,183, updated 3 months ago)
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆803, updated 2 months ago)
- OLMoE: Open Mixture-of-Experts Language Models (☆536, updated last month)
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) (☆928, updated 3 weeks ago)
- FlashInfer: Kernel Library for LLM Serving (☆1,876, updated this week)
- A bibliography and survey of the papers surrounding o1 (☆1,076, updated 2 months ago)
- Ring attention implementation with flash attention (☆660, updated last month)
- O1 Replication Journey (☆1,910, updated 2 weeks ago)
- ☆2,341 (updated this week)
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models (☆1,497, updated last week)
- Efficient, Flexible and Portable Structured Generation (☆619, updated this week)
- ☆1,150 (updated 2 months ago)
- An Open Large Reasoning Model for Real-World Solutions (☆1,411, updated 2 months ago)
- Fast inference from large language models via speculative decoding (☆643, updated 5 months ago)
- Minimalistic large language model 3D-parallelism training (☆1,400, updated this week)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) (☆791, updated this week)
- Minimalistic 4D-parallelism distributed training framework for educational purposes (☆670, updated this week)
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) (☆548, updated last week)
- Serving multiple LoRA-finetuned LLMs as one (☆1,018, updated 8 months ago)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference (☆415, updated 3 weeks ago)
- Code for Quiet-STaR (☆706, updated 5 months ago)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (☆1,187, updated last year)