OpenRL-Lab / Ray_Tutorial
Tutorial for Ray
☆28Updated last year
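Since Ray_Tutorial is a tutorial for Ray, a minimal sketch of Ray's core task API is shown below for orientation. It is not taken from the tutorial itself, and the function name `square` is purely illustrative:

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    # a remote task: executed in a Ray worker process
    return x * x

# launch tasks in parallel, then block and collect the results
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```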
Alternatives and similar repositories for Ray_Tutorial
Users interested in Ray_Tutorial are comparing it to the libraries listed below
- A visualization tool for deeper understanding and easier debugging of RLHF training.☆241Updated 5 months ago
- A MoE impl for PyTorch, [ATC'23] SmartMoE☆66Updated 2 years ago
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems☆160Updated this week
- This is a repo for showcasing using MCTS with LLMs to solve gsm8k problems☆86Updated 4 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme☆138Updated 4 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning☆188Updated 4 months ago
- ☆154Updated 6 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…☆218Updated this week
- A High-Efficiency System of Large Language Model Based Search Agents☆71Updated last month
- ☆198Updated 3 months ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction.☆68Updated 5 months ago
- A Telegram bot to recommend arXiv papers☆281Updated 3 months ago
- DeepSpeed tutorial & annotated examples & study notes (efficient training of large models)☆174Updated last year
- ICML2025: Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning☆46Updated 3 months ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm…☆34Updated this week
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory☆167Updated 3 weeks ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation☆307Updated 3 months ago
- ☆72Updated this week
- Train with GRPO using zero dataset and low resources; 8-bit/4-bit/LoRA/QLoRA supported, multi-GPU supported ...☆75Updated 3 months ago
- DeepSeek Native Sparse Attention pytorch implementation☆86Updated this week
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library☆41Updated last week
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning☆147Updated 7 months ago
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library☆15Updated last week
- ☆30Updated 5 months ago
- "what, how, where, and how well? a survey on test-time scaling in large language models" repository☆57Updated this week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆136Updated last year
- A highly capable, lightweight 2.4B LLM trained on only 1T of pre-training data, with all details released.☆207Updated 2 weeks ago
- An automated pipeline for evaluating LLMs for role-playing.☆195Updated 10 months ago
- ☆113Updated 9 months ago
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced.☆64Updated 11 months ago