OpenRL-Lab / Ray_Tutorial
Tutorial for Ray
☆30 · Updated last year
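For orientation before the list below, here is a minimal sketch of Ray's core remote-task pattern, the kind of example a Ray tutorial typically starts from. It assumes a standard `pip install ray` environment and is an illustration, not code taken from Ray_Tutorial itself.

```python
# Minimal Ray remote-task sketch (illustrative; not from Ray_Tutorial).
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x: int) -> int:
    """Runs as an asynchronous task on a Ray worker process."""
    return x * x

# .remote() returns futures (ObjectRefs) immediately; tasks run in parallel.
futures = [square.remote(i) for i in range(4)]

# ray.get blocks until all results are ready.
print(ray.get(futures))  # [0, 1, 4, 9]
```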
Alternatives and similar repositories for Ray_Tutorial
Users interested in Ray_Tutorial are comparing it to the libraries listed below.
- A Mixture-of-Experts (MoE) implementation for PyTorch, [ATC'23] SmartMoE ☆71 · Updated 2 years ago
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems ☆222 · Updated this week
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆68 · Updated 2 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- A high-efficiency system for LLM-based search agents ☆74 · Updated 3 months ago
- 青稞Talk ☆156 · Updated this week
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆260 · Updated 8 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆142 · Updated 6 months ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and reproduction of DeepSeek R1-Zero. ☆73 · Updated 8 months ago
- ☆83 · Updated 2 months ago
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library ☆16 · Updated last month
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆92 · Updated 7 months ago
- ☆33 · Updated 7 months ago
- qwen-nsa ☆79 · Updated 2 weeks ago
- A paper list on efficient Mixture-of-Experts for LLMs ☆140 · Updated last month
- DeepSpeed tutorial, annotated examples, and study notes (efficient large-model training) ☆179 · Updated 2 years ago
- Training a LLaVA model with better Chinese support; the training code and data are open-sourced. ☆74 · Updated last year
- DeepSeek Native Sparse Attention PyTorch implementation ☆106 · Updated 2 weeks ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆93 · Updated last year
- A personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- HFAI deep learning models ☆153 · Updated 2 years ago
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆48 · Updated 2 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 11 months ago
- ☆16 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- ☆115 · Updated 11 months ago
- ☆71 · Updated 4 months ago
- ZO2 (Zeroth-Order Offloading): full-parameter fine-tuning of 175B LLMs with 18 GB GPU memory [COLM 2025] ☆190 · Updated 3 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year
- ICML 2025: Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning ☆48 · Updated 5 months ago