OpenRL-Lab / Ray_Tutorial
Tutorial for Ray
☆36 · Updated last year
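For context, the repository is a tutorial for Ray's distributed computing API. A minimal sketch of the remote-task pattern Ray is built around (illustrative only, not taken from the tutorial itself):

```python
# Minimal Ray remote-task sketch (illustrative; not from the tutorial itself).
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x: int) -> int:
    # Executes as a Ray task, potentially on another worker process.
    return x * x

# Launch four tasks in parallel, then block on the gathered results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

ray.shutdown()
```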
Alternatives and similar repositories for Ray_Tutorial
Users interested in Ray_Tutorial are comparing it to the libraries listed below.
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆81 · Updated 2 months ago
- 青稞Talk ☆161 · Updated last week
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆71 · Updated 2 years ago
- ☆35 · Updated 8 months ago
- ☆79 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆60 · Updated last year
- Tiny-FSDP, a minimalistic re-implementation of the PyTorch FSDP ☆90 · Updated 3 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆265 · Updated 9 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- ☆205 · Updated 3 weeks ago
- Nano repo for RL training of LLMs ☆68 · Updated 3 weeks ago
- ☆86 · Updated 3 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- Efficient Mixture of Experts for LLM Paper List ☆144 · Updated last month
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems ☆226 · Updated this week
- qwen-nsa ☆83 · Updated last month
- MiroRL is an MCP-first reinforcement learning framework for deep research agents. ☆172 · Updated 2 months ago
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙♂️ ☆39 · Updated last year
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆194 · Updated last month
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM 2025] ☆196 · Updated 4 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆333 · Updated 8 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024. ☆58 · Updated last year
- A toolkit for knowledge distillation of large language models ☆200 · Updated 2 weeks ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆74 · Updated 9 months ago
- An industrial extension library for PyTorch to accelerate large-scale model training ☆51 · Updated 3 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆128 · Updated 4 months ago
- A beginner's tutorial on model compression ☆22 · Updated last year
- DeepSeek Native Sparse Attention PyTorch implementation ☆108 · Updated last week
- ☆121 · Updated 3 months ago