OpenRL-Lab / Ray_Tutorial
Tutorial for Ray
☆36 · Updated last year
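Since the repository is a tutorial for Ray, a minimal sketch of the core API it covers may help orient readers. The example below is illustrative only and not taken from the tutorial itself; it assumes `ray` is installed (`pip install ray`):

```python
# Minimal Ray sketch (illustrative, not from the tutorial):
# turn a plain function into a parallel task with @ray.remote.
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    # executed asynchronously on a Ray worker process
    return x * x

# .remote() returns ObjectRef futures; ray.get() blocks until results arrive
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # -> [0, 1, 4, 9]

ray.shutdown()
```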
Alternatives and similar repositories for Ray_Tutorial
Users interested in Ray_Tutorial are comparing it to the libraries listed below:
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆99 · Updated 5 months ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆71 · Updated 2 years ago
- ☆79 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆61 · Updated last year
- 青稞Talk ☆190 · Updated 2 weeks ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- Nano repo for RL training of LLMs ☆70 · Updated 3 months ago
- Tiny-FSDP, a minimalistic re-implementation of the PyTorch FSDP ☆93 · Updated 5 months ago
- ☆41 · Updated 11 months ago
- DeepSeek Native Sparse Attention PyTorch implementation ☆114 · Updated last month
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆49 · Updated 5 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- Pipeline-Parallel Lecture: Simplest DualPipe Implementation. ☆31 · Updated 4 months ago
- Implementation of FlashAttention in PyTorch ☆180 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆283 · Updated 11 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆340 · Updated 11 months ago
- ☆209 · Updated 3 months ago
- An industrial extension library for PyTorch to accelerate large-scale model training ☆58 · Updated 5 months ago
- Mixture-of-Experts (MoE) Language Model ☆195 · Updated last year
- ☆32 · Updated last year
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM2025] ☆199 · Updated 6 months ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆79 · Updated 11 months ago
- A beginner's tutorial on model compression ☆22 · Updated last year
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆289 · Updated 3 months ago
- A highly capable, lightweight 2.4B LLM trained on only 1T tokens of pre-training data, with all details disclosed. ☆223 · Updated 6 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆147 · Updated 9 months ago
- Efficient Mixture of Experts for LLM Paper List ☆166 · Updated 4 months ago
- Low-bit optimizers for PyTorch ☆138 · Updated 2 years ago