OpenRL-Lab / Ray_Tutorial
Tutorial for Ray
☆36 · Updated last year
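For context, a minimal sketch of the remote-task pattern that a Ray tutorial typically opens with; this is illustrative only, not code taken from the repo:

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    # executes as a task on a Ray worker process
    return x * x

# .remote() schedules tasks in parallel and returns futures (ObjectRefs)
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # -> [0, 1, 4, 9]
```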
Alternatives and similar repositories for Ray_Tutorial
Users interested in Ray_Tutorial are comparing it to the libraries listed below.
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆91 · Updated 3 months ago
- 青稞Talk ☆173 · Updated last week
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆70 · Updated 2 years ago
- ☆79 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆61 · Updated last year
- ☆86 · Updated 3 months ago
- Tiny-FSDP, a minimalistic re-implementation of PyTorch FSDP (see the usage sketch after this list) ☆91 · Updated 3 months ago
- Nano repo for RL training of LLMs ☆70 · Updated last month
- A beginner's tutorial on model compression ☆22 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆272 · Updated 9 months ago
- ☆208 · Updated last month
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆197 · Updated last week
- A PyTorch implementation of DeepSeek's Native Sparse Attention ☆109 · Updated last month
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- Manages the vllm-nccl dependency ☆17 · Updated last year
- An industrial PyTorch extension library to accelerate large-scale model training ☆54 · Updated 3 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆338 · Updated 9 months ago
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙‍♂️ ☆39 · Updated last year
- ZO2 (Zeroth-Order Offloading): Full-Parameter Fine-Tuning of 175B LLMs with 18GB GPU Memory [COLM 2025] ☆197 · Updated 4 months ago
- ☆16 · Updated last year
- RLHF experiments on a single A100 40GB GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆75 · Updated 9 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆146 · Updated 8 months ago
- ☆39 · Updated 9 months ago
- ☆90 · Updated 2 weeks ago
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆48 · Updated 3 months ago
- Implementation of FlashAttention in PyTorch ☆175 · Updated 11 months ago
- A toolkit for knowledge distillation of large language models (see the loss sketch after this list) ☆218 · Updated last month
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆104 · Updated 6 months ago
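For the Tiny-FSDP entry above, a minimal sketch of the standard PyTorch FSDP API that the repo re-implements; it assumes a `torchrun` launch (which sets the rank/world-size environment variables) and is illustrative rather than Tiny-FSDP's own interface:

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# assumes launch via torchrun, which sets RANK/WORLD_SIZE env vars
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Linear(1024, 1024).cuda()
# FSDP shards parameters, gradients, and optimizer state across ranks
model = FSDP(model)
# build the optimizer only after wrapping, so it sees the sharded parameters
optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).sum()
loss.backward()
optim.step()
```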
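And for the knowledge-distillation entry, a minimal sketch of the classic temperature-scaled distillation loss (Hinton et al.); this is generic PyTorch rather than the toolkit's actual API, and the function name and temperature value are illustrative:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Temperature-scaled KL divergence between student and teacher logits."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # scale by T^2 so gradient magnitudes stay comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

# toy usage: batch of 8 examples over a 100-token vocabulary
student = torch.randn(8, 100, requires_grad=True)
teacher = torch.randn(8, 100)
kd_loss(student, teacher).backward()
```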