ISEEKYAN / verl_megatron_practice
Best (or at least better) practices for Megatron on veRL, plus a tuning guide
☆89 · Updated 2 weeks ago
Alternatives and similar repositories for verl_megatron_practice
Users interested in verl_megatron_practice are comparing it to the libraries listed below.
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning · ☆129 · Updated this week
- Async pipelined version of Verl · ☆117 · Updated 6 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop · ☆253 · Updated 2 weeks ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation · ☆319 · Updated 5 months ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… · ☆68 · Updated last month
- Repository of the LV-Eval benchmark · ☆70 · Updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆215 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length · ☆118 · Updated 6 months ago
- Training library for Megatron-based models · ☆116 · Updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs · ☆249 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM · ☆151 · Updated this week
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems · ☆218 · Updated last week
- Best practices for training DeepSeek, Mixtral, Qwen, and other MoE models using Megatron Core · ☆108 · Updated this week
- Estimate MFU for DeepSeek-V3 · ☆25 · Updated 9 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) · ☆110 · Updated 6 months ago
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** · ☆202 · Updated 8 months ago
- Allow torch tensor memory to be released and resumed later · ☆147 · Updated this week
- ☆47 · Updated last month
- A flexible and efficient training framework for large-scale alignment tasks · ☆428 · Updated this week
- ☆23 · Updated last month
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆240 · Updated 2 months ago
- Reproducing R1 for Code with Reliable Rewards · ☆259 · Updated 5 months ago
- Toolchain built around Megatron-LM for distributed training · ☆67 · Updated 2 weeks ago
- ☆43 · Updated last year
- A simple calculation for LLM MFU (see the sketch after this list) · ☆48 · Updated last month
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ☆230 · Updated last month
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection · ☆51 · Updated 11 months ago
- ☆43 · Updated 4 months ago
- ☆118 · Updated 4 months ago
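
Two of the entries above (the DeepSeek-V3 MFU estimator and the simple LLM MFU calculator) revolve around Model FLOPs Utilization. As a rough illustration of what such a calculation involves, here is a minimal sketch using the common ~6·N FLOPs-per-token approximation for dense decoder training; the function name, parameters, and example numbers are illustrative assumptions, not code taken from either repository.

```python
def estimate_mfu(
    n_params: float,            # trainable parameters, e.g. 7e9 for a 7B dense model
    tokens_per_second: float,   # observed end-to-end training throughput
    num_gpus: int,
    peak_flops_per_gpu: float,  # hardware peak, e.g. ~989e12 for H100 BF16 (dense)
) -> float:
    """Rough Model FLOPs Utilization using the ~6*N FLOPs/token rule of thumb
    (forward + backward for a dense decoder; attention FLOPs ignored)."""
    achieved_flops_per_s = 6.0 * n_params * tokens_per_second
    theoretical_peak = num_gpus * peak_flops_per_gpu
    return achieved_flops_per_s / theoretical_peak


# Hypothetical example: a 7B model training at 100k tokens/s on 8 GPUs
print(f"MFU ~= {estimate_mfu(7e9, 1e5, 8, 989e12):.2%}")
```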