lmgame-org / GRL
Multi-Turn RL Training System with AgentTrainer for Language Model Game Reinforcement Learning
⭐51 · Updated 2 weeks ago
Alternatives and similar repositories for GRL
Users interested in GRL are comparing it to the libraries listed below.
- DPO, but faster · ⭐46 · Updated 11 months ago
- Defeating the Training-Inference Mismatch via FP16 · ⭐154 · Updated last week
- ⭐55 · Updated 5 months ago
- ⭐103 · Updated 2 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ⭐40 · Updated last month
- ⭐95 · Updated 8 months ago
- Memory optimized Mixture of Experts · ⭐69 · Updated 3 months ago
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… · ⭐99 · Updated 2 weeks ago
- Kinetics: Rethinking Test-Time Scaling Laws · ⭐82 · Updated 4 months ago
- The evaluation framework for training-free sparse attention in LLMs · ⭐104 · Updated last month
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ⭐51 · Updated 4 months ago
- Using FlexAttention to compute attention with different masking patterns · ⭐47 · Updated last year (see the FlexAttention masking sketch after this list)
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support · ⭐179 · Updated this week
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ⭐206 · Updated 5 months ago (see the Newton-Schulz sketch after this list)
- Linear Attention Sequence Parallelism (LASP) · ⭐87 · Updated last year
- ⭐254 · Updated 5 months ago
- ⭐132 · Updated 5 months ago
- ⭐143 · Updated last week
- The official implementation of the paper SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction · ⭐50 · Updated last year
- Implementation of FP8/INT8 rollout for RL training without performance drop · ⭐275 · Updated 2 weeks ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ⭐130 · Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ⭐162 · Updated 7 months ago
- Tiny-FSDP, a minimalistic re-implementation of the PyTorch FSDP · ⭐90 · Updated 3 months ago
- ⭐66 · Updated 4 months ago
- ⭐109 · Updated last year
- [NeurIPS-2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 · ⭐89 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel
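
For the FlexAttention entry above: the general technique is PyTorch's `torch.nn.attention.flex_attention` API (PyTorch 2.5+), where a small `mask_mod` function decides which query/key pairs may attend, and different masking patterns are just different `mask_mod`s. The snippet below is a minimal generic sketch of that idea, not code from the linked repository; the tensor shapes, window size, and device handling are illustrative assumptions.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

# Two illustrative mask_mods: plain causal and sliding-window causal.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

WINDOW = 128  # assumed window size for illustration
def sliding_window_causal(b, h, q_idx, kv_idx):
    return (q_idx >= kv_idx) & (q_idx - kv_idx <= WINDOW)

B, H, S, D = 1, 8, 1024, 64
# GPU is the safer path; CPU support for flex_attention arrived in later releases.
device = "cuda" if torch.cuda.is_available() else "cpu"
q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))

# create_block_mask precomputes which tiles are fully masked so the kernel can skip them.
block_mask = create_block_mask(sliding_window_causal, B=None, H=None,
                               Q_LEN=S, KV_LEN=S, device=device)
out = flex_attention(q, k, v, block_mask=block_mask)
```

Swapping the masking pattern only requires passing a different `mask_mod` to `create_block_mask`; the block mask is what keeps sparse patterns cheap at runtime.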
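
For the Flash-Muon entry: the Muon optimizer's core step approximately orthogonalizes the matrix-shaped momentum with a few Newton-Schulz iterations, and Flash-Muon is about doing that step efficiently. As a point of reference only, here is a plain-PyTorch sketch of the orthogonalization using the widely published quintic coefficients; it is not the repository's optimized kernel, and the function name, step count, and example shapes are illustrative.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    """Approximately orthogonalize a 2D momentum matrix (reference sketch).

    Uses the quintic Newton-Schulz iteration with the coefficients from the
    public Muon reference implementation; not an optimized kernel.
    """
    assert G.ndim == 2
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)            # scale so the iteration converges
    transposed = X.size(0) > X.size(1)  # iterate on the wide orientation
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    return X.T if transposed else X

# Example: orthogonalize a random "momentum" matrix; columns end up roughly orthonormal.
M = torch.randn(512, 128)
O = newton_schulz_orthogonalize(M)
print((O.T @ O).diag().mean())
```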