THUDM / slime
slime is an LLM post-training framework aimed at scaling RL.
☆328 · Updated this week
Alternatives and similar repositories for slime
Users interested in slime are comparing it to the libraries listed below.
- VeOmni: Scaling any Modality Model Training to any Accelerators with PyTorch native Training Framework ☆353 · Updated last month
- ☆190 · Updated 2 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆303 · Updated last month
- A flexible and efficient training framework for large-scale alignment tasks ☆384 · Updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆219 · Updated 2 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆209 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆519 · Updated 3 weeks ago
- A Comprehensive Survey on Long Context Language Modeling ☆151 · Updated 2 weeks ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆240 · Updated 2 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training ☆213 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆295 · Updated 6 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…) ☆122 · Updated this week
- ☆254 · Updated last year
- ☆141 · Updated 3 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆277 · Updated 2 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆336 · Updated 8 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs