Red-Hat-AI-Innovation-Team / mini_trainer
fast trainer for educational purposes
☆18 · Updated last week
Alternatives and similar repositories for mini_trainer
Users interested in mini_trainer are comparing it to the repositories listed below.
- ☆83 · Updated 8 months ago
- ☆34 · Updated 2 years ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆175 · Updated 4 months ago
- The HELMET Benchmark ☆177 · Updated 2 months ago
- Understand and test language model architectures on synthetic tasks. ☆233 · Updated 3 weeks ago
- ☆53 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆230 · Updated last month
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆207 · Updated last week
- Can Language Models Solve Olympiad Programming? ☆118 · Updated 9 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆218 · Updated 4 months ago
- ☆101 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆170 · Updated 3 months ago
- ☆53 · Updated last year
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆77 · Updated 8 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated 10 months ago
- ☆186 · Updated last year
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆63 · Updated 5 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆213 · Updated last month
- ☆112 · Updated this week
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated 11 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated last year
- ☆83 · Updated 2 years ago
- ☆128 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆24 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆131 · Updated 10 months ago
- A library for efficient patching and automatic circuit discovery. ☆77 · Updated 2 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Updated 2 years ago