tokenbender / avataRL
rl from zero pretrain, can it be done? yes.
☆286 · Updated 4 months ago
Alternatives and similar repositories for avataRL
Users interested in avataRL are comparing it to the libraries listed below.
- Lightly-reviewed collection of community environments ☆210 · Updated last week
- ☆118 · Updated 2 weeks ago
- Exploring Applications of GRPO ☆251 · Updated 5 months ago
- Storing long contexts in tiny caches with self-study ☆233 · Updated 2 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆46 · Updated 10 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- ☆137 · Updated 10 months ago
- Open source interpretability artefacts for R1. ☆170 · Updated 9 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆88 · Updated 10 months ago
- Async RL Training at Scale ☆1,044 · Updated this week
- Train your own SOTA deductive reasoning model ☆107 · Updated 11 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆361 · Updated this week
- ☆394 · Updated last week
- An open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 11 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated 2 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆128 · Updated 3 months ago
- Build your own visual reasoning model ☆418 · Updated 3 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 11 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆345 · Updated last month
- Long context evaluation for large language models ☆226 · Updated 11 months ago
- MoE training for Me and You and maybe other people ☆335 · Updated last month
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆288 · Updated 2 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆140 · Updated 5 months ago
- ☆67 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆102 · Updated 6 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- ☆134 · Updated last year
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆65 · Updated 9 months ago
- Tina: Tiny Reasoning Models via LoRA ☆316 · Updated 4 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆74 · Updated 9 months ago