tokenbender / avataRL
RL from zero pretrain: can it be done? Yes.
☆257 · Updated this week
Alternatives and similar repositories for avataRL
Users who are interested in avataRL are comparing it to the repositories listed below.
- Exploring Applications of GRPO ☆246 · Updated last month
- Decentralized RL Training at Scale ☆472 · Updated this week
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 5 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 7 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆289 · Updated this week
- ☆98 · Updated 2 weeks ago
- ☆130 · Updated 5 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 5 months ago
- Open source interpretability artefacts for R1. ☆158 · Updated 4 months ago
- Build your own visual reasoning model ☆405 · Updated this week
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆245 · Updated last week
- NanoGPT-speedrunning for the poor T4 enjoyers ☆69 · Updated 4 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆73 · Updated 5 months ago
- Train your own SOTA deductive reasoning model ☆104 · Updated 5 months ago
- Long context evaluation for large language models ☆220 · Updated 5 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆122 · Updated 3 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆120 · Updated last week
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆591 · Updated this week
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆520 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the sketch after this list) ☆344 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆95 · Updated last month
- ☆395 · Updated last week
- Storing long contexts in tiny caches with self-study ☆140 · Updated this week
- Simple Transformer in Jax ☆139 · Updated last year
- Simple repository for training small reasoning models ☆37 · Updated 6 months ago
- Tina: Tiny Reasoning Models via LoRA ☆275 · Updated last week
- code for training & evaluating Contextual Document Embedding models ☆197 · Updated 3 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 9 months ago
- An extension of the nanoGPT repository for training small MOE models. ☆178 · Updated 5 months ago
- ☆118 · Updated 8 months ago
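For the memory-layers entry above, here is a minimal PyTorch sketch of the idea of a trainable key-value lookup layer. It is an illustration only, not code from that repository: the class name, slot count, and top-k value are assumptions. Note also that the naive scoring below touches every slot; scaled designs of this kind factorize the keys (product keys) so the lookup itself stays cheap as the parameter count grows.

```python
# Minimal sketch of a trainable key-value memory layer (illustrative only;
# the repository above may implement this differently). Each token scores
# the keys, keeps the top-k slots, and returns a softmax-weighted sum of
# the corresponding values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):  # hypothetical name, not from the repo
    def __init__(self, d_model: int, num_slots: int = 4096, top_k: int = 8):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                           # (B, S, D)
        scores = q @ self.keys.T                         # (B, S, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)          # (B, S, k)
        selected = self.values[top_idx]                  # (B, S, k, D)
        # Only top_k value rows per token enter the output, so most of the
        # extra parameters (self.values) sit untouched on any given token.
        # Caveat: this naive scoring is still O(num_slots) per token;
        # product-key designs make the lookup itself sublinear.
        return (weights.unsqueeze(-1) * selected).sum(dim=-2)

# Usage: a drop-in stand-in for an FFN block, e.g.
# layer = MemoryLayer(d_model=512)
# y = layer(torch.randn(2, 16, 512))   # -> shape (2, 16, 512)
```

The design point the entry's description hints at: growing `num_slots` adds capacity (more trainable parameters) while per-token compute is dominated by the fixed top-k gather, which is what lets such layers scale parameters without a matching FLOP increase.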