Laz4rz / RLLinks
☆16 · Updated 5 months ago
Alternatives and similar repositories for RL
Users interested in RL are comparing it to the repositories listed below.
- rl from zero pretrain, can it be done? we'll see. ☆63 · Updated 3 weeks ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆68 · Updated 2 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 4 months ago
- An introduction to LLM Sampling ☆79 · Updated 7 months ago
- Following master Karpathy with GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish ☆173 · Updated 11 months ago
- ☆46 · Updated 3 months ago
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆63 · Updated last month
- Hub for researchers exploring VLMs and Multimodal Learning :) ☆41 · Updated this week
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆64 · Updated 8 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy** ☆42 · Updated 2 months ago
- ☆64 · Updated last month
- look how they massacred my boy ☆63 · Updated 9 months ago
- Simple Transformer in Jax ☆138 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆81 · Updated 2 months ago
- ☆90 · Updated last week
- ☆93 · Updated 9 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 8 months ago
- ☆69 · Updated this week
- Exploring Applications of GRPO ☆243 · Updated last week
- ⚖️ Awesome LLM Judges ⚖️ ☆107 · Updated 2 months ago
- a tiny vectorstore implementation built with numpy ☆62 · Updated last year
- Fine-tune Gemma 3 on an object detection task ☆72 · Updated this week
- lossily compress representation vectors using product quantization ☆57 · Updated 2 months ago
- ☆43 · Updated last month
- ☆44 · Updated 3 weeks ago
- aesthetic tensor visualiser ☆24 · Updated 2 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 4 months ago
- Lego for GRPO ☆28 · Updated last month
- working implementation of DeepSeek MLA ☆42 · Updated 6 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models ☆91 · Updated last month