Laz4rz / RLLinks
☆16 · Updated last year
Alternatives and similar repositories for RL
Users who are interested in RL are comparing it to the repositories listed below.
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated last week
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- ☆46 · Updated 10 months ago
- An introduction to LLM Sampling ☆79 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆85 · Updated 5 months ago
- MLX port for xjdr's entropix sampler (mimics the JAX implementation) ☆61 · Updated last year
- ☆68 · Updated 8 months ago
- Low-memory full-parameter finetuning of LLMs ☆53 · Updated 6 months ago
- Tensor-Slayer: Manipulate weights and tensors of LLMs to achieve performance upgrades and introduce a novel inferenceless mechanistic in… ☆27 · Updated 8 months ago
- Simple Transformer in JAX ☆142 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆101 · Updated 6 months ago
- Lossily compress representation vectors using product quantization ☆59 · Updated 3 months ago
- Quick Notebook Tutorials ☆36 · Updated 6 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 11 months ago
- Coloring terminal text with intensities (used for plotting per-token probability and entropy) ☆12 · Updated last year
- This repository contains a simple Llama 3 implementation in pure JAX. ☆71 · Updated 11 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆46 · Updated 10 months ago
- Lego for GRPO ☆30 · Updated 8 months ago
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 10 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆81 · Updated 8 months ago
- look how they massacred my boy ☆63 · Updated last year
- RL gym for vision-language models written in JAX ☆141 · Updated 3 months ago
- SmolLM with the Entropix sampler in PyTorch ☆149 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆114 · Updated last month
- Entropy-Based Sampling and Parallel CoT Decoding ☆17 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 10 months ago
- Following Karpathy with a GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish ☆172 · Updated last year
- One click away from a locally downloaded, fine-tuned model, hosted on Hugging Face, with inference built in. In two hours. ☆23 · Updated 2 months ago
- Simple repository for training small reasoning models ☆48 · Updated 11 months ago
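Several repositories in the list above (the Entropix ports and the entropy-based sampling work) share one core idea: measure the entropy of the next-token distribution and adapt the decoding strategy to it. The sketch below is a minimal, dependency-free illustration of that idea, not code from any of the listed repos; the threshold values `low` and `high` are arbitrary placeholders.

```python
import math
import random


def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp((l - m) / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]


def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def entropy_adaptive_sample(logits, low=0.5, high=3.0, rng=random):
    """Pick the next token index: greedy when the model is confident
    (low entropy), sample with a higher temperature when it is not.
    `low`/`high` are illustrative thresholds, not tuned values."""
    h = entropy(softmax(logits))
    if h < low:
        # Confident: just take the argmax.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Uncertain: flatten the distribution before sampling.
    temp = 1.0 if h < high else 1.5
    probs = softmax(logits, temperature=temp)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(logits) - 1
```

In a real decoding loop the `logits` would come from a causal LM's forward pass; the same entropy signal is what the terminal-coloring repo above visualizes per token.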