saurabhaloneai / Llama-3-From-Scratch-In-Pure-Jax
This repository contains a simple Llama 3 implementation in pure JAX; a minimal sketch of the kind of building blocks involved follows below.
☆70 · Updated 10 months ago
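To illustrate what "pure JAX" means in practice, here is a minimal sketch of two Llama-3-style building blocks, RMSNorm and rotary position embeddings. The function names, shapes, and RoPE base below are illustrative assumptions for this listing, not the repository's actual API.

```python
# A minimal, hypothetical sketch of Llama-3-style building blocks in pure JAX.
# Names, shapes, and hyperparameters are illustrative, not the repo's real API.
import jax
import jax.numpy as jnp

def rms_norm(x, weight, eps=1e-5):
    # Normalize by the root-mean-square of the features, then scale.
    variance = jnp.mean(x * x, axis=-1, keepdims=True)
    return x * jax.lax.rsqrt(variance + eps) * weight

def rotary_embedding(x, base=500000.0):
    # Half-split (NeoX-style) RoPE on a (seq_len, n_heads, head_dim) array.
    seq_len, _, head_dim = x.shape
    half = head_dim // 2
    freqs = 1.0 / (base ** (jnp.arange(0, half) / half))
    angles = jnp.arange(seq_len)[:, None] * freqs[None, :]  # (seq_len, half)
    cos = jnp.cos(angles)[:, None, :]                       # broadcast over heads
    sin = jnp.sin(angles)[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    return jnp.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

# Usage: normalize a toy activation and rotate toy query vectors.
x = jnp.ones((8, 4, 64))  # (seq_len, n_heads, head_dim)
w = jnp.ones((64,))
print(rms_norm(x, w).shape, rotary_embedding(x).shape)
```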
Alternatives and similar repositories for Llama-3-From-Scratch-In-Pure-Jax
Users interested in Llama-3-From-Scratch-In-Pure-Jax are comparing it to the repositories listed below.
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆105 · Updated 3 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- Simple Transformer in JAX ☆141 · Updated last year
- A JAX-like function transformation engine, but micro: microjax ☆34 · Updated last year
- ☆28 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆74 · Updated this week
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 4 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆75 · Updated 7 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 9 months ago
- ☆21 · Updated last year
- MoE training for Me and You and maybe other people ☆298 · Updated 2 weeks ago
- ☆42 · Updated last year
- Simple repository for training small reasoning models ☆47 · Updated 10 months ago
- A practical guide to diffusion models, implemented from scratch. ☆229 · Updated this week
- ☆40 · Updated last year
- An introduction to LLM sampling ☆79 · Updated last year
- RL from zero pretrain: can it be done? Yes. ☆282 · Updated 3 months ago
- A really tiny autograd engine ☆96 · Updated 7 months ago
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ☆116 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆122 · Updated 2 months ago
- Evolution Pretraining Fully in Int Formats ☆135 · Updated 2 weeks ago
- ☆55 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 10 months ago
- History files from recording human interaction while solving ARC tasks ☆118 · Updated this week
- A package for defining deep learning models using categorical algebraic expressions. ☆61 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 9 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 5 months ago
- LLM training in simple, raw C/CUDA ☆15 · Updated last year
- ☆287 · Updated last year
- ☆116 · Updated 3 weeks ago