saurabhaloneai / Llama-3-From-Scratch-In-Pure-Jax
This repository contains a simple Llama 3 implementation in pure JAX.
☆70 · Updated 9 months ago
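As a rough illustration of what a "pure JAX" transformer component can look like, here is a minimal, hypothetical sketch of a Llama-style RMSNorm layer written with plain `jax.numpy`; the repository's actual implementation, function names, and hyperparameters may differ.

```python
# Hypothetical sketch of an RMSNorm layer in pure JAX
# (illustrative only; not copied from the repository).
import jax
import jax.numpy as jnp

def rms_norm(x, weight, eps=1e-5):
    # Normalize by the root-mean-square over the last (feature) axis,
    # then scale by a learned per-feature weight vector.
    variance = jnp.mean(jnp.square(x), axis=-1, keepdims=True)
    return x * jax.lax.rsqrt(variance + eps) * weight

x = jnp.ones((2, 8, 64))   # (batch, seq_len, hidden_dim)
w = jnp.ones((64,))        # learned scale parameters
y = rms_norm(x, w)
print(y.shape)             # (2, 8, 64)
```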
Alternatives and similar repositories for Llama-3-From-Scratch-In-Pure-Jax
Users that are interested in Llama-3-From-Scratch-In-Pure-Jax are comparing it to the libraries listed below
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆112 · Updated 2 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆105 · Updated 2 months ago
- Simple Transformer in Jax ☆139 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 7 months ago
- A really tiny autograd engine ☆96 · Updated 6 months ago
- ☆28 · Updated last year
- ☆55 · Updated last year
- Jax like function transformation engine but micro, microjax ☆33 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 3 months ago
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆73 · Updated 6 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- An introduction to LLM Sampling ☆79 · Updated 11 months ago
- ☆40 · Updated last year
- A practical guide to diffusion models, implemented from scratch. ☆164 · Updated this week
- aesthetic tensor visualiser ☆27 · Updated 7 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆68 · Updated 2 weeks ago
- Simple repository for training small reasoning models ☆47 · Updated 10 months ago
- Andrej Karpathy's micrograd implemented in C ☆30 · Updated last year
- A package for defining deep learning models using categorical algebraic expressions. ☆61 · Updated last year
- ☆21 · Updated last year
- look how they massacred my boy ☆63 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 2 months ago
- Collection of autoregressive model implementations ☆85 · Updated 7 months ago
- ☆213 · Updated this week
- rl from zero pretrain, can it be done? yes. ☆282 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 4 months ago
- Port of Andrej Karpathy's nanoGPT to Apple MLX framework. ☆116 · Updated last year
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 8 months ago
- Evolution Pretraining Fully in Int Formats ☆123 · Updated last week
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 8 months ago