saurabhaloneai / Llama-3-From-Scratch-In-Pure-Jax
This repository contains a simple Llama 3 implementation in pure JAX.
☆70 · Updated 8 months ago
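To give a sense of what "pure JAX" means here, below is a minimal sketch (my own, not code taken from the repository) of a typical Llama-style building block, RMSNorm, written as a plain function over jax.numpy arrays with no Flax or Haiku layers. The function name, tensor shapes, and epsilon value are illustrative assumptions.

```python
# Minimal RMSNorm sketch in pure JAX (illustrative; not from the repository).
import jax
import jax.numpy as jnp

def rms_norm(x, weight, eps=1e-5):
    # Scale activations by the inverse root-mean-square over the feature axis,
    # then apply a learned per-feature gain (Llama uses this instead of LayerNorm).
    rms = jnp.sqrt(jnp.mean(jnp.square(x), axis=-1, keepdims=True) + eps)
    return x / rms * weight

# Usage on a (batch, seq, dim) activation tensor.
x = jax.random.normal(jax.random.PRNGKey(0), (2, 8, 64))
y = rms_norm(x, jnp.ones((64,)))
print(y.shape)  # (2, 8, 64)
```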
Alternatives and similar repositories for Llama-3-From-Scratch-In-Pure-Jax
Users interested in Llama-3-From-Scratch-In-Pure-Jax are comparing it to the repositories listed below.
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆104 · Updated last month
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 2 months ago
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆68 · Updated 5 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 7 months ago
- Simple Transformer in Jax ☆139 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆103 · Updated 3 weeks ago
- SIMD quantization kernels ☆89 · Updated last month
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆61 · Updated this week
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- A really tiny autograd engine ☆95 · Updated 5 months ago
- JAX-like function transformation engine, but micro: microjax ☆33 · Updated last year
- rl from zero pretrain, can it be done? yes. ☆279 · Updated last month
- Quantized LLM training in pure CUDA/C++. ☆209 · Updated this week
- Simple repository for training small reasoning models ☆44 · Updated 8 months ago
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆89 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- Port of Andrej Karpathy's nanoGPT to Apple MLX framework. ☆113 · Updated last year
- look how they massacred my boy ☆63 · Updated last year
- An introduction to LLM sampling ☆79 · Updated 10 months ago
- Andrej Karpathy's micrograd implemented in C ☆30 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated 2 weeks ago
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 7 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆297 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week