saurabhaloneai / Llama-3-From-Scratch-In-Pure-Jax
This repository contains a simple Llama 3 implementation in pure JAX.
☆58 · Updated last month
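To give a flavor of what "pure JAX" means here: a Llama-style block is built from plain array functions rather than framework modules. Below is a minimal sketch of RMSNorm, one of Llama 3's core building blocks, written with only `jax.numpy` — the function and argument names are illustrative assumptions, not taken from the repository.

```python
import jax
import jax.numpy as jnp

def rms_norm(x, weight, eps=1e-5):
    # Llama-style RMSNorm: divide by the root-mean-square of the last
    # axis (plus eps for stability), then apply a learned gain `weight`.
    mean_sq = jnp.mean(jnp.square(x), axis=-1, keepdims=True)
    return x * jax.lax.rsqrt(mean_sq + eps) * weight

# Example: a batch of two 4-dim activations with a unit gain.
x = jnp.array([[1.0, 2.0, 3.0, 4.0],
               [0.5, 0.5, 0.5, 0.5]])
out = rms_norm(x, jnp.ones(4))
```

Because it is just a pure function of arrays, it composes directly with `jax.jit` and `jax.vmap`, which is the main appeal of writing the model this way.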
Alternatives and similar repositories for Llama-3-From-Scratch-In-Pure-Jax:
Users interested in Llama-3-From-Scratch-In-Pure-Jax are comparing it to the libraries listed below.
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆91 · Updated 2 weeks ago
- Simple Transformer in Jax ☆136 · Updated 9 months ago
- look how they massacred my boy ☆63 · Updated 5 months ago
- ☆38 · Updated 7 months ago
- Jax like function transformation engine but micro, microjax ☆30 · Updated 5 months ago
- ☆27 · Updated 8 months ago
- An introduction to LLM Sampling ☆77 · Updated 3 months ago
- A really tiny autograd engine ☆90 · Updated 11 months ago
- ☆52 · Updated 11 months ago
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. ☆44 · Updated last year
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆60 · Updated this week
- Custom triton kernels for training Karpathy's nanoGPT. ☆18 · Updated 5 months ago
- A package for defining deep learning models using categorical algebraic expressions. ☆60 · Updated 7 months ago
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆63 · Updated 4 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆41 · Updated last week
- Simple GRPO scripts and configurations. ☆58 · Updated last month
- ☆41 · Updated 2 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 4 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- smol models are fun too ☆90 · Updated 4 months ago
- Learning about CUDA by writing PTX code. ☆124 · Updated last year
- ☆87 · Updated last week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆39 · Updated last month
- Train your own SOTA deductive reasoning model ☆81 · Updated 2 weeks ago
- ☆20 · Updated 4 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆52 · Updated last week
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 6 months ago
- Andrej Karpathy's micrograd implemented in C ☆28 · Updated 7 months ago
- Fast bare-bones BPE for modern tokenizer training ☆149 · Updated 5 months ago
- LLM training in simple, raw C/CUDA ☆14 · Updated 3 months ago