thecharlieblake / lovely-llama
An implementation of the Llama architecture, to instruct and delight
☆21 · Updated 4 months ago
Alternatives and similar repositories for lovely-llama
Users interested in lovely-llama are comparing it to the libraries listed below.
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- ☆89 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- supporting pytorch FSDP for optimizers ☆84 · Updated 9 months ago
- Easily run PyTorch on multiple GPUs & machines ☆47 · Updated 3 months ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆32 · Updated 3 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆60 · Updated last week
- ☆19 · Updated 4 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- A fusion of a linear layer and a cross entropy loss, written for pytorch in triton. ☆70 · Updated last year
- ☆21 · Updated 7 months ago
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- Custom triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 11 months ago
- Train a SmolLM-style llm on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 · Updated 2 months ago
- JAX implementation of the Mistral 7b v0.2 model ☆36 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- ☆122 · Updated last year
- ☆62 · Updated 3 years ago
- PyTorch centric eager mode debugger ☆48 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- ☆58 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- train with kittens! ☆62 · Updated 11 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆70 · Updated last month
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆112 · Updated 3 weeks ago
- Make triton easier ☆47 · Updated last year
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 4 months ago