thecharlieblake / lovely-llama
An implementation of the Llama architecture, to instruct and delight
☆21 · Updated 2 months ago
Alternatives and similar repositories for lovely-llama
Users interested in lovely-llama are comparing it to the libraries listed below.
- ☆82 · Updated last year
- Experiment of using Tangent to autodiff Triton · ☆79 · Updated last year
- ☆20 · Updated 2 years ago
- Supporting PyTorch FSDP for optimizers · ☆84 · Updated 7 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. · ☆32 · Updated last month
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training · ☆130 · Updated last year
- ☆19 · Updated 2 months ago
- Automatically take good care of your preemptible TPUs · ☆36 · Updated 2 years ago
- ☆53 · Updated 10 months ago
- Easily run PyTorch on multiple GPUs & machines · ☆46 · Updated last month
- Custom Triton kernels for training Karpathy's nanoGPT. · ☆19 · Updated 9 months ago
- A fusion of a linear layer and a cross entropy loss, written for PyTorch in Triton. · ☆70 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆59 · Updated this week
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. · ☆46 · Updated last year
- ☆113 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model · ☆35 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers · ☆101 · Updated last week
- ☆21 · Updated 5 months ago
- Triton Implementation of HyperAttention Algorithm · ☆48 · Updated last year
- Make Triton easier · ☆47 · Updated last year
- A library for unit scaling in PyTorch · ☆128 · Updated 3 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆149 · Updated last month
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code · ☆11 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training · ☆50 · Updated last year
- PyTorch-centric eager mode debugger · ☆47 · Updated 7 months ago
- ☆61 · Updated 3 years ago
- Experimenting with how best to do multi-host dataloading · ☆10 · Updated 2 years ago
- ☆34 · Updated 10 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. · ☆65 · Updated this week
- CUDA and Triton implementations of Flash Attention with SoftmaxN. · ☆71 · Updated last year