Chillee / lit-llama
Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code
☆10 · Updated 2 years ago
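The description above refers to compiling the model's forward pass with torch.compile. A minimal sketch of that pattern, using a stand-in nn.TransformerEncoder rather than the repo's actual Llama implementation (the model, shapes, and toy decode loop here are illustrative assumptions, not the repo's code):

```python
import torch
import torch.nn as nn

# Stand-in model: the real repo uses a Llama implementation, but the
# torch.compile pattern is the same for any transformer forward pass.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=4,
).eval()

# Compiling the step function fuses kernels and removes Python overhead
# from the hot autoregressive loop; "reduce-overhead" additionally uses
# CUDA graphs where available.
step = torch.compile(model, mode="reduce-overhead")

x = torch.randn(1, 1, 512)  # one embedded "token" per decode step (toy data)
with torch.no_grad():
    for _ in range(8):  # toy decode loop; a real one would use a KV cache
        x = step(x)[:, -1:, :]
```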
Alternatives and similar repositories for lit-llama
Users interested in lit-llama are comparing it to the libraries listed below.
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆10 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 4 months ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- ☆19 · Updated 5 months ago
- Supporting PyTorch FSDP for optimizers ☆83 · Updated 10 months ago
- RWKV model implementation ☆38 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- ☆53 · Updated last year
- ☆34 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆86 · Updated 3 years ago
- ☆91 · Updated last year
- Latent Diffusion Language Models ☆69 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆52 · Updated 2 years ago
- ☆22 · Updated 10 months ago
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last month
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the first sketch after this list) ☆58 · Updated 3 years ago
- ☆53 · Updated last year
- ☆41 · Updated 2 weeks ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (see the second sketch after this list) ☆69 · Updated last year
- Train with kittens! ☆63 · Updated last year
- ☆13 · Updated 5 months ago
- ☆21 · Updated 11 months ago
- Experiments for efforts to train a new and improved T5 ☆75 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- ☆62 · Updated 3 years ago
- ☆121 · Updated last year
- ☆50 · Updated last year
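For the LayerNorm(SmallInit(Embedding)) entry above, here is a minimal PyTorch sketch of the idea as commonly described: initialize the embedding to tiny values, then apply LayerNorm immediately after it. The init scale and class name are assumptions for illustration, not that repo's exact code:

```python
import torch
import torch.nn as nn

class SmallInitEmb(nn.Module):
    """Embedding with tiny init followed by LayerNorm.

    The small init keeps early logits near zero while the LayerNorm
    restores a usable scale, which is the claimed convergence trick.
    The 1e-4 scale here is an illustrative assumption.
    """
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        nn.init.uniform_(self.emb.weight, -1e-4, 1e-4)  # "SmallInit"
        self.norm = nn.LayerNorm(d_model)               # LayerNorm(...)

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        return self.norm(self.emb(idx))

tok = SmallInitEmb(vocab_size=32000, d_model=512)
out = tok(torch.randint(0, 32000, (2, 16)))  # -> [2, 16, 512]
```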
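And for the fused linear + cross-entropy entry: the point of fusing the output projection with the loss is to avoid materializing the full [num_tokens, vocab] logits tensor at once. A PyTorch-level sketch of that memory-saving idea, using chunking rather than an actual Triton kernel (function and parameter names are hypothetical, not that repo's API):

```python
import torch
import torch.nn.functional as F

def chunked_linear_cross_entropy(hidden, weight, targets, chunk_size=1024):
    """Cross-entropy over a linear projection without holding the full
    [num_tokens, vocab] logits in memory; only one chunk of logits exists
    at a time. Illustrates the fusion's memory benefit, not its speed.

    hidden:  [num_tokens, d_model] final hidden states
    weight:  [vocab, d_model] output-projection weight
    targets: [num_tokens] class indices
    """
    total, count = hidden.new_zeros(()), 0
    for start in range(0, hidden.size(0), chunk_size):
        h = hidden[start : start + chunk_size]
        t = targets[start : start + chunk_size]
        logits = h @ weight.T  # only a chunk of logits is materialized
        total = total + F.cross_entropy(logits, t, reduction="sum")
        count += t.numel()
    return total / count

# Toy usage
hidden = torch.randn(4096, 256)
weight = torch.randn(32000, 256)
targets = torch.randint(0, 32000, (4096,))
loss = chunked_linear_cross_entropy(hidden, weight, targets)
```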