Chillee / lit-llama
Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code
☆10 · Updated 2 years ago
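The pitch here is that torch.compile alone, applied to plain PyTorch model code, is enough for fast transformer inference. A minimal sketch of that pattern, using an illustrative stand-in model rather than lit-llama's actual classes:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a decoder-only transformer; lit-llama's
# actual model definition differs.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
).eval()

# Compile the forward pass once; repeated calls with the same shapes
# reuse the compiled graph, which is where the speedup comes from.
compiled_model = torch.compile(model, mode="reduce-overhead")

with torch.inference_mode():
    x = torch.randn(1, 128, 512)  # (batch, seq, d_model) dummy activations
    out = compiled_model(x)
```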
Alternatives and similar repositories for lit-llama
Users interested in lit-llama are comparing it to the libraries listed below.
- An experiment in using Tangent to autodiff Triton ☆79 · Updated last year
- See https://github.com/cuda-mode/triton-index/ instead! ☆10 · Updated last year
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- ☆13 · Updated 5 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 5 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (see the sketch after this list) ☆70 · Updated last year
- ☆53 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆52 · Updated 2 years ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated last month
- ☆18 · Updated last year
- ☆21 · Updated 8 months ago
- Latent Diffusion Language Models ☆69 · Updated 2 years ago
- Experiments toward training a new and improved T5 ☆75 · Updated last year
- ☆121 · Updated last year
- ☆19 · Updated 6 months ago
- ☆34 · Updated last year
- ☆91 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated 5 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated last year
- Token Omission Via Attention ☆127 · Updated last year
- Train with kittens! ☆63 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit ☆62 · Updated 2 years ago
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆20 · Updated 10 months ago
- ☆23 · Updated 11 months ago
- A set of Python scripts that make your experience on TPUs better ☆54 · Updated 2 months ago
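For the fused linear + cross-entropy entry above, a useful reference point is the unfused computation that such a Triton kernel replaces: projecting hidden states to vocabulary logits materializes a large (tokens × vocab) tensor, which the fused kernel avoids. A minimal PyTorch sketch of the unfused version, with an illustrative helper name and sizes (not that repo's actual API):

```python
import torch
import torch.nn.functional as F

# Unfused reference: the (N, vocab) logits tensor below is exactly the
# intermediate a fused Triton kernel would avoid materializing.
def linear_cross_entropy(hidden, weight, targets):
    logits = F.linear(hidden, weight)        # (N, vocab) intermediate
    return F.cross_entropy(logits, targets)  # mean loss over N tokens

N, d_model, vocab = 4096, 1024, 32000        # illustrative sizes
hidden = torch.randn(N, d_model)
weight = torch.randn(vocab, d_model)         # LM head weight, (vocab, d_model)
targets = torch.randint(0, vocab, (N,))
loss = linear_cross_entropy(hidden, weight, targets)
```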