Chillee / lit-llama
Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code
☆10 · Updated 2 years ago
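The description's core idea is compiling a standard PyTorch transformer's forward pass with torch.compile to speed up inference. Below is a minimal sketch of that pattern, using a toy stand-in encoder rather than the repo's actual lit-llama model:

```python
# Minimal sketch of torch.compile-accelerated inference (assumes PyTorch >= 2.0).
# The toy encoder below is a hypothetical stand-in for the repo's lit-llama model.
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=2,
).eval()

# torch.compile traces and optimizes the forward pass; the first call pays
# the compilation cost, subsequent calls run the compiled kernels.
compiled = torch.compile(model)

with torch.inference_mode():
    x = torch.randn(1, 16, 256)  # (batch, seq_len, d_model)
    out = compiled(x)
    print(out.shape)  # torch.Size([1, 16, 256])
```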
Alternatives and similar repositories for lit-llama
Users interested in lit-llama are comparing it to the libraries listed below.
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 6 months ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- ☆13 · Updated 6 months ago
- ☆34 · Updated last year
- ☆19 · Updated this week
- ☆53 · Updated last year
- ☆53 · Updated last year
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆62 · Updated 2 weeks ago
- A set of Python scripts that make your experience on TPU better ☆54 · Updated 2 months ago
- ☆20 · Updated 2 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Updated 3 years ago
- Token Omission Via Attention ☆127 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated last year
- ☆24 · Updated 11 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- ☆91 · Updated last year
- Latent Diffusion Language Models ☆70 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆46 · Updated 2 years ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ☆47 · Updated last year
- ☆32 · Updated last year
- ☆21 · Updated last year
- ☆50 · Updated last year
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 2 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆63 · Updated 9 months ago