thecharlieblake / lovely-llama
An implementation of the Llama architecture, to instruct and delight
☆21, updated last month
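For context on what "the Llama architecture" entails, here is a minimal PyTorch sketch of the three pieces that distinguish a Llama-style block from a vanilla transformer: RMSNorm in place of LayerNorm, rotary position embeddings (RoPE) in place of learned positions, and a SwiGLU feed-forward. This is an illustrative sketch, not lovely-llama's actual code; all names are hypothetical.

```python
# Illustrative sketch only; not lovely-llama's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Llama normalizes by root-mean-square, with no mean subtraction or bias."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

def rope_cache(seq_len: int, head_dim: int, base: float = 10000.0):
    """Precompute cos/sin tables for rotary position embeddings."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.outer(torch.arange(seq_len).float(), inv_freq)
    angles = torch.cat((angles, angles), dim=-1)  # (seq_len, head_dim)
    return angles.cos(), angles.sin()

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    """Rotate each query/key channel pair by a position-dependent angle."""
    return x * cos + rotate_half(x) * sin

class SwiGLU(nn.Module):
    """Llama's feed-forward: a SiLU-gated linear unit instead of a plain MLP."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))
```

A Llama block composes these pre-norm: `x = x + attention(RMSNorm(x))`, with `apply_rope` applied to the queries and keys inside the attention, followed by `x = x + SwiGLU(RMSNorm(x))`.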
Related projects:
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code (☆10, updated last year)
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. (☆29, updated 3 weeks ago)
- Experiment in using Tangent to autodiff Triton (☆66, updated 7 months ago)
- JAX implementation of the Mistral 7B v0.2 model (☆32, updated 2 months ago)
- Triton implementation of the HyperAttention algorithm (☆46, updated 9 months ago)
- PyTorch half-precision GEMM library with fused optional bias + optional ReLU/GELU (☆25, updated 3 weeks ago)
- Mixture of A Million Experts (☆29, updated last month)
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto (☆53, updated 4 months ago)
- Scalable neural net training via automatic normalization in the modular norm (☆108, updated last month)
- Collection of autoregressive model implementations (☆62, updated 2 weeks ago)
- Utilities for PyTorch distributed (☆23, updated 11 months ago)
- HomebrewNLP in JAX flavour for maintainable TPU training (☆46, updated 8 months ago)
- A place to store reusable transformer components of my own creation or found on the interwebs (☆43, updated 3 weeks ago)
- Engineering the state of RNN language models (Mamba, RWKV, etc.) (☆31, updated 3 months ago)
- CUDA implementation of autoregressive linear attention, with all the latest research findings (☆43, updated last year)
- Automatically take good care of your preemptible TPUs (☆28, updated last year)
- Explorations into the recently proposed Taylor Series Linear Attention (☆85, updated last month)
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training (☆110, updated 5 months ago)