srush / do-we-need-attention
☆166 · Updated last year
Alternatives and similar repositories for do-we-need-attention:
Users interested in do-we-need-attention are comparing it to the libraries listed below.
- Understand and test language model architectures on synthetic tasks. ☆194 · Updated last month
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆135 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆105 · Updated this week
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆113 · Updated 4 months ago
- ☆78 · Updated 10 months ago
- nanoGPT-like codebase for LLM training ☆94 · Updated last month
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆189 · Updated 11 months ago
- Inference code for LLaMA models in JAX ☆118 · Updated 11 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆232 · Updated 2 months ago
- ☆37 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆226 · Updated 8 months ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆123 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 6 months ago
- ☆217 · Updated 9 months ago
- Implementation of Infini-Transformer in PyTorch ☆110 · Updated 4 months ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆116 · Updated last year
- ☆103 · Updated 11 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆164 · Updated 3 months ago
- Train very large language models in JAX. ☆204 · Updated last year
- Experiment in using Tangent to autodiff Triton ☆78 · Updated last year
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆349 · Updated last year
- ☆51 · Updated 11 months ago
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 4 months ago
- Simple implementation of µP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam. ☆76 · Updated 9 months ago
- LoRA for arbitrary JAX models and functions ☆136 · Updated last year
- Annotated version of the Mamba paper ☆483 · Updated last year
- WIP ☆93 · Updated 8 months ago