ofirpress / attention_with_linear_biases
Code for the ALiBi method for transformer language models (ICLR 2022)
☆550 · Oct 30, 2023 · Updated 2 years ago
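For reference, here is a minimal sketch of the idea behind this repo: ALiBi drops positional embeddings entirely and instead adds a head-specific linear penalty to attention scores, so a model trained on short sequences can extrapolate to longer ones at inference. The helper names below (`get_alibi_slopes`, `alibi_bias`) are illustrative, not this repository's API, and the slope formula shown is the paper's simple power-of-two-heads case.

```python
import torch

def get_alibi_slopes(n_heads: int) -> torch.Tensor:
    # Geometric sequence of head slopes from the ALiBi paper: for 8 heads
    # this gives 1/2, 1/4, ..., 1/256; in general 2^(-8i/n) for head
    # i = 1..n (simple case where n_heads is a power of two).
    start = 2 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # (heads, q_len, k_len) additive bias: 0 on the diagonal, increasingly
    # negative for keys further in the past once the causal mask is applied.
    slopes = get_alibi_slopes(n_heads)                    # (heads,)
    pos = torch.arange(seq_len)
    distance = pos[None, :] - pos[:, None]                # j - i; <= 0 for j <= i
    return slopes[:, None, None] * distance[None, :, :]  # (heads, q, k)

# Inside a causal attention layer (sketch), the bias is added to the raw
# scores before masking and softmax:
#   scores = (q @ k.transpose(-2, -1)) / d_head**0.5     # (batch, heads, q, k)
#   scores = scores + alibi_bias(n_heads, seq_len).to(scores.device)
#   scores = scores.masked_fill(causal_mask, float("-inf"))
#   attn = scores.softmax(dim=-1)
```

Because the bias depends only on query-key distance, the same function works unchanged at sequence lengths never seen during training, which is the "train short, test long" extrapolation the paper demonstrates.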
Alternatives and similar repositories for attention_with_linear_biases
Users interested in attention_with_linear_biases are comparing it to the libraries listed below.
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,668 · Apr 17, 2024 · Updated last year
- ☆20 · Oct 25, 2022 · Updated 3 years ago
- Sequence modeling with Mega. ☆303 · Jan 28, 2023 · Updated 3 years ago
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ☆879 · Oct 30, 2023 · Updated 2 years ago
- PyTorch implementation of "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation" ☆33 · Dec 29, 2021 · Updated 4 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Apr 30, 2024 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Apr 17, 2024 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,696 · Aug 14, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- Large Context Attention ☆766 · Oct 13, 2025 · Updated 4 months ago
- Transformers at any scale ☆42 · Jan 18, 2024 · Updated 2 years ago
- Ring attention implementation with flash attention ☆980 · Sep 10, 2025 · Updated 5 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,939 · Jan 22, 2026 · Updated 3 weeks ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated 10 months ago
- Long Range Arena for Benchmarking Efficient Transformers ☆777 · Dec 16, 2023 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,544 · Dec 11, 2025 · Updated 2 months ago
- Foundation Architecture for (M)LLMs ☆3,130 · Apr 11, 2024 · Updated last year
- Language Modeling with the H3 State Space Model ☆522 · Sep 29, 2023 · Updated 2 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆791 · Apr 24, 2023 · Updated 2 years ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆804 · Jan 30, 2026 · Updated 2 weeks ago
- PyTorch extensions for high performance and large scale training. ☆3,397 · Apr 26, 2025 · Updated 9 months ago
- Structured state space sequence models ☆2,838 · Jul 17, 2024 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,066 · Mar 7, 2024 · Updated last year
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,924 · Dec 7, 2024 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆667 · Jun 1, 2024 · Updated last year
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Jul 26, 2021 · Updated 4 years ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,814 · Jun 17, 2025 · Updated 7 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Mar 6, 2025 · Updated 11 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,254 · Mar 27, 2024 · Updated last year
- Transformer related optimization, including BERT, GPT ☆6,392 · Mar 27, 2024 · Updated last year
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,491 · Feb 6, 2026 · Updated last week
- OSLO: Open Source for Large-scale Optimization ☆175 · Sep 9, 2023 · Updated 2 years ago
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723 ☆731 · Aug 29, 2022 · Updated 3 years ago
- ☆1,560 · Feb 5, 2026 · Updated last week
- ☆2,947 · Jan 15, 2026 · Updated 3 weeks ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆445 · Oct 16, 2024 · Updated last year
- maximal update parametrization (µP) ☆1,673 · Jul 17, 2024 · Updated last year
- ☆292 · Dec 16, 2024 · Updated last year