Landmark Attention: Random-Access Infinite Context Length for Transformers
☆426 · Dec 20, 2023 · Updated 2 years ago
Alternatives and similar repositories for landmark-attention
Users interested in landmark-attention are comparing it to the libraries listed below.
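All of the repositories below target long-context attention or context-window extension. For orientation, here is a minimal sketch of the landmark idea from the paper this page is about: the context is split into blocks, each block is summarized by a landmark key, and a query retrieves only the most relevant blocks before running ordinary attention. This is an illustrative approximation, not the repository's actual code: mean-pooled keys stand in for the trained landmark tokens, and the function names are made up for this sketch.

```python
# Minimal sketch of landmark-style retrieval attention (illustrative, not the repo's API).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def landmark_attention(q, keys, values, block_size=64, top_k=4):
    """q: (d,); keys, values: (n, d). Hypothetical helper for illustration."""
    n, d = keys.shape
    # Split the long context into fixed-size blocks.
    blocks = [(s, min(s + block_size, n)) for s in range(0, n, block_size)]
    # One landmark key per block; the paper trains a landmark token,
    # here we approximate it with the mean key of the block.
    landmarks = np.stack([keys[s:e].mean(axis=0) for s, e in blocks])
    # Score blocks by landmark relevance and keep only the top-k.
    picked = np.argsort(landmarks @ q)[-top_k:]
    idx = np.concatenate([np.arange(s, e) for s, e in (blocks[b] for b in picked)])
    # Ordinary softmax attention, restricted to the retrieved tokens.
    w = softmax(keys[idx] @ q / np.sqrt(d))
    return w @ values[idx]

# Example: a random 4096-token context with 64-dim heads.
rng = np.random.default_rng(0)
k, v, q = rng.normal(size=(4096, 64)), rng.normal(size=(4096, 64)), rng.normal(size=64)
out = landmark_attention(q, k, v)  # shape (64,)
```

The point of the retrieval step is that per-query attention cost scales with `top_k * block_size` rather than with the full context length.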
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆125 · Jun 16, 2023 · Updated 2 years ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆822 · Mar 30, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models (see the RoPE-scaling sketch after this list) ☆1,685 · Apr 17, 2024 · Updated last year
- PyTorch implementation for "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆63 · Apr 18, 2024 · Updated last year
- Official code for ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models" ☆45 · Jun 13, 2023 · Updated 2 years ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method. ☆1,464 · Nov 7, 2023 · Updated 2 years ago
- An Autonomous LLM Agent that runs on WizardCoder-15B ☆335 · Oct 21, 2024 · Updated last year
- Official repository for LongChat and LongEval ☆533 · May 24, 2024 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆333 · Sep 9, 2024 · Updated last year
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆507 · Aug 1, 2024 · Updated last year
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated 2 years ago
- ☆106 · Jun 20, 2023 · Updated 2 years ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,697 · Aug 14, 2024 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,065 · Mar 7, 2024 · Updated 2 years ago
- Large Context Attention ☆769 · Oct 13, 2025 · Updated 5 months ago
- Naive Bayes-based Context Extension ☆327 · Dec 9, 2024 · Updated last year
- ☆13 · May 25, 2023 · Updated 2 years ago
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆474 · Apr 21, 2024 · Updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Aug 19, 2024 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,052 · Mar 7, 2024 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · Jun 10, 2024 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,913 · Sep 30, 2023 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆490 · Mar 19, 2024 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Mar 12, 2024 · Updated 2 years ago
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,122 · Jan 15, 2025 · Updated last year
- Rectified Rotary Position Embeddings ☆388 · May 20, 2024 · Updated last year
- ☆306 · Jul 10, 2025 · Updated 8 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Jun 22, 2023 · Updated 2 years ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆449 · Oct 16, 2024 · Updated last year
- Stick-breaking attention ☆63 · Jul 1, 2025 · Updated 8 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆736 · Apr 10, 2024 · Updated last year
- ☆150 · Jun 2, 2023 · Updated 2 years ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆666 · Jun 1, 2024 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- Token Omission Via Attention ☆127 · Oct 13, 2024 · Updated last year
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,537 · Jul 16, 2023 · Updated 2 years ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆374 · Jan 4, 2024 · Updated 2 years ago
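Several entries above (YaRN, PoSE, CLEX, Self-Extend) build on rescaling rotary position embeddings (RoPE) so that positions beyond the training length map back into the trained range. As referenced in the YaRN entry, here is a minimal sketch of plain position interpolation, the baseline that YaRN refines with frequency-dependent scaling and an attention-temperature adjustment; the helper names are illustrative, not any library's API.

```python
# Minimal sketch of RoPE position interpolation (illustrative helper names).
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    """Rotary angles per position; scale > 1 interpolates positions inward."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,)
    return np.outer(positions / scale, inv_freq)       # (len, dim/2)

def apply_rope(x, angles):
    """Rotate adjacent channel pairs of x (len, dim) by the given angles."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Example: extending a model trained at 4k to 16k -> scale = 16384 / 4096 = 4,
# so position 16383 gets the rotary angles of position ~4096.
q = np.random.randn(16384, 64)
q_rot = apply_rope(q, rope_angles(np.arange(16384), 64, scale=4.0))
```

Uniform scaling compresses all frequencies equally, which hurts the high-frequency (short-range) components; YaRN's refinement is to interpolate low frequencies while leaving high frequencies largely untouched.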