Landmark Attention: Random-Access Infinite Context Length for Transformers
☆426 · Dec 20, 2023 · Updated 2 years ago
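The paper this repository implements inserts a landmark token after each block of the KV cache and uses attention to the landmarks to retrieve only the most relevant blocks, giving random access to arbitrarily long contexts at roughly constant per-step cost. A minimal pure-Python sketch of that retrieval idea follows; note that the real method learns the landmark representations during training, whereas this sketch substitutes the block-mean key as a hypothetical stand-in, and `landmark_attention_step` is an illustrative name, not the repo's API:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def landmark_attention_step(q, keys, values, block_size=4, top_k=2):
    """One decoding step of block-sparse attention guided by landmarks.

    keys/values: lists of d-dimensional vectors, len divisible by block_size.
    Each block gets a landmark (here: the mean of its keys, a stand-in for
    the trained landmark token of the paper). The query scores the landmarks,
    only the top_k blocks enter the final softmax, so per-step cost is
    O(top_k * block_size) rather than O(len(keys)).
    """
    d = len(q)
    n_blocks = len(keys) // block_size
    # One landmark vector per block: the mean key of that block.
    landmarks = [
        [sum(k[j] for k in keys[b * block_size:(b + 1) * block_size]) / block_size
         for j in range(d)]
        for b in range(n_blocks)
    ]
    # Score blocks by landmark relevance, keep the top_k.
    block_scores = [dot(lm, q) / math.sqrt(d) for lm in landmarks]
    chosen = sorted(range(n_blocks), key=lambda b: block_scores[b])[-top_k:]
    chosen = sorted(chosen)
    # Dense attention restricted to the retrieved blocks only.
    idx = [i for b in chosen for i in range(b * block_size, (b + 1) * block_size)]
    w = softmax([dot(keys[i], q) / math.sqrt(d) for i in idx])
    out = [sum(w[t] * values[i][j] for t, i in enumerate(idx)) for j in range(d)]
    return out, chosen
```

Under these assumptions, the query never attends to tokens outside the retrieved blocks, which is what makes the cache effectively unbounded: only landmarks are scanned in full.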
Alternatives and similar repositories for landmark-attention
Users interested in landmark-attention are comparing it to the repositories listed below.
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆125 · Jun 16, 2023 · Updated 2 years ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆825 · Mar 30, 2024 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,708 · Apr 17, 2024 · Updated 2 years ago
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆63 · Apr 18, 2024 · Updated 2 years ago
- Official code for ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L… ☆45 · Jun 13, 2023 · Updated 2 years ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,465 · Nov 7, 2023 · Updated 2 years ago
- An Autonomous LLM Agent that runs on Wizcoder-15B ☆336 · Oct 21, 2024 · Updated last year
- Official repository for LongChat and LongEval ☆534 · May 24, 2024 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆335 · Sep 9, 2024 · Updated last year
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆514 · Aug 1, 2024 · Updated last year
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated 2 years ago
- ☆107 · Jun 20, 2023 · Updated 2 years ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,692 · Aug 14, 2024 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,064 · Mar 7, 2024 · Updated 2 years ago
- Large Context Attention ☆769 · Oct 13, 2025 · Updated 6 months ago
- Naive Bayes-based Context Extension ☆328 · Dec 9, 2024 · Updated last year
- ☆13 · May 25, 2023 · Updated 2 years ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆474 · Apr 21, 2024 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,899 · Jun 10, 2024 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,053 · Mar 7, 2024 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · Aug 19, 2024 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,915 · Sep 30, 2023 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆497 · Mar 19, 2024 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Mar 12, 2024 · Updated 2 years ago
- Rectified Rotary Position Embeddings ☆395 · May 20, 2024 · Updated last year
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,164 · Jan 15, 2025 · Updated last year
- ☆311 · Jul 10, 2025 · Updated 9 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Jun 22, 2023 · Updated 2 years ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Oct 16, 2024 · Updated last year
- Stick-breaking attention ☆63 · Jul 1, 2025 · Updated 10 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆736 · Apr 10, 2024 · Updated 2 years ago
- ☆150 · Jun 2, 2023 · Updated 2 years ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Jun 1, 2024 · Updated last year
- Token Omission Via Attention ☆127 · Oct 13, 2024 · Updated last year
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,531 · Jul 16, 2023 · Updated 2 years ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆377 · Jan 4, 2024 · Updated 2 years ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆25 · Jun 6, 2024 · Updated last year