tomaarsen / attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
☆737 · Apr 10, 2024 · Updated last year
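The "constant memory usage" in the description comes from the attention-sink cache policy: the KV cache keeps the first few "sink" tokens (which soak up disproportionate attention mass) plus a sliding window of the most recent tokens, evicting everything in between. A minimal sketch of that eviction rule, assuming the paper's defaults of 4 sink tokens and a total budget of 1,024 cached positions; the function name `sink_cache_keep` is illustrative, not the library's API:

```python
# Hedged sketch of the attention-sink KV-cache retention policy:
# keep the first `num_sink` positions plus the trailing `window`
# positions, so cache size is bounded regardless of sequence length.

def sink_cache_keep(seq_len: int, num_sink: int = 4, window: int = 1020) -> list[int]:
    """Return the token positions retained in the KV cache."""
    if seq_len <= num_sink + window:
        # Everything still fits in the budget; nothing is evicted.
        return list(range(seq_len))
    # Sink tokens at the front + sliding window of recent tokens.
    return list(range(num_sink)) + list(range(seq_len - window, seq_len))

# Once the window is full, the cache size stays constant:
print(len(sink_cache_keep(10_000)))  # 1024
```

Because only cache *positions* change, no retraining is needed; the library applies this policy on top of existing pretrained checkpoints.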
Alternatives and similar repositories for attention_sinks
Users interested in attention_sinks are comparing it to the libraries listed below.
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Jul 11, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,669 · Apr 17, 2024 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,705 · Jun 25, 2024 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,694 · Aug 14, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆6,783 · Jan 26, 2026 · Updated 3 weeks ago
- Robust recipes to align language models with human and AI preferences ☆5,495 · Sep 8, 2025 · Updated 5 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆665 · Jun 1, 2024 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Mar 6, 2025 · Updated 11 months ago
- Efficient few-shot learning with Sentence Transformers ☆2,680 · Dec 11, 2025 · Updated 2 months ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,174 · Oct 8, 2024 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Jan 21, 2024 · Updated 2 years ago
- Large Language Model Text Generation Inference ☆10,769 · Jan 8, 2026 · Updated last month
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,834 · Oct 28, 2025 · Updated 3 months ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,463 · Nov 7, 2023 · Updated 2 years ago
- Official inference library for Mistral models ☆10,664 · Nov 21, 2025 · Updated 2 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆445 · Oct 16, 2024 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,093 · Jun 30, 2025 · Updated 7 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- Official repository for LongChat and LongEval ☆534 · May 24, 2024 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,028 · Apr 11, 2025 · Updated 10 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,952 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆913 · Dec 18, 2025 · Updated last month
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,066 · Mar 7, 2024 · Updated last year
- 🤗 HuggingFace Inference Toolkit for Google Cloud Vertex AI (similar to SageMaker's Inference Toolkit, but for Vertex AI and unofficial) ☆17 · Mar 20, 2024 · Updated last year
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,445 · Dec 9, 2025 · Updated 2 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,911 · Sep 30, 2023 · Updated 2 years ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,885 · Updated this week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆176 · Jul 12, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,837 · Jun 10, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆17,360 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,559 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,436 · Jul 17, 2025 · Updated 6 months ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Nov 11, 2024 · Updated last year
- Rectified Rotary Position Embeddings ☆388 · May 20, 2024 · Updated last year
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆152 · Mar 13, 2025 · Updated 11 months ago
- Foundation Architecture for (M)LLMs ☆3,130 · Apr 11, 2024 · Updated last year
- A blazing fast inference solution for text embeddings models ☆4,495 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,095 · Feb 9, 2026 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,889 · May 3, 2024 · Updated last year