Extend existing LLMs far beyond their original training length with constant memory usage, without retraining
☆736 · updated Apr 10, 2024
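The constant-memory behavior described above comes from a simple KV-cache eviction policy: always retain a handful of initial "sink" tokens plus a sliding window of the most recent tokens, and drop everything in between. The sketch below is library-independent and illustrative only — the function name and parameters are not the attention_sinks API, and the values 4 and 1020 are assumed here as the commonly cited defaults for sink size and window size:

```python
def evict_kv_cache(cache, num_sinks=4, window_size=1020):
    """Attention-sink eviction policy (illustrative sketch).

    Keeps the first `num_sinks` entries (the attention-sink tokens)
    plus the most recent `window_size` entries, evicting the middle.
    The cache therefore never exceeds num_sinks + window_size entries,
    no matter how long generation runs.
    """
    if len(cache) <= num_sinks + window_size:
        return list(cache)
    return list(cache[:num_sinks]) + list(cache[-window_size:])

# Simulate a KV cache that has grown to 2000 token positions.
cache = list(range(2000))
kept = evict_kv_cache(cache)

assert len(kept) == 4 + 1020        # memory is capped
assert kept[:4] == [0, 1, 2, 3]     # sink tokens are never evicted
assert kept[-1] == 1999             # the newest token is always kept
```

In the real library this policy is applied to per-layer key/value tensors rather than a Python list, but the retained-index logic is the same: a fixed prefix plus a rolling suffix.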
Alternatives and similar repositories for attention_sinks
Users interested in attention_sinks are comparing it to the libraries listed below.
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,196 · updated Jul 11, 2024)
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,676 · updated Apr 17, 2024)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (☆2,714 · updated Jun 25, 2024)
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) (☆2,694 · updated Aug 14, 2024)
- Tools for merging pretrained large language models (☆6,842 · updated Feb 28, 2026)
- Robust recipes to align language models with human and AI preferences (☆5,510 · updated Sep 8, 2025)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (☆665 · updated Jun 1, 2024)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,317 · updated Mar 6, 2025)
- Efficient few-shot learning with Sentence Transformers (☆2,690 · updated Dec 11, 2025)
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… (☆2,175 · updated Oct 8, 2024)
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (☆1,900 · updated Jan 21, 2024)
- Large Language Model Text Generation Inference (☆10,795 · updated Jan 8, 2026)
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… (☆5,892 · updated Oct 28, 2025)
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… (☆1,464 · updated Nov 7, 2023)
- Official inference library for Mistral models (☆10,700 · updated Feb 26, 2026)
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" (☆448 · updated Oct 16, 2024)
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (☆2,097 · updated Jun 30, 2025)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) (☆209 · updated May 20, 2024)
- Official repository for LongChat and LongEval (☆533 · updated May 24, 2024)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (☆5,028 · updated Apr 11, 2025)
- Official implementation of Half-Quadratic Quantization (HQQ) (☆915 · updated Feb 26, 2026)
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" (☆1,064 · updated Mar 7, 2024)
- Accessible large language models via k-bit quantization for PyTorch (☆8,019 · updated this week)
- 🤗 HuggingFace Inference Toolkit for Google Cloud Vertex AI (similar to SageMaker's Inference Toolkit, but for Vertex AI and unofficial) (☆17 · updated Mar 20, 2024)
- A fast inference library for running LLMs locally on modern consumer-class GPUs (☆4,451 · updated this week)
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights (☆2,913 · updated Sep 30, 2023)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,841 · updated Jun 10, 2024)
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks (☆2,915 · updated Mar 3, 2026)
- Train transformer language models with reinforcement learning (☆17,523 · updated this week)
- Minimalistic large language model 3D-parallelism training (☆2,588 · updated Feb 19, 2026)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,453 · updated Jul 17, 2025)
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM (☆179 · updated Jul 12, 2024)
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta (☆13 · updated Nov 11, 2024)
- Rectified Rotary Position Embeddings (☆389 · updated May 20, 2024)
- Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" (☆151 · updated Mar 13, 2025)
- Foundation Architecture for (M)LLMs (☆3,134 · updated Apr 11, 2024)
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… (☆3,114 · updated Mar 2, 2026)
- A blazing-fast inference solution for text embedding models (☆4,553 · updated Feb 25, 2026)
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens (☆8,902 · updated May 3, 2024)