Extend existing LLMs well beyond their original training length with constant memory usage, without retraining
☆736 · Apr 10, 2024 · Updated 2 years ago
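The idea behind attention sinks is that a streaming LLM stays stable with constant memory if the KV cache always retains the first few "sink" tokens plus a sliding window of the most recent tokens, evicting everything in between. A minimal sketch of that eviction policy, with a plain Python list standing in for per-layer KV tensors; the function name and the defaults (4 sink tokens, 1020-token window, matching the sizes commonly quoted for this library) are illustrative, not the library's actual API:

```python
def evict_kv_cache(cache, attention_sink_size=4, window_size=1020):
    """Attention-sink eviction (sketch): keep the first
    `attention_sink_size` cached positions (the sink tokens) plus the
    most recent `window_size` positions, dropping the middle.

    `cache` is a toy list of cached positions; real implementations
    slice key/value tensors per layer instead.
    """
    if len(cache) <= attention_sink_size + window_size:
        return cache  # cache still fits, nothing to evict
    return cache[:attention_sink_size] + cache[-window_size:]

# Toy run: positions 0..9 with a 2-token sink and a 4-token window.
print(evict_kv_cache(list(range(10)), attention_sink_size=2, window_size=4))
# -> [0, 1, 6, 7, 8, 9]
```

Because the cache size is capped at `attention_sink_size + window_size`, memory usage stays constant no matter how long generation runs.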
Alternatives and similar repositories for attention_sinks
Users interested in attention_sinks are comparing it to the libraries listed below.
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,225 · Jul 11, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,710 · Apr 17, 2024 · Updated 2 years ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,730 · Jun 25, 2024 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,692 · Aug 14, 2024 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,333 · Mar 6, 2025 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Jun 1, 2024 · Updated last year
- 🤗 HuggingFace Inference Toolkit for Google Cloud Vertex AI (similar to SageMaker's Inference Toolkit, but for Vertex AI and unofficial) ☆17 · Mar 20, 2024 · Updated 2 years ago
- Tools for merging pretrained large language models. ☆7,052 · Mar 15, 2026 · Updated last month
- Robust recipes to align language models with human and AI preferences ☆5,593 · Apr 8, 2026 · Updated last month
- Efficient few-shot learning with Sentence Transformers ☆2,728 · Apr 17, 2026 · Updated 3 weeks ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,178 · Oct 8, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆152 · Mar 13, 2025 · Updated last year
- Large Language Model Text Generation Inference ☆10,854 · Mar 21, 2026 · Updated last month
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,909 · Jan 21, 2024 · Updated 2 years ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,465 · Nov 7, 2023 · Updated 2 years ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆6,156 · Apr 8, 2026 · Updated last month
- SpanMarker for Named Entity Recognition ☆473 · Apr 10, 2026 · Updated last month
- Official inference library for Mistral models ☆10,798 · Apr 20, 2026 · Updated 2 weeks ago
- Official repository for LongChat and LongEval ☆535 · May 24, 2024 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,110 · Jun 30, 2025 · Updated 10 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆933 · Feb 26, 2026 · Updated 2 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Oct 16, 2024 · Updated last year
- Rectified Rotary Position Embeddings ☆395 · May 20, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ☆8,178 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,901 · Jun 10, 2024 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,529 · Jul 17, 2025 · Updated 9 months ago
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,064 · Mar 7, 2024 · Updated 2 years ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,209 · Apr 27, 2026 · Updated last week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆183 · Jul 12, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆18,282 · Updated this week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆543 · Feb 10, 2025 · Updated last year
- ☆313 · Jul 10, 2025 · Updated 9 months ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Nov 11, 2024 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,059 · Apr 11, 2025 · Updated last year
- A blazing-fast inference solution for text embedding models ☆4,767 · Apr 30, 2026 · Updated last week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆3,033 · Apr 20, 2026 · Updated 2 weeks ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆32 · Sep 19, 2025 · Updated 7 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,514 · Mar 4, 2026 · Updated 2 months ago