Extend existing LLMs far beyond their original training length with constant memory usage and no retraining
☆736 · Updated Apr 10, 2024
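The "constant memory" claim comes from the attention-sink idea: keep the KV-cache entries of the first few tokens (the "sinks") plus a sliding window of the most recent tokens, and evict everything in between. Below is a minimal, self-contained sketch of that eviction policy on plain token positions; the helper `evict` and the parameter defaults (`sink_size=4`, `window_size=1020`) are illustrative, not the attention_sinks library's API.

```python
# Hypothetical sketch of attention-sink cache eviction (not the real API):
# retain the first `sink_size` positions plus the last `window_size`
# positions, so the cache size is bounded regardless of sequence length.

def evict(cache, sink_size=4, window_size=1020):
    """Return the token positions kept after eviction."""
    limit = sink_size + window_size
    if len(cache) <= limit:
        return cache
    return cache[:sink_size] + cache[-window_size:]

positions = []
for t in range(5000):          # simulate 5000 decoding steps
    positions.append(t)
    positions = evict(positions)

print(len(positions))          # → 1024 (sink_size + window_size)
print(positions[:4])           # → [0, 1, 2, 3]: the sink tokens survive
```

Because the kept set never exceeds `sink_size + window_size` entries, memory stays flat while generation length grows without bound, which is what distinguishes this family of methods from plain context-window extension.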
Alternatives and similar repositories for attention_sinks
Users interested in attention_sinks are comparing it to the libraries listed below.
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,209 · Updated Jul 11, 2024
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,686 · Updated Apr 17, 2024
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Updated Jun 25, 2024
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,696 · Updated Aug 14, 2024
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,324 · Updated Mar 6, 2025
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆666 · Updated Jun 1, 2024
- 🤗 HuggingFace Inference Toolkit for Google Cloud Vertex AI (similar to SageMaker's Inference Toolkit, but for Vertex AI and unofficial) ☆17 · Updated Mar 20, 2024
- Tools for merging pretrained large language models. ☆6,895 · Updated Mar 15, 2026
- Robust recipes to align language models with human and AI preferences ☆5,535 · Updated Sep 8, 2025
- Efficient few-shot learning with Sentence Transformers ☆2,703 · Updated Dec 11, 2025
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,179 · Updated Oct 8, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated May 20, 2024
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆152 · Updated Mar 13, 2025
- Large Language Model Text Generation Inference ☆10,815 · Updated Mar 21, 2026
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,905 · Updated Jan 21, 2024
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,464 · Updated Nov 7, 2023
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,965 · Updated Oct 28, 2025
- SpanMarker for Named Entity Recognition ☆465 · Updated Jan 8, 2025
- Official inference library for Mistral models ☆10,741 · Updated Feb 26, 2026
- Official repository for LongChat and LongEval ☆533 · Updated May 24, 2024
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,105 · Updated Jun 30, 2025
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · Updated Feb 26, 2026
- Rectified Rotary Position Embeddings ☆388 · Updated May 20, 2024
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Updated Oct 16, 2024
- Accessible large language models via k-bit quantization for PyTorch. ☆8,078 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,861 · Updated Jun 10, 2024
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,479 · Updated Jul 17, 2025
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,065 · Updated Mar 7, 2024
- Train transformer language models with reinforcement learning. ☆17,781 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,143 · Updated this week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆180 · Updated Jul 12, 2024
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆532 · Updated Feb 10, 2025
- ☆311 · Updated Jul 10, 2025
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated Nov 11, 2024
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,039 · Updated Apr 11, 2025
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,965 · Updated Mar 16, 2026
- A blazing fast inference solution for text embeddings models ☆4,625 · Updated Mar 23, 2026
- Fast and memory-efficient exact attention ☆22,938 · Updated this week
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆32 · Updated Sep 19, 2025