This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models"
☆55 · Updated Jul 16, 2024
Alternatives and similar repositories for Q-LLM
Users interested in Q-LLM are comparing it to the repositories listed below.
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" — ☆50 · Updated Oct 18, 2024
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem…" — ☆396 · Updated Apr 20, 2024
- ☆14 · Updated Oct 3, 2024
- ☆18 · Updated Mar 11, 2025
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference — ☆377 · Updated Jul 10, 2025
- ☆11 · Updated Sep 7, 2024
- ☆13 · Updated Jul 2, 2025
- ☆47 · Updated Nov 25, 2024
- ☆29 · Updated Apr 7, 2024
- ☆52 · Updated May 13, 2024
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) — ☆63 · Updated Apr 18, 2024
- ☆306 · Updated Jul 10, 2025
- PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing — ☆20 · Updated Mar 18, 2025
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" — ☆151 · Updated Mar 13, 2025
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speed up with better task performance… — ☆157 · Updated Apr 7, 2025
- Implementations of some LLM KV cache sparsity methods — ☆41 · Updated Jun 6, 2024
- To assess long-text capabilities more comprehensively, we propose Needle-in-a-Haystack PLUS, which shifts the focus from simple fact r… — ☆13 · Updated Mar 4, 2024
- The official code for our ECCV22 oral paper: tracking objects as pixel-wise distributions — ☆159 · Updated Sep 21, 2022
- ☆18 · Updated Jul 11, 2021
- ☆12 · Updated Apr 29, 2024
- (SIGGRAPH Asia 2023) Project page of "HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image" — ☆10 · Updated Dec 9, 2023
- ☆14 · Updated Jun 4, 2024
- The official GitHub repository for the paper "R^2AG: Incorporating Retrieval Information into Retrieval Augmented Generation" (EMNLP 2024 Fin… — ☆38 · Updated Dec 6, 2024
- [ICLR 2025] Code and data for the paper "Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasonin…" — ☆40 · Updated Mar 10, 2025
- Official repo of the paper LM2 — ☆47 · Updated Feb 13, 2025
- Code for the paper "Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification" (EMNLP 2019) — ☆15 · Updated Feb 25, 2020
- The simplest implementation of recent sparse attention patterns for efficient LLM inference — ☆91 · Updated Jul 17, 2025
- ☆31 · Updated Jul 14, 2025
- Efficient retrieval head analysis with Triton flash attention that supports top-K probability — ☆13 · Updated Jun 15, 2024
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention — ☆52 · Updated Aug 6, 2025
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference — ☆58 · Updated Nov 20, 2024
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" — ☆247 · Updated Sep 12, 2025
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" — ☆392 · Updated Jan 19, 2025
- (ICCV 2023) IST-Net: Prior-free Category-level Pose Estimation with Implicit Space Transformation — ☆120 · Updated Dec 7, 2023
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" — ☆23 · Updated Apr 30, 2025
- ☆21 · Updated Jan 16, 2025
- IKEA: Reinforced Internal-External Knowledge Synergistic Reasoning for Efficient Adaptive Search Agent — ☆69 · Updated May 13, 2025
- ACL 2024 — ☆11 · Updated Jun 7, 2024
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings — ☆169 · Updated Jun 13, 2024