This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models"
☆55 · Jul 16, 2024 · updated last year
Alternatives and similar repositories for Q-LLM
Users interested in Q-LLM are comparing it to the libraries listed below.
- ☆18 · Dec 2, 2024 · updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" (☆51 · Oct 18, 2024 · updated last year)
- ☆14 · Oct 3, 2024 · updated last year
- ☆18 · Mar 11, 2025 · updated 11 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆374 · Jul 10, 2025 · updated 7 months ago)
- Detecting drift in a diabetes dataset using Taipy (☆12 · May 19, 2025 · updated 9 months ago)
- (SIGGRAPH Asia 2023) Project page of "HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image" (☆10 · Dec 9, 2023 · updated 2 years ago)
- GPT-4-based personalized arXiv paper assistant bot (☆12 · Mar 1, 2024 · updated 2 years ago)
- AI search engine (☆13 · Sep 24, 2025 · updated 5 months ago)
- ☆13 · Jul 2, 2025 · updated 8 months ago
- Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" (☆151 · Mar 13, 2025 · updated 11 months ago)
- Official code repository for the paper "Key-value memory in the brain" (☆31 · Feb 25, 2025 · updated last year)
- ☆29 · Apr 7, 2024 · updated last year
- ☆12 · Apr 29, 2024 · updated last year
- To assess long-text capabilities more comprehensively, we propose Needle-in-a-Haystack PLUS, which shifts the focus from simple fact r… (☆13 · Mar 4, 2024 · updated 2 years ago)
- Code associated with the paper "Few-Shot Self-Rationalization with Natural Language Prompts" (☆13 · Apr 27, 2022 · updated 3 years ago)
- ☆11 · Sep 7, 2024 · updated last year
- A repository to create a quick sales application (☆15 · May 19, 2025 · updated 9 months ago)
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR 2024) (☆63 · Apr 18, 2024 · updated last year)
- Taipy demo of a real-time dashboard of air pollution around a factory (☆18 · May 20, 2025 · updated 9 months ago)
- PyTorch implementation of StableMask (ICML 2024) (☆15 · Jun 27, 2024 · updated last year)
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC 2025) (☆25 · Feb 26, 2026 · updated last week)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆527 · Feb 10, 2025 · updated last year)
- Layer-condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… (☆156 · Apr 7, 2025 · updated 10 months ago)
- React hooks for connecting to Agent Client Protocol (ACP) servers (☆46 · updated this week)
- The simplest implementation of recent sparse attention patterns for efficient LLM inference (☆91 · Jul 17, 2025 · updated 7 months ago)
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" (☆247 · Sep 12, 2025 · updated 5 months ago)
- ☆301 · Jul 10, 2025 · updated 7 months ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference (☆57 · Nov 20, 2024 · updated last year)
- Converts text input or a URL into a knowledge graph and displays it (☆19 · Sep 18, 2023 · updated 2 years ago)
- Implementations of some LLM KV cache sparsity methods (☆40 · Jun 6, 2024 · updated last year)
- [CVPR 2024 CVinW] Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering (☆20 · Sep 21, 2024 · updated last year)
- ☆21 · Jan 16, 2025 · updated last year
- Pages and tools (☆23 · Sep 4, 2024 · updated last year)
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression" (☆18 · Dec 13, 2024 · updated last year)
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" (☆23 · Apr 30, 2025 · updated 10 months ago)
- ☆47 · Nov 25, 2024 · updated last year
- Code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… (☆28 · Jul 15, 2025 · updated 7 months ago)
- Adaptation of titans-pytorch to LLaMA models on HF (☆25 · Mar 6, 2025 · updated 11 months ago)