QJL: 1-Bit Quantized JL transform for KV Cache Quantization with Zero Overhead
☆92 · Jan 27, 2025 · Updated last year
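For context on the title: QJL quantizes a Johnson–Lindenstrauss projection of each key vector down to its sign bits, then estimates query–key inner products from those bits plus the key's norm. A minimal NumPy sketch of that sign-of-Gaussian-projection idea (function names are hypothetical, not QJL's API; this is an illustration of the estimator, not the repo's implementation):

```python
import numpy as np

def qjl_sketch(k, S):
    """Compress key k to 1 bit per projection: sign(S @ k).
    We also keep the key's scalar norm (hypothetical helper)."""
    return np.sign(S @ k), np.linalg.norm(k)

def qjl_inner_product(q, bits, k_norm, S):
    """Estimate <q, k> from k's 1-bit sketch.
    For Gaussian S, E[sign(<s,k>) * <s,q>] = sqrt(2/pi) * <q,k> / ||k||,
    so rescale the bit/projection dot product by sqrt(pi/2) * ||k|| / m."""
    m = S.shape[0]
    return np.sqrt(np.pi / 2) * k_norm * ((S @ q) @ bits) / m

rng = np.random.default_rng(0)
d, m = 64, 8192                  # key dimension, sketch dimension
S = rng.standard_normal((m, d))  # Gaussian projection shared by q and k
q = rng.standard_normal(d)
k = rng.standard_normal(d)

bits, k_norm = qjl_sketch(k, S)
est = qjl_inner_product(q, bits, k_norm, S)
err = abs(est - q @ k)           # estimation error shrinks like O(1/sqrt(m))
```

The sketch stores m bits plus one float per key instead of d floats, which is the "1-bit, zero overhead" angle: no quantization constants or calibration, just signs and a norm.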
Alternatives and similar repositories for QJL
Users that are interested in QJL are comparing it to the libraries listed below.
- [NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers ☆21 · May 16, 2023 · Updated 2 years ago
- ☆23 · Mar 7, 2025 · Updated last year
- Single-thread, end-to-end C++ implementation of the BitNet (1.58-bit weight) model ☆14 · Nov 17, 2024 · Updated last year
- [SIGMOD 2025] Practical and Asymptotically Optimal Quantization of High-Dimensional Vectors in Euclidean Space for Approximate Nearest Ne… ☆67 · Mar 30, 2026 · Updated 2 weeks ago
- Triton implementation of the HyperAttention algorithm ☆48 · Dec 11, 2023 · Updated 2 years ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆85 · Dec 7, 2025 · Updated 4 months ago
- Quick ADC ☆27 · May 31, 2019 · Updated 6 years ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆390 · Apr 13, 2025 · Updated last year
- ☆14 · Jun 4, 2024 · Updated last year
- A simple script to plot the Roofline model for given HW platforms and applications ☆10 · Mar 17, 2026 · Updated last month
- A framework for steering MoE models by detecting and controlling behavior-linked experts. ☆33 · Sep 12, 2025 · Updated 7 months ago
- ☆25 · Oct 31, 2024 · Updated last year
- Residual vector quantization for KV cache compression in large language models ☆12 · Oct 22, 2024 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆380 · Jul 10, 2025 · Updated 9 months ago
- Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent ☆16 · Sep 8, 2022 · Updated 3 years ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆145 · Dec 4, 2024 · Updated last year
- CaesarNeRF: Calibrated Semantic Representation for Few-Shot Generalizable Neural Rendering ☆14 · Mar 6, 2024 · Updated 2 years ago
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters ☆58 · Jul 23, 2024 · Updated last year
- Official PyTorch implementation of SynergyNeRF: "Synergistic Integration of Coordinate Network and Tensorial Feature for Improving NeRFs … ☆12 · Sep 23, 2024 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Jul 4, 2025 · Updated 9 months ago
- ☆52 · Nov 5, 2024 · Updated last year
- Code for the experiments and websites of the paper "Same Task, Different Circuits" ☆34 · Oct 21, 2025 · Updated 5 months ago
- Compression primitives for uplink compression in Federated Learning that are compatible with Secure Aggregation. ☆10 · Jul 27, 2022 · Updated 3 years ago
- PQ Fast Scan ☆70 · May 31, 2019 · Updated 6 years ago
- KV cache compression for high-throughput LLM inference ☆156 · Feb 5, 2025 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆384 · Nov 20, 2025 · Updated 4 months ago
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆57 · Mar 26, 2024 · Updated 2 years ago
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆253 · Dec 16, 2024 · Updated last year
- Towards Memorization-Free Diffusion Models (CVPR 2024) codebase ☆11 · Jun 2, 2024 · Updated last year
- Code for the NeurIPS 2024 paper: QuaRot, end-to-end 4-bit inference of large language models. ☆503 · Nov 26, 2024 · Updated last year
- Implementations of several LLM KV cache sparsity methods ☆40 · Jun 6, 2024 · Updated last year
- ☆42 · Mar 28, 2024 · Updated 2 years ago
- [OSDI 2025] DecDEC: A Systems Approach to Advancing Low-Bit LLM Quantization ☆23 · Jan 29, 2026 · Updated 2 months ago
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆34 · Nov 29, 2024 · Updated last year
- Official implementation of StochSync: a zero-shot approach for image generation in arbitrary spaces via stochastic diffusion synchronizat… ☆21 · Jun 24, 2025 · Updated 9 months ago
- Official repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and … ☆37 · Aug 29, 2025 · Updated 7 months ago
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops ☆30 · Mar 16, 2024 · Updated 2 years ago
- A minimal implementation of spotify/annoy in pure Rust ☆11 · Mar 2, 2023 · Updated 3 years ago
- ☆19 · Jan 26, 2025 · Updated last year