naver-aics / lut-gemm
☆83 · Apr 1, 2024 · Updated last year
Alternatives and similar repositories for lut-gemm
Users interested in lut-gemm are comparing it to the libraries listed below.
- ☆20 · Sep 28, 2024 · Updated last year
- A simulator for SK hynix AiM PIM architecture based on Ramulator 2.0 ☆56 · Jul 22, 2025 · Updated 6 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆812 · Mar 6, 2025 · Updated 11 months ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Dec 20, 2024 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆61 · Mar 25, 2025 · Updated 10 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆112 · Oct 15, 2024 · Updated last year
- ☆38 · Mar 14, 2024 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆388 · Apr 13, 2025 · Updated 10 months ago
- Canvas: End-to-End Kernel Architecture Search in Neural Networks ☆27 · Nov 18, 2024 · Updated last year
- Residual vector quantization for KV cache compression in large language model ☆11 · Oct 22, 2024 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆372 · Jul 10, 2025 · Updated 7 months ago
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆44 · Aug 14, 2024 · Updated last year
- ☆113 · Nov 17, 2023 · Updated 2 years ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆752 · Aug 6, 2025 · Updated 6 months ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆919 · Jun 5, 2025 · Updated 8 months ago
- Artifact for paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference", ASPLOS 2025 ☆124 · May 3, 2025 · Updated 9 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆142 · Dec 4, 2024 · Updated last year
- ☆18 · Oct 2, 2024 · Updated last year
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batchsizes of 16-32 tokens. ☆1,011 · Sep 4, 2024 · Updated last year
- ☆63 · Oct 17, 2023 · Updated 2 years ago
- ☆16 · Jul 24, 2023 · Updated 2 years ago
- Tutorials of Extending and importing TVM with CMAKE Include dependency. ☆16 · Oct 11, 2024 · Updated last year
- PyTorch implementation of Language model compression with weighted low-rank factorization ☆13 · Jun 28, 2023 · Updated 2 years ago
- ☆158 · Feb 15, 2025 · Updated last year
- Awesome LLM compression research papers and tools. ☆1,776 · Nov 10, 2025 · Updated 3 months ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆46 · Jan 21, 2026 · Updated 3 weeks ago
- ☆15 · Jun 4, 2024 · Updated last year
- A family of efficient edge language models in 100M~1B sizes. ☆19 · Feb 14, 2025 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,607 · Jul 12, 2024 · Updated last year
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆36 · Feb 21, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Sep 10, 2024 · Updated last year
- Whisper in TensorRT-LLM ☆17 · Sep 21, 2023 · Updated 2 years ago
- Heterogenous ML accelerator ☆20 · May 5, 2025 · Updated 9 months ago
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs". ☆16 · Sep 15, 2024 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,436 · Jul 17, 2025 · Updated 6 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆404 · Aug 13, 2024 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆713 · Aug 13, 2024 · Updated last year
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆24 · Oct 5, 2025 · Updated 4 months ago
- ☆17 · Dec 9, 2022 · Updated 3 years ago