Alternatives and similar repositories for lut-gemm
lut-gemm: ☆82, updated Apr 1, 2024
Users interested in lut-gemm are comparing it to the libraries listed below.
- ☆20, updated Sep 28, 2024
- A simulator for the SK hynix AiM PIM architecture based on Ramulator 2.0 (☆61, updated Jul 22, 2025)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆818, updated Mar 6, 2025)
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization (☆112, updated Oct 15, 2024)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs (☆62, updated Mar 25, 2025)
- ☆38, updated Mar 14, 2024
- The official implementation of the DAC 2024 paper GQA-LUT (☆21, updated Dec 20, 2024)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (☆389, updated Apr 13, 2025)
- ☆118, updated Nov 17, 2023
- BitBLAS, a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment (☆752, updated Aug 6, 2025)
- ☆63, updated Oct 17, 2023
- Low-bit LLM inference on CPU/NPU with lookup tables (☆932, updated Jun 5, 2025)
- Artifact for the paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference", ASPLOS 2025 (☆128, updated May 3, 2025)
- An (unofficial) LaTeX template for bachelor's theses in the Department of Electrical and Computer Engineering, Seoul National University (☆19, updated Jun 21, 2021)
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" (☆43, updated Aug 14, 2024)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆376, updated Jul 10, 2025)
- Canvas: End-to-End Kernel Architecture Search in Neural Networks (☆27, updated Nov 18, 2024)
- Tutorials on extending and importing TVM with a CMake include dependency (☆15, updated Oct 11, 2024)
- ViTALiTy (HPCA'23) code repository (☆23, updated Mar 13, 2023)
- ☆161, updated Feb 15, 2025
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆144, updated Dec 4, 2024)
- Awesome LLM compression research papers and tools (☆1,789, updated Feb 23, 2026)
- FP16×INT4 LLM inference kernel achieving near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆1,041, updated Sep 4, 2024)
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (☆1,621, updated Jul 12, 2024)
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization (☆713, updated Aug 13, 2024)
- Residual vector quantization for KV cache compression in large language models (☆12, updated Oct 22, 2024)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,463, updated Jul 17, 2025)
- Memory simulator and optimizer (☆22, updated Oct 23, 2019)
- Heterogeneous ML accelerator (☆20, updated May 5, 2025)
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models (☆330, updated Nov 26, 2025)
- PyTorch implementation of "Language Model Compression with Weighted Low-Rank Factorization" (☆13, updated Jun 28, 2023)
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization (☆36, updated Feb 21, 2024)
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (☆408, updated Aug 13, 2024)
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… (☆530, updated Sep 8, 2024)
- Vortex: A Flexible and Efficient Sparse Attention Framework (☆49, updated Jan 21, 2026)
- A family of efficient edge language models in 100M–1B sizes (☆19, updated Feb 14, 2025)
- The top-level repository for the Accel-Sim framework (☆579, updated Feb 15, 2026)
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2-bit Quantization for KV Cache (☆359, updated Nov 20, 2025)
- [ICCAD'22 TinyML Contest] Efficient Heart Stroke Detection on Low-cost Microcontrollers (☆15, updated Jan 12, 2023)