☆82 · Updated Apr 1, 2024
Alternatives and similar repositories for lut-gemm
Users interested in lut-gemm are comparing it to the libraries listed below.
- ☆20 · Updated Sep 28, 2024
- A simulator for SK hynix AiM PIM architecture based on Ramulator 2.0 · ☆64 · Updated Jul 22, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆834 · Updated Mar 6, 2025
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization · ☆113 · Updated Oct 15, 2024
- [OSDI 2025] DecDEC: A Systems Approach to Advancing Low-Bit LLM Quantization · ☆23 · Updated Jan 29, 2026
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ☆64 · Updated Mar 25, 2025
- ☆39 · Updated Mar 14, 2024
- The official implementation of the DAC 2024 paper GQA-LUT · ☆22 · Updated Dec 20, 2024
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs · ☆391 · Updated Apr 13, 2025
- ☆120 · Updated Nov 17, 2023
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. · ☆762 · Updated Aug 6, 2025
- Low-bit LLM inference on CPU/NPU with lookup table · ☆953 · Updated Jun 5, 2025
- ☆64 · Updated Oct 17, 2023
- An unofficial LaTeX template for bachelor's theses of the Department of Electrical and Computer Engineering, Seoul National University