Alternatives and similar repositories for lut-gemm
☆82 · Updated Apr 1, 2024
Users interested in lut-gemm are comparing it to the repositories listed below.
- ☆20 · Updated Sep 28, 2024
- A simulator for the SK hynix AiM PIM architecture, based on Ramulator 2.0 — ☆63 · Updated Jul 22, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… — ☆822 · Updated Mar 6, 2025
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization — ☆112 · Updated Oct 15, 2024
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs — ☆63 · Updated Mar 25, 2025
- ☆39 · Updated Mar 14, 2024
- The official implementation of the DAC 2024 paper GQA-LUT — ☆22 · Updated Dec 20, 2024
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs — ☆390 · Updated Apr 13, 2025
- ☆119 · Updated Nov 17, 2023
- BitBLAS: a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment — ☆758 · Updated Aug 6, 2025
- Low-bit LLM inference on CPU/NPU with lookup tables — ☆944 · Updated Jun 5, 2025
- ☆64 · Updated Oct 17, 2023
- (Unofficial) LaTeX template for bachelor's theses in the Department of Electrical and Computer Engineering at Seoul National University — ☆19 · Updated Jun 21, 2021
- Source code for Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs — ☆44 · Updated Aug 14, 2024
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference — ☆380 · Updated Jul 10, 2025
- Canvas: End-to-End Kernel Architecture Search in Neural Networks — ☆27 · Updated Nov 18, 2024
- Tutorials on extending and importing TVM as a CMake include dependency — ☆16 · Updated Oct 11, 2024
- ViTALiTy (HPCA'23) code repository — ☆23 · Updated Mar 13, 2023
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding — ☆145 · Updated Dec 4, 2024
- Awesome LLM compression research papers and tools — ☆1,796 · Updated Feb 23, 2026
- FP16×INT4 LLM inference kernel achieving near-ideal ~4× speedups up to medium batch sizes of 16–32 tokens — ☆1,048 · Updated Sep 4, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models — ☆1,631 · Updated Jul 12, 2024
- ☆21 · Updated Oct 2, 2024
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization — ☆715 · Updated Aug 13, 2024
- Residual vector quantization for KV cache compression in large language models — ☆12 · Updated Oct 22, 2024
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration — ☆3,488 · Updated Jul 17, 2025
- Memory Simulator and Optimizer — ☆22 · Updated Oct 23, 2019
- Heterogeneous ML accelerator — ☆20 · Updated May 5, 2025
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models — ☆336 · Updated Nov 26, 2025
- PyTorch implementation of Language model compression with weighted low-rank factorization — ☆13 · Updated Jun 28, 2023
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization — ☆36 · Updated Feb 21, 2024
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization — ☆418 · Updated Aug 13, 2024
- Vortex: A Flexible and Efficient Sparse Attention Framework — ☆50 · Updated this week
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores with the WMMA API and MMA PTX instruct… — ☆535 · Updated Sep 8, 2024
- A family of efficient edge language models in the 100M–1B parameter range — ☆19 · Updated Feb 14, 2025
- The top-level repository for the Accel-Sim framework — ☆586 · Updated Mar 24, 2026
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache — ☆381 · Updated Nov 20, 2025
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation — ☆157 · Updated Mar 21, 2025
- Standalone Flash Attention v2 kernel without a libtorch dependency — ☆113 · Updated Sep 10, 2024