ChengZhang-98 / llm-mixed-q
Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?"
☆19 · Updated last year
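Block-based quantisation, the technique the paper revisits, shares one scale factor across a small block of values, so each element can be stored in a few bits while the per-block scale preserves dynamic range. A minimal NumPy sketch of the idea, assuming symmetric absmax scaling (the function names, block size, and bit-width defaults below are illustrative assumptions, not llm-mixed-q's actual API):

```python
# Illustrative sketch of block-based integer quantisation.
# Not the repo's API: block_quantise/block_dequantise are hypothetical names.
import numpy as np

def block_quantise(x: np.ndarray, block_size: int = 16, bits: int = 4):
    """Quantise a 1-D tensor in blocks, each block sharing one scale factor."""
    pad = (-len(x)) % block_size                  # pad to a whole number of blocks
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    qmax = 2 ** (bits - 1) - 1                    # symmetric signed range, e.g. +/-7 for 4-bit
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                     # avoid division by zero in all-zero blocks
    q = np.clip(np.round(blocks / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales, pad

def block_dequantise(q: np.ndarray, scales: np.ndarray, pad: int) -> np.ndarray:
    """Reconstruct the float tensor from block-quantised values and scales."""
    x = (q.astype(np.float32) * scales).reshape(-1)
    return x[: len(x) - pad] if pad else x

w = np.random.randn(100).astype(np.float32)
q, s, pad = block_quantise(w, block_size=16, bits=4)
w_hat = block_dequantise(q, s, pad)
print("max abs error:", np.abs(w - w_hat).max())
```

A mixed-precision search, as studied in the paper, would then choose the bit-width and block size per layer or per block rather than using one global setting.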
Alternatives and similar repositories for llm-mixed-q:
Users interested in llm-mixed-q are comparing it to the libraries listed below.
- Implementation of Microscaling data formats in SystemVerilog. ☆15 · Updated 6 months ago
- ☆23 · Updated this week
- A co-designed architecture for sparse attention ☆50 · Updated 3 years ago
- ☆23 · Updated 3 months ago
- ☆93 · Updated last year
- MICRO'22 artifact evaluation for Sparseloop ☆43 · Updated 2 years ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆25 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆13 · Updated 8 months ago
- ☆26 · Updated 3 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆38 · Updated last year
- ☆43 · Updated 3 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆83 · Updated 6 months ago
- ☆39 · Updated 8 months ago
- Simulator for BitFusion ☆97 · Updated 4 years ago
- ☆13 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆33 · Updated 3 years ago
- Open-source release of the MSD framework ☆16 · Updated last year
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆50 · Updated 2 weeks ago
- RTL implementation of Flex-DPE. ☆98 · Updated 5 years ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆37 · Updated 2 years ago
- ViTALiTy (HPCA'23) Code Repository ☆21 · Updated 2 years ago
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators ☆29 · Updated 10 months ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆48 · Updated last month
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆22 · Updated last year
- ☆27 · Updated 2 years ago
- ☆32 · Updated 4 years ago
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation stack for intelligent workloads. ☆48 · Updated last week
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- Collection of kernel accelerators optimised for LLM execution ☆16 · Updated last week