ChengZhang-98 / llm-mixed-q
Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?"
☆24 · Updated 2 years ago
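For context, here is a minimal sketch of the block-based quantisation idea the paper revisits: values are grouped into small blocks that each share one scaling factor, so sub-8-bit integers retain usable local dynamic range. This is an illustrative NumPy sketch under assumed defaults (block size 16, 4-bit signed integers); `block_quantise` and `block_dequantise` are hypothetical names, not the repo's actual API.

```python
# Illustrative sketch of block-based quantisation -- NOT llm-mixed-q's API.
# Each block of values shares one float scale, so a 4-bit integer grid can
# track the local dynamic range of the tensor.
import numpy as np

def block_quantise(x, block_size=16, n_bits=4):
    pad = (-x.size) % block_size                       # pad so blocks divide evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    qmax = 2 ** (n_bits - 1) - 1                       # 7 for 4-bit signed
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                          # guard all-zero blocks
    q = np.clip(np.round(blocks / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def block_dequantise(q, scales, orig_size):
    return (q.astype(np.float32) * scales).reshape(-1)[:orig_size]

x = np.random.randn(1000).astype(np.float32)
q, s = block_quantise(x)
print("max abs error:", np.abs(x - block_dequantise(q, s, x.size)).max())
```

The per-block scale is the key trade-off the paper studies: smaller blocks track outliers better but spend more bits on scaling metadata.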
Alternatives and similar repositories for llm-mixed-q
Users interested in llm-mixed-q are comparing it to the repositories listed below.
- ☆32 · Updated last week
- ☆112 · Updated 2 years ago
- Implementation of Microscaling data formats in SystemVerilog (a sketch of the MX idea follows this list) ☆28 · Updated 5 months ago
- Simulator for BitFusion ☆102 · Updated 5 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆24 · Updated last year
- ☆28 · Updated last month
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆30 · Updated last year
- A co-design architecture for sparse attention ☆54 · Updated 4 years ago
- ☆47 · Updated 4 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆42 · Updated 4 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆115 · Updated last year
- MICRO'22 artifact evaluation for Sparseloop ☆44 · Updated 3 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆124 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- ☆35 · Updated 5 years ago
- Torch2Chip (MLSys 2024) ☆55 · Updated 8 months ago
- ☆74 · Updated 2 months ago
- ☆42 · Updated last year
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆42 · Updated 2 years ago
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆77 · Updated 7 months ago
- Adaptive floating-point-based numerical format for resilient deep learning ☆14 · Updated 3 years ago
- RTL implementation of Flex-DPE ☆115 · Updated 5 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆54 · Updated 2 years ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆28 · Updated last year
- [ECCV 2024] CLAMP-ViT: Contrastive Data-Free Learning for Adaptive Post-Training Quantization of ViTs ☆15 · Updated last year
- Models LLM inference on single-core dataflow accelerators ☆16 · Updated last week
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators ☆45 · Updated last year
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads ☆64 · Updated 5 months ago
- An open-source parameterizable NPU generator with a full-stack multi-target compilation stack for intelligent workloads ☆69 · Updated 2 months ago
- [TRETS 2025][FPGA 2024] FPGA Accelerator for Imbalanced SpMV using HLS ☆17 · Updated 3 months ago
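The Microscaling entry in the list above refers to the OCP MX block formats, in which each block of elements shares a single power-of-two scale stored as an 8-bit exponent. Below is a rough NumPy sketch of that shared-exponent idea; the listed repo implements the formats in SystemVerilog, and the block size, element width, and integer element grid here are illustrative assumptions (real MX elements are narrow floats such as FP4/FP6/FP8).

```python
# Rough sketch of the Microscaling (MX) shared-scale idea -- illustrative only.
# A block shares one power-of-two scale (cf. MX's E8M0 scale); int4 values
# stand in for the narrow float element types (FP4/FP6/FP8) real MX uses.
import numpy as np

def mx_style_quantise(x, block_size=32, elem_bits=4):
    pad = (-x.size) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    qmax = 2 ** (elem_bits - 1) - 1
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    # Smallest power-of-two scale that keeps every element in range.
    exp = np.ceil(np.log2(np.maximum(amax, 2.0 ** -126) / qmax))
    q = np.clip(np.round(blocks / 2.0 ** exp), -qmax - 1, qmax).astype(np.int8)
    return q, exp.astype(np.int32)     # per-element ints, per-block exponents

x = np.random.randn(256).astype(np.float32)
q, e = mx_style_quantise(x)
x_hat = (q.astype(np.float32) * 2.0 ** e).reshape(-1)[:x.size]
print("max abs error:", np.abs(x - x_hat).max())
```

Restricting the scale to a power of two makes the hardware a shifter instead of a multiplier, which is why MX-style formats map well onto RTL like the SystemVerilog implementation listed above.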