SamsungLabs / PMPD
Codebase for the Progressive Mixed-Precision Decoding paper.
☆19 · Updated 6 months ago
Alternatives and similar repositories for PMPD
Users interested in PMPD are comparing it to the libraries listed below.
- ☆113 · Updated 2 years ago
- Simulator for BitFusion ☆102 · Updated 5 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆24 · Updated last year
- ☆35 · Updated last month
- Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆24 · Updated 2 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆42 · Updated 5 years ago
- ☆223 · Updated 3 months ago
- Training with Block Minifloat number representation ☆18 · Updated 4 years ago
- ☆30 · Updated 3 months ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆31 · Updated last year
- ☆35 · Updated 5 years ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆109 · Updated last year
- ☆15 · Updated last year
- ☆48 · Updated 4 years ago
- ☆84 · Updated last year
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆126 · Updated 2 years ago
- [ECCV 2024] CLAMP-ViT: Contrastive Data-Free Learning for Adaptive Post-Training Quantization of ViTs ☆18 · Updated last year
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆25 · Updated 3 years ago
- ☆19 · Updated 4 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆122 · Updated last year
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆55 · Updated 2 years ago
- ☆32 · Updated 4 years ago
- A co-design architecture on sparse attention ☆55 · Updated 4 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆16 · Updated 4 years ago
- ☆18 · Updated 4 months ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆185 · Updated 3 weeks ago
- ☆10 · Updated 10 months ago
- Linux docker for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆62 · Updated 3 months ago
- Neural Network Quantization With Fractional Bit-widths ☆11 · Updated 4 years ago
- MICRO22 artifact evaluation for Sparseloop ☆46 · Updated 3 years ago