SamsungLabs / PMPD
Codebase for the Progressive Mixed-Precision Decoding paper.
☆19 · Updated 6 months ago
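For context, the paper's core idea is to lower weight precision progressively as autoregressive decoding proceeds, on the premise that later tokens tolerate noisier weights. The toy sketch below illustrates that kind of schedule in plain PyTorch; the step thresholds, bit-widths, and the `fake_quantize` helper are illustrative assumptions, not code from this repository.

```python
import torch

# Hypothetical precision schedule: reduce weight bit-width as decoding
# proceeds. The thresholds and bit-widths here are made up for illustration.
def precision_for_step(step: int) -> int:
    if step < 16:
        return 4   # early tokens: higher-precision weights
    return 2       # later tokens: drop to lower-precision weights

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric per-tensor fake quantization to `bits` bits (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

# Toy greedy decode loop over a single linear "model": at each step, pick the
# weight precision from the schedule and run the forward pass with the
# correspondingly quantized weights.
torch.manual_seed(0)
w = torch.randn(32, 32)          # stand-in for an LLM weight matrix
x = torch.randn(32)              # stand-in for the current hidden state
for step in range(32):
    bits = precision_for_step(step)
    logits = fake_quantize(w, bits) @ x
    x = torch.tanh(logits)       # stand-in for the rest of the decode step
```

In a real system, each precision level would presumably map to a pre-quantized copy of the weights or to mixed-precision kernels, rather than quantizing on the fly as this toy loop does.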
Alternatives and similar repositories for PMPD
Users interested in PMPD are comparing it to the libraries listed below.
- ☆113 · Updated 2 years ago
- Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆24 · Updated 2 years ago
- ☆35 · Updated last month
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆24 · Updated last year
- Simulator for BitFusion ☆102 · Updated 5 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆42 · Updated 5 years ago
- ☆224 · Updated 3 months ago
- ☆35 · Updated 5 years ago
- ☆19 · Updated 4 years ago
- A repository of Binary General Matrix Multiply (BGEMM) via customized CUDA kernels, building on FP6-LLM. ☆18 · Updated last year
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20)☆27Updated 2 years ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference☆184Updated last month
- GoldenEye is a functional simulator with fault injection capabilities for common and emerging numerical formats, implemented for the PyTorch deep learning framework ☆27 · Updated last year
- Static Block Floating Point Quantization for CNN ☆37 · Updated 4 years ago
- ☆83 · Updated last year
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆122 · Updated last year
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆108 · Updated last year
- Training with Block Minifloat number representation ☆18 · Updated 4 years ago
- ☆30 · Updated 3 months ago
- Code Repository of Evaluating Quantized Large Language Models ☆136 · Updated last year
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆31 · Updated last year
- ☆48 · Updated 4 years ago
- Implementation of Microscaling data formats in SystemVerilog. ☆29 · Updated 7 months ago
- ☆170 · Updated 2 years ago
- A co-design architecture on sparse attention ☆55 · Updated 4 years ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models ☆49 · Updated 3 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆127 · Updated 2 years ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆26 · Updated 11 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆56 · Updated 2 years ago
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆87 · Updated 9 months ago