clevercool / ANT-Quantization
☆98 · Updated last year
Alternatives and similar repositories for ANT-Quantization
Users interested in ANT-Quantization are comparing it to the libraries listed below
- ☆41 · Updated 5 months ago
- ☆27 · Updated this week
- A co-design architecture on sparse attention ☆52 · Updated 3 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆14 · Updated 11 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆93 · Updated 9 months ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆83 · Updated 11 months ago
- MICRO'22 artifact evaluation for Sparseloop ☆43 · Updated 2 years ago
- Simulator for BitFusion ☆100 · Updated 4 years ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆27 · Updated last year
- ☆45 · Updated 3 years ago
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" ☆80 · Updated last month
- ☆148 · Updated 11 months ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆107 · Updated last year
- The framework for the paper "Inter-layer Scheduling Space Definition and Exploration for Tiled Accelerators" in ISCA 2023 ☆67 · Updated 2 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆43 · Updated last year
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators ☆36 · Updated last year
- ☆69 · Updated 11 months ago
- ☆34 · Updated 4 years ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- RTL implementation of Flex-DPE ☆100 · Updated 5 years ago
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆22 · Updated last year
- PALM: An Efficient Performance Simulator for Tiled Accelerators with Large-scale Model Training ☆16 · Updated 11 months ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆118 · Updated 3 months ago
- An analytical framework that models hardware dataflow of tensor applications on spatial architectures using the relation-centric notation… ☆85 · Updated last year
- Open source RTL implementation of Tensor Core, Sparse Tensor Core, BitWave and SparSynergy in the article: "SparSynergy: Unlocking Flexib… ☆16 · Updated 2 months ago
- Serpens is an HBM FPGA accelerator for SpMV ☆19 · Updated 10 months ago
- ☆51 · Updated last year
- Linux docker for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆52 · Updated last month
- UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating those layers/functions with a given hardware profile ☆29 · Updated this week
- The codes and artifacts associated with our MICRO'22 paper titled: "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware … ☆135 · Updated 2 years ago