SeoLabCornell / torch2chip
Torch2Chip (MLSys, 2024)
☆51 · Updated this week
Alternatives and similar repositories for torch2chip:
Users interested in torch2chip are comparing it to the libraries listed below.
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆83 · Updated 6 months ago
- ☆53 · Updated last week
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆48 · Updated last week
- ☆32 · Updated 4 years ago
- The codes and artifacts associated with our MICRO'22 paper titled: "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆123 · Updated last year
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆38 · Updated last year
- ☆43 · Updated 3 years ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆48 · Updated last month
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆50 · Updated 2 weeks ago
- ☆23 · Updated 3 months ago
- ☆23 · Updated this week
- Implementation of Microscaling data formats in SystemVerilog. ☆15 · Updated 6 months ago
- Adaptive floating-point based numerical format for resilient deep learning ☆14 · Updated 2 years ago
- ☆26 · Updated 3 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆13 · Updated 8 months ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆15 · Updated 3 years ago
- ☆93 · Updated last year
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆37 · Updated 2 years ago
- ViTALiTy (HPCA'23) Code Repository ☆21 · Updated 2 years ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆30 · Updated this week
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆77 · Updated 7 months ago
- A survey on Hardware Accelerated LLMs ☆49 · Updated 2 months ago
- A co-design architecture on sparse attention ☆50 · Updated 3 years ago
- ☆39 · Updated 8 months ago
- Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆19 · Updated last year
- ☆23 · Updated 2 years ago
- A Spatial Accelerator Generation Framework for Tensor Algebra. ☆55 · Updated 3 years ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆13 · Updated 2 years ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆25 · Updated last year