SeoLabCornell / torch2chip
Torch2Chip (MLSys, 2024)
☆54 · Updated 6 months ago
Alternatives and similar repositories for torch2chip
Users interested in torch2chip are comparing it to the libraries listed below.
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆106 · Updated last year
- ☆66 · Updated this week
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆20 · Updated last year
- ☆51 · Updated 2 months ago
- ☆31 · Updated this week
- Machine-Learning Accelerator System Exploration Tools ☆178 · Updated this week
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆117 · Updated 2 years ago
- An open-source, parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆66 · Updated last week
- ☆35 · Updated 5 years ago
- ☆112 · Updated last year
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆65 · Updated 5 months ago
- The official implementation of the HPCA 2025 paper Prosperity: Accelerating Spiking Neural Networks via Product Sparsity ☆36 · Updated last month
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆84 · Updated last year
- Linux docker for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆57 · Updated 5 months ago
- ☆48 · Updated 4 years ago
- Implementation of Microscaling data formats in SystemVerilog. ☆24 · Updated 3 months ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆33 · Updated last week
- Adaptive floating-point based numerical format for resilient deep learning ☆14 · Updated 3 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆51 · Updated last year
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆58 · Updated 3 months ago
- ViTALiTy (HPCA'23) Code Repository ☆23 · Updated 2 years ago
- Codebase for ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆27 · Updated last year
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆26 · Updated last year
- ☆30 · Updated 6 months ago
- Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆23 · Updated last year
- ☆40 · Updated last year
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆149 · Updated 7 months ago
- A survey on Hardware Accelerated LLMs ☆59 · Updated 8 months ago
- MICRO22 artifact evaluation for Sparseloop ☆44 · Updated 3 years ago
- ACM TODAES Best Paper Award, 2022 ☆28 · Updated last year