snu-comparch / Tender
Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24)
☆14, updated 10 months ago
Alternatives and similar repositories for Tender:
Users interested in Tender are comparing it to the repositories listed below.
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences (☆26, updated last year)
- A co-design architecture for sparse attention (☆52, updated 3 years ago)
- MICRO'22 artifact evaluation for Sparseloop (☆43, updated 2 years ago)
- ViTALiTy (HPCA'23) code repository (☆22, updated 2 years ago)
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing (☆80, updated 10 months ago)
- Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" (☆19, updated last year)
- UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating them with a given hardware profile (☆27, updated last week)
- mNPUsim: A Cycle-accurate Multi-core NPU Simulator (IISWC 2023) (☆53, updated 4 months ago)
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning (☆86, updated 8 months ago)
- Implementation of Microscaling data formats in SystemVerilog (☆17, updated 8 months ago)
- [FPGA 2024] FPGA Accelerator for Imbalanced SpMV using HLS (☆12, updated 2 months ago)
- Artifact for the paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference", ASPLOS 2025 (☆52, updated this week)
- PALM: An Efficient Performance Simulator for Tiled Accelerators with Large-scale Model Training (☆16, updated 10 months ago)
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" (☆80, updated last week)
- DOSA: Differentiable Model-Based One-Loop Search for DNN Accelerators (☆13, updated 6 months ago)
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System (☆44, updated last year)
- LLM Inference with Microscaling Format (☆22, updated 5 months ago)
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization (☆29, updated last year)
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation stack for intelligent workloads (☆51, updated last month)
- Serpens is an HBM FPGA accelerator for SpMV (☆18, updated 9 months ago)
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators (☆30, updated 11 months ago)