Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24)
☆25 · Updated Jul 4, 2024
Alternatives and similar repositories for Tender
Users interested in Tender are comparing it to the repositories listed below.
- ☆115 · Updated Nov 17, 2023
- Torch2Chip (MLSys, 2024) · ☆55 · Updated Apr 2, 2025
- ☆58 · Updated May 4, 2024
- MICRO 2024 Evaluation Artifact for FuseMax · ☆16 · Updated Aug 26, 2024
- ☆35 · Updated Dec 22, 2025
- This repository presents the source code for the paper "MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Qu… · ☆23 · Updated Apr 2, 2025
- ☆15 · Updated Mar 18, 2025
- ☆14 · Updated Jun 4, 2024
- mNPUsim: A Cycle-accurate Multi-core NPU Simulator (IISWC 2023) · ☆71 · Updated Dec 29, 2025
- eyeriss-chisel3 · ☆40 · Updated May 2, 2022
- Simulator for BitFusion · ☆101 · Updated Aug 6, 2020
- ☆20 · Updated Feb 10, 2025
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022. · ☆84 · Updated Nov 7, 2021
- ☆34 · Updated Aug 27, 2025
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference · ☆20 · Updated Jan 24, 2025
- ☆143 · Updated Jul 19, 2025
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference · ☆54 · Updated Mar 24, 2024
- A hobby project in SystemVerilog to accelerate the LeViT network, which contains CNN and attention layers · ☆33 · Updated Aug 13, 2024
- FireQ: Fast INT4-FP8 Kernel and RoPE-aware Quantization for LLM Inference Acceleration · ☆20 · Updated Jun 27, 2025
- ViTALiTy (HPCA'23) Code Repository · ☆23 · Updated Mar 13, 2023
- LLMServingSim 2.0: A Unified Simulator for Heterogeneous and Disaggregated LLM Serving Infrastructure · ☆177 · Updated this week
- Open source RTL implementation of Tensor Core, Sparse Tensor Core, BitWave and SparSynergy in the article: "SparSynergy: Unlocking Flexib… · ☆22 · Updated Mar 29, 2025
- ☆133 · Updated Jun 24, 2024
- Differentiable Weightless Neural Networks · ☆33 · Updated Feb 2, 2026
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) · ☆174 · Updated Jul 10, 2024
- ☆32 · Updated Nov 11, 2024
- LLM Inference with Microscaling Format · ☆34 · Updated Nov 12, 2024
- ☆38 · Updated Oct 21, 2025
- Processing-In-Memory (PIM) Simulator · ☆222 · Updated Dec 12, 2024
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design · ☆128 · Updated Jun 27, 2023
- [ICML 2024] Sparse Model Inversion: Efficient Inversion of Vision Transformers with Less Hallucination · ☆13 · Updated Apr 29, 2025
- Victima is a new software-transparent technique that greatly extends the address translation reach of modern processors by leveraging the… · ☆32 · Updated Oct 13, 2023
- A Cycle-level simulator for M2NDP · ☆34 · Updated Aug 14, 2025
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer · ☆30 · Updated Dec 6, 2023
- PyTorch implementation of the Bit-Flip based adversarial weight Attack (BFA) · ☆33 · Updated Jul 3, 2021
- Post-training sparsity-aware quantization · ☆34 · Updated Feb 26, 2023
- The SEAL-CPU backend is a reference backend engine for HEBench, a shared library that implements the required functions specified… · ☆11 · Updated Mar 3, 2023
- ☆43 · Updated Mar 31, 2025
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference · ☆89 · Updated Apr 26, 2025