intel / npu-nn-cost-model
Library for modelling performance costs of different Neural Network workloads on NPU devices
☆35 · Updated last month
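To give a rough sense of what a cost model like this estimates, the sketch below computes a simple roofline-style lower bound on cycles for a single convolution workload. It is purely illustrative and does not use the npu-nn-cost-model API; the workload fields and the hardware figures (`peak_macs_per_cycle`, `dram_bytes_per_cycle`) are assumed values, not actual NPU parameters.

```python
# Minimal, illustrative roofline-style NPU cost estimate.
# NOT the npu-nn-cost-model API; all hardware figures are hypothetical.
from dataclasses import dataclass


@dataclass
class Conv2DWorkload:
    # Hypothetical workload description: batch, channels, output spatial size, kernel size.
    n: int
    c_in: int
    c_out: int
    h: int
    w: int
    k: int


def estimate_cycles(wl: Conv2DWorkload,
                    peak_macs_per_cycle: float = 2048.0,  # assumed MAC throughput
                    dram_bytes_per_cycle: float = 64.0,   # assumed memory bandwidth
                    bytes_per_elem: int = 1) -> float:
    """Return a lower-bound cycle estimate: max of compute-bound and memory-bound time."""
    macs = wl.n * wl.c_out * wl.h * wl.w * wl.c_in * wl.k * wl.k
    traffic = bytes_per_elem * (
        wl.n * wl.c_in * wl.h * wl.w          # input activations
        + wl.c_out * wl.c_in * wl.k * wl.k    # weights
        + wl.n * wl.c_out * wl.h * wl.w       # output activations
    )
    compute_cycles = macs / peak_macs_per_cycle
    memory_cycles = traffic / dram_bytes_per_cycle
    return max(compute_cycles, memory_cycles)


if __name__ == "__main__":
    wl = Conv2DWorkload(n=1, c_in=64, c_out=128, h=56, w=56, k=3)
    print(f"estimated cycles: {estimate_cycles(wl):,.0f}")
```

Real cost models (including learned ones) go well beyond this kind of analytical bound, but the query shape is similar: describe a workload, get back a predicted cost for a target device.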
Alternatives and similar repositories for npu-nn-cost-model
Users interested in npu-nn-cost-model are comparing it to the libraries listed below
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆87 · Updated 2 years ago
- ☆107 · Updated this week
- The Riallto Open Source Project from AMD ☆83 · Updated 5 months ago
- ☆32 · Updated 2 years ago
- IREE plugin repository for the AMD AIE accelerator ☆103 · Updated last week
- ☆46 · Updated 5 years ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆149 · Updated 7 months ago
- A Toy-Purpose TPU Simulator ☆19 · Updated last year
- News and Paper Collections for Machine Learning Hardware ☆22 · Updated last year
- Ventus GPGPU ISA Simulator Based on Spike ☆46 · Updated last week
- A repository that complements gpgpu-sim, providing automated regression scripts, simulation launching utilities and the code + arguments … ☆75 · Updated 5 years ago
- Example for running IREE in a bare-metal Arm environment. ☆40 · Updated last month
- FRAME: Fast Roofline Analytical Modeling and Estimation ☆38 · Updated last year
- Fork of LLVM to support AMD AIEngine processors ☆164 · Updated this week
- HeteroCL-MLIR dialect for accelerator design ☆41 · Updated last year
- muRISCV-NN is a collection of efficient deep learning kernels for embedded platforms and microcontrollers. ☆86 · Updated last month
- ☆16 · Updated 5 years ago
- ☆32 · Updated last week
- ☆60 · Updated 2 years ago
- ARIES: An Agile MLIR-Based Compilation Flow for Reconfigurable Devices with AI Engines (FPGA 2025 Best Paper Nominee) ☆47 · Updated last week
- ☆31 · Updated 10 months ago
- ☆35 · Updated 5 months ago
- LLVM OpenCL C compiler suite for ventus GPGPU ☆55 · Updated this week
- Learn NVDLA by SOMNIA ☆43 · Updated 5 years ago
- Nebula: Deep Neural Network Benchmarks in C++ ☆13 · Updated 8 months ago
- agile hardware-software co-design ☆51 · Updated 3 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆52 · Updated last year
- A scalable High-Level Synthesis framework on MLIR ☆275 · Updated last year
- Tool for the deployment and analysis of TinyML applications on TFLM and MicroTVM backends ☆35 · Updated this week
- ☆46 · Updated 3 months ago