IBM / analog-nas
Analog AI Neural Architecture Search (analog-nas) is a modular, flexible framework that facilitates the implementation of analog-aware Neural Architecture Search.
☆50 · Updated 2 months ago
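As a rough illustration of what "analog-aware" means here, the sketch below converts a candidate PyTorch network into simulated analog layers with the IBM Analog Hardware Acceleration Kit (aihwkit, also listed among the alternatives below) and scores it under analog non-idealities. The candidate architecture and input shapes are made up for the example, and this is not analog-nas's own API.

```python
# Minimal sketch of analog-aware candidate evaluation, assuming aihwkit is
# installed (pip install aihwkit). The toy architecture and shapes are
# illustrative only.
import torch
from torch import nn

from aihwkit.nn.conversion import convert_to_analog
from aihwkit.simulator.configs import InferenceRPUConfig

# Hypothetical candidate drawn from a NAS search space.
candidate = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Map the digital Linear layers onto simulated analog crossbar tiles.
analog_model = convert_to_analog(candidate, InferenceRPUConfig())

# Forward pass with analog non-idealities applied (dummy batch); an
# analog-aware NAS loop would rank candidates by accuracy measured this way
# rather than by their floating-point accuracy.
analog_model.eval()
with torch.no_grad():
    logits = analog_model(torch.randn(32, 784))
print(logits.shape)  # torch.Size([32, 10])
```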
Alternatives and similar repositories for analog-nas
Users interested in analog-nas are comparing it to the libraries listed below.
- Torch2Chip (MLSys, 2024) · ☆54 · Updated 7 months ago
- IBM Analog Hardware Acceleration Kit · ☆437 · Updated last week
- ☆40 · Updated last year
- Simulator for LLM inference on an abstract 3D AIMC-based accelerator · ☆25 · Updated 2 months ago
- CrossSim: accuracy simulation of analog in-memory computing · ☆185 · Updated 7 months ago
- A Simulation Framework for Memristive Deep Learning Systems · ☆171 · Updated last year
- Code for Edge Learning Using a Fully Integrated Neuro-Inspired Memristor Chip · ☆17 · Updated 2 years ago
- ☆72 · Updated last month
- ReckOn: A Spiking RNN Processor Enabling On-Chip Learning over Second-Long Timescales - HDL source code and documentation · ☆91 · Updated 3 years ago
- Verilog and Python drivers and APIs for the NeuRRAM 48-core chip · ☆42 · Updated 3 years ago
- The official implementation of the HPCA 2025 paper "Prosperity: Accelerating Spiking Neural Networks via Product Sparsity" · ☆37 · Updated 3 months ago
- Stochastic Computing for Deep Neural Networks · ☆33 · Updated 4 years ago
- Benchmark framework of compute-in-memory based accelerators for deep neural networks (on-chip training chip focused) · ☆54 · Updated 4 years ago
- High Granularity Quantization for Ultra-Fast Machine Learning Applications on FPGAs · ☆37 · Updated 4 months ago
- Scalable HW-Aware Training for Analog In-Memory Computing · ☆33 · Updated this week
- Benchmark harness and baseline results for the NeuroBench algorithm track · ☆100 · Updated 3 months ago
- Benchmark framework of compute-in-memory based accelerators for deep neural networks (on-chip training chip focused) · ☆172 · Updated last year
- Central repository for all NeuroSim versions. Each version is uploaded in a separate branch; updates to the versions will be reflected here · ☆86 · Updated 3 weeks ago
- Resource Utilization and Latency Estimation for ML on FPGA · ☆17 · Updated 2 months ago
- From PyTorch model to C++ for Vitis HLS · ☆18 · Updated last month
- Benchmark framework of compute-in-memory based accelerators for deep neural networks (inference engine focused) · ☆74 · Updated last year
- Quantization-aware training with spiking neural networks · ☆48 · Updated 3 years ago
- [FPL 2021] SyncNN: Evaluating and Accelerating Spiking Neural Networks on FPGAs · ☆62 · Updated 4 years ago
- Benchmark framework of compute-in-memory based accelerators for deep neural networks (inference engine focused) · ☆76 · Updated 8 months ago
- Repository collecting papers about neuromorphic hardware, such as ASIC and FPGA implementations of SNNs · ☆192 · Updated 2 years ago
- Floating-Point Optimized On-Device Learning Library for the PULP Platform · ☆37 · Updated 3 weeks ago
- Models and training scripts for "LSTMs for Keyword Spotting with ReRAM-based Compute-In-Memory Architectures" (ISCAS 2021) · ☆16 · Updated 4 years ago
- Curated content for DNN approximation, acceleration ... with a focus on hardware accelerators and deployment · ☆27 · Updated last year
- Benchmark framework of compute-in-memory based accelerators for deep neural networks · ☆45 · Updated 5 years ago
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functions · ☆54 · Updated last year