booniebears / CoMN
☆18 · Updated last year
Alternatives and similar repositories for CoMN
Users interested in CoMN are comparing it to the libraries listed below.
- Benchmark framework of compute-in-memory based accelerators for deep neural network (inference engine focused) ☆73 · Updated 6 months ago
- Collection of kernel accelerators optimised for LLM execution ☆21 · Updated 5 months ago
- C++ code for an HLS FPGA implementation of a transformer ☆18 · Updated last year
- [TVLSI'23] This repository contains the source code for the paper "FireFly: A High-Throughput Hardware Accelerator for Spiking Neural Net…" ☆20 · Updated last year
- Code of "Eva-CiM: A System-Level Performance and Energy Evaluation Framework for Computing-in-Memory Architectures", TCAD 2020 ☆11 · Updated 4 years ago
- AttentionLego ☆12 · Updated last year
- Accelerate a multi-head attention transformer model using HLS for FPGA ☆12 · Updated last year
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated last year
- Open-source release of the MSD framework ☆16 · Updated 2 years ago
- Benchmark framework of compute-in-memory based accelerators for deep neural network (on-chip training chip focused) ☆52 · Updated 4 years ago
- Model LLM inference on single-core dataflow accelerators ☆14 · Updated last month
- BitFusion Verilog implementation ☆12 · Updated 3 years ago
- A Computing In Memory emULATOR framework ☆14 · Updated last year
- A Unified Framework for Training, Mapping and Simulation of ReRAM-Based Convolutional Neural Network Acceleration ☆35 · Updated 3 years ago
- A Behavior-Level Modeling Tool for Memristor-based Neuromorphic Computing Systems ☆175 · Updated 9 months ago
- FPGA implementation of an 8x8 weight-stationary systolic array DNN accelerator ☆12 · Updated 4 years ago
- The official implementation of the HPCA 2025 paper "Prosperity: Accelerating Spiking Neural Networks via Product Sparsity" ☆36 · Updated last month
- Benchmark framework of compute-in-memory based accelerators for deep neural network (on-chip training chip focused) ☆162 · Updated last year
- LoAS: Fully Temporal-Parallel Dataflow for Dual-Sparse Spiking Neural Networks, MICRO 2024. ☆12 · Updated 5 months ago
- A collection of research papers on SRAM-based compute-in-memory architectures. ☆29 · Updated last year
- The project includes an SRAM In Memory Computing Accelerator with updates to the design/circuits submitted previously in MPW7, by IITD researche… ☆13 · Updated 2 years ago
- ☆17 · Updated 4 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers