BidyutSaha / TinyTNAS
TinyTNAS is a hardware-aware, multi-objective, time-bound Neural Architecture Search (NAS) tool designed for TinyML time series classification. Unlike GPU-based NAS methods, it runs efficiently on CPUs.
☆18 · Updated 10 months ago
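TinyTNAS's actual API is not shown on this page, so the sketch below is only a hypothetical illustration of what its description implies: a CPU-only search loop that samples candidate architectures, scores them on multiple objectives (accuracy vs. a hardware-cost proxy), keeps the Pareto-optimal set, and stops at a wall-clock deadline. Every name in it (`SEARCH_SPACE`, `estimate_cost`, and so on) is an assumption, not the repository's API.

```python
# Hypothetical sketch of a time-bound, multi-objective NAS loop in the
# spirit of TinyTNAS's description; no names here come from the actual repo.
import random
import time

SEARCH_SPACE = {
    "num_blocks": [1, 2, 3],
    "filters": [8, 16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def sample_architecture():
    """Randomly pick one value per dimension of the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def estimate_cost(arch):
    """Crude hardware-cost proxy: parameter count of a 1-D conv stack."""
    params, in_ch = 0, 1
    for _ in range(arch["num_blocks"]):
        params += in_ch * arch["filters"] * arch["kernel_size"]
        in_ch = arch["filters"]
    return params

def evaluate_accuracy(arch):
    """Placeholder: a real tool would short-train or use a proxy metric."""
    return random.random()

def dominates(b, a):
    """b Pareto-dominates a: no worse on both objectives, better on one."""
    return (b["acc"] >= a["acc"] and b["cost"] <= a["cost"]
            and (b["acc"] > a["acc"] or b["cost"] < a["cost"]))

def search(time_budget_s=60):
    """Sample architectures until the wall-clock budget expires, keeping
    only candidates not dominated on (accuracy, cost)."""
    pareto = []
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        arch = sample_architecture()
        cand = {"arch": arch, "acc": evaluate_accuracy(arch),
                "cost": estimate_cost(arch)}
        pareto.append(cand)
        pareto = [c for c in pareto
                  if not any(dominates(o, c) for o in pareto)]
    return pareto

if __name__ == "__main__":
    for c in search(time_budget_s=5):
        print(c)
```

The time bound is enforced by the deadline check rather than by a fixed trial count, which is what makes the search budget predictable on a CPU.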
Alternatives and similar repositories for TinyTNAS
Users interested in TinyTNAS are comparing it to the repositories listed below.
- The official implementation of "NAS-BNN: Neural Architecture Search for Binary Neural Networks" ☆12 · Updated last year
- ☆68 · Updated last year
- [CVPR 2024] Official implementation of A&B BNN: Add&Bit-Operation-Only Hardware-Friendly Binary Neural Network ☆24 · Updated 10 months ago
- ☆42 · Updated 7 months ago
- ☆75 · Updated 8 months ago
- C++ and CUDA ops for fused FourierKAN ☆81 · Updated last year
- A comprehensive paper list of Transformer & Attention for Vision Recognition / Foundation Model, including papers, codes, and related web… ☆20 · Updated 2 years ago
- Differentiable Weightless Neural Networks ☆25 · Updated 7 months ago
- [ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning ☆20 · Updated 3 years ago
- PyTorch implementation of Spiking Neural Networks for Human Activity Recognition ☆21 · Updated 2 years ago
- [CVPR'25 Highlight] The official implementation of "GG-SSMs: Graph-Generating State Space Models" ☆28 · Updated 4 months ago
- ☆16 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆36 · Updated last year
- An experimental PyTorch implementation exploring the NoProp algorithm, presented in the paper "NOPROP: TRAINING … ☆15 · Updated 4 months ago
- PyTorch Lightning implementation of the paper Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and H… ☆32 · Updated 10 months ago
- Official PyTorch implementation of LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification ☆47 · Updated 3 years ago
- PyTorch implementation of Spiking Transformer with Spatial-Temporal Attention (CVPR 2025) ☆52 · Updated 4 months ago
- ☆19 · Updated last year
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP) ☆28 · Updated 3 months ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated last year
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆75 · Updated last year
- BinaryViT: Pushing Binary Vision Transformers Towards Convolutional Models ☆37 · Updated last year
- State Space Models ☆70 · Updated last year
- Kolmogorov-Arnold Networks (KAN) using Jacobi polynomials instead of B-splines ☆40 · Updated last year
- A Triton kernel for incorporating bi-directionality in Mamba2 ☆75 · Updated 10 months ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Updated 3 years ago
- ☆10 · Updated last year
- The official GitHub page for the survey paper "A Survey of RWKV" ☆29 · Updated 9 months ago
- A Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation (see the sketch after this list) ☆29 · Updated last year
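Several entries above are KAN variants (FourierKAN, JacobiKAN, a KAN-based Transformer, and the RBF-KAN in the last entry). As a rough illustration of the RBF-based KAN idea only, and not any of those repositories' code, here is a minimal PyTorch layer: each scalar input is expanded over Gaussian radial basis functions on a fixed grid, and a learnable linear map over the basis responses produces the outputs, replacing the MLP's fixed activation with learnable univariate functions. All names and hyperparameters are assumptions for the sketch.

```python
# Minimal sketch of an RBF-based KAN layer; illustrative only, not the
# code of the repository listed above.
import torch
import torch.nn as nn

class RBFKANLayer(nn.Module):
    """Expand each input over Gaussian RBFs centered on a fixed grid, then
    linearly combine all (in_features * num_centers) basis responses."""

    def __init__(self, in_features, out_features, num_centers=8,
                 grid_min=-2.0, grid_max=2.0):
        super().__init__()
        self.register_buffer(
            "centers", torch.linspace(grid_min, grid_max, num_centers))
        self.gamma = (num_centers / (grid_max - grid_min)) ** 2  # RBF width
        self.coeffs = nn.Linear(in_features * num_centers, out_features,
                                bias=False)

    def forward(self, x):
        # x: (batch, in_features) -> responses: (batch, in_features, centers)
        phi = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.centers) ** 2)
        return self.coeffs(phi.flatten(start_dim=1))

# Usage: approximate a 1-D function with a tiny two-layer RBF-KAN.
model = nn.Sequential(RBFKANLayer(1, 16), RBFKANLayer(16, 1))
x = torch.linspace(-2, 2, 256).unsqueeze(1)
y = torch.sin(3 * x)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```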