BidyutSaha / TinyTNAS
TinyTNAS is a hardware-aware, multi-objective, time-bound Neural Architecture Search (NAS) tool designed for TinyML time series classification. Unlike GPU-based NAS methods, it runs efficiently on CPUs.
☆19 · Updated last year
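For readers unfamiliar with the terms in the description above, the sketch below illustrates what a "time-bound, multi-objective, hardware-aware" search loop involves: sample candidate architectures until a wall-clock budget expires, reject candidates that violate a hardware constraint, and keep the best accuracy/size trade-off. This is an invented illustration, not TinyTNAS's actual API; every name, the toy search space, and the placeholder evaluation are hypothetical.

```python
# Minimal sketch of a time-bound, multi-objective NAS loop.
# NOT TinyTNAS's real interface; all names here are hypothetical.
import random
import time

def sample_architecture(rng: random.Random) -> dict:
    """Draw a random candidate from a toy search space for a
    1D-CNN time-series classifier (filters, kernel size, depth)."""
    return {
        "filters": rng.choice([8, 16, 32]),
        "kernel": rng.choice([3, 5, 7]),
        "depth": rng.choice([1, 2, 3]),
    }

def evaluate(arch: dict, rng: random.Random) -> tuple[float, int]:
    """Placeholder evaluation: a real tool would train briefly and
    measure validation accuracy plus RAM/flash footprint on the
    target MCU. Here accuracy is random and size is a crude proxy."""
    accuracy = rng.random()
    params = arch["filters"] * arch["kernel"] * arch["depth"] * 100
    return accuracy, params

def search(time_budget_s: float = 60.0, max_params: int = 20_000, seed: int = 0):
    rng = random.Random(seed)
    deadline = time.monotonic() + time_budget_s
    best = None
    while time.monotonic() < deadline:      # hard wall-clock bound
        arch = sample_architecture(rng)
        acc, params = evaluate(arch, rng)
        if params > max_params:             # hardware-aware constraint
            continue
        # Multi-objective selection: higher accuracy first, smaller
        # model as the tiebreaker (lexicographic tuple comparison).
        if best is None or (acc, -params) > (best[0], -best[1]):
            best = (acc, params, arch)
    return best

if __name__ == "__main__":
    print(search(time_budget_s=2.0))
```

The wall-clock deadline is what distinguishes a time-bound search from an iteration-count-based one: whenever the budget expires, the loop returns its best constraint-satisfying candidate found so far.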
Alternatives and similar repositories for TinyTNAS
Users interested in TinyTNAS are comparing it to the libraries listed below.
- C++ and CUDA ops for fused FourierKAN ☆82 · Updated last year
- The official implementation of "NAS-BNN: Neural Architecture Search for Binary Neural Networks" ☆13 · Updated last year
- ☆79 · Updated 11 months ago
- ☆69 · Updated last year
- ☆46 · Updated last year
- ☆46 · Updated 10 months ago
- Differentiable Weightless Neural Networks ☆31 · Updated 10 months ago
- PyTorch implementation of Spiking Neural Networks for Human Activity Recognition. ☆21 · Updated 3 years ago
- A comprehensive paper list of Transformer & Attention for Vision Recognition / Foundation Model, including papers, codes, and related web… ☆20 · Updated 2 years ago
- This code implements a Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation. ☆29 · Updated last year
- [CVPR'25 Highlight] The official implementation of "GG-SSMs: Graph-Generating State Space Models" ☆31 · Updated 7 months ago
- Transformers w/o Attention, based fully on MLPs ☆97 · Updated last year
- Kolmogorov-Arnold Networks (KAN) using Jacobi polynomials instead of B-splines. ☆42 · Updated last year
- A More Fair and Comprehensive Comparison between KAN and MLP ☆178 · Updated last year
- [CVPR 2024] Official implementation for "A&B BNN: Add&Bit-Operation-Only Hardware-Friendly Binary Neural Network" ☆24 · Updated last month
- ☆16 · Updated 2 years ago
- ☆10 · Updated last year
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP) ☆29 · Updated last week
- State Space Models ☆72 · Updated last year
- This repository contains an experimental PyTorch implementation exploring the NoProp algorithm, presented in the paper "NOPROP: TRAINING … ☆16 · Updated last week
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- Benchmarking and Testing FastKAN ☆91 · Updated last year
- "Graph Convolutions Enrich the Self-Attention in Transformers!" (NeurIPS 2024) ☆27 · Updated 10 months ago
- Combine B-Splines (BS) and Radial Basis Functions (RBF) in Kolmogorov-Arnold Networks (KANs) ☆28 · Updated 2 months ago
- PyTorch implementation of Spiking Transformer with Spatial-Temporal Attention (CVPR 2025) ☆67 · Updated 7 months ago
- ☆98 · Updated last year
- PyTorch implementation for Hypformer: Exploring Efficient Hyperbolic Transformer Fully in Hyperbolic Space (KDD 2024) ☆35 · Updated 5 months ago
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆80 · Updated last year
- Official PyTorch implementation of "The Linear Attention Resurrection in Vision Transformer" ☆15 · Updated last year
- ☆43 · Updated last year