vra / flopth
A simple program to calculate and visualize the FLOPs and parameters of PyTorch models, with a handy CLI and an easy-to-use Python API.
☆129 · Updated 6 months ago
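flopth derives its counts from the model graph itself; as a hedged illustration of what a parameter/FLOPs count actually measures (not flopth's API — the function name and formulas below are my own, using the standard closed-form counts for a 2-D convolution), consider a single `Conv2d` layer:

```python
def conv2d_counts(c_in, c_out, k, h_out, w_out, bias=True):
    """Closed-form counts for one square-kernel Conv2d layer.

    params: one (c_in * k * k) filter per output channel, plus optional bias.
    macs:   each output pixel of each output channel needs c_in * k * k
            multiply-accumulates; FLOPs are conventionally 2 * MACs.
    """
    params = c_out * (c_in * k * k) + (c_out if bias else 0)
    macs = c_out * h_out * w_out * c_in * k * k
    return params, macs

# ResNet-style stem conv: 3 -> 64 channels, 7x7 kernel, 112x112 output
params, macs = conv2d_counts(3, 64, 7, 112, 112)
# params = 64 * 3 * 49 + 64 = 9_472
# macs   = 64 * 112 * 112 * 3 * 49 = 118_013_952  (~0.24 GFLOPs)
```

Tools like flopth and the profilers listed below automate exactly this bookkeeping across every layer of a model, typically via forward hooks.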
Alternatives and similar repositories for flopth
Users interested in flopth are comparing it to the libraries listed below.
- Estimate/count FLOPs for a given neural network using PyTorch (☆304 · updated 3 years ago)
- Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.) (☆218 · updated 2 months ago)
- Implementation of fused cosine similarity attention in the same style as Flash Attention (☆214 · updated 2 years ago)
- TF/Keras code for DiffStride, a pooling layer with learnable strides (☆124 · updated 3 years ago)
- Recent Advances in MLP-based Models (MLP is all you need!) (☆115 · updated 2 years ago)
- Compare neural networks by their feature similarity (☆361 · updated 2 years ago)
- Transformers w/o Attention, based fully on MLPs (☆93 · updated last year)
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) (☆225 · updated 3 years ago)
- Code release for "Dropout Reduces Underfitting" (☆313 · updated 2 years ago)
- Implementation of Linformer for PyTorch (☆286 · updated last year)
- Implementation of ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks, ICML 2021 (☆145 · updated 3 years ago)
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (☆379 · updated last year)
- A simple minimal implementation of Reversible Vision Transformers (☆125 · updated last year)
- AlphaNet: Improved Training of Supernet with Alpha-Divergence (☆98 · updated 3 years ago)
- A general and accurate MACs/FLOPs profiler for PyTorch models (☆613 · updated last year)
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) optimizer in PyTorch (☆252 · updated 2 years ago)
- Deep Learning project template for PyTorch (multi-GPU training is supported) (☆138 · updated last year)
- [ICLR 2022] "Deep AutoAugment" by Yu Zheng, Zhi Zhang, Shen Yan, Mi Zhang (☆64 · updated 8 months ago)
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) (☆485 · updated 4 years ago)
- Demystify RAM Usage in Multi-Process Data Loaders (☆194 · updated 2 years ago)
- ☆50 · updated 2 years ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… (☆233 · updated 4 months ago)
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning (☆207 · updated last year)
- A research library for PyTorch-based neural network pruning, compression, and more (☆162 · updated 2 years ago)
- A compilation of network architectures for vision and others without usage of the self-attention mechanism (☆80 · updated 2 years ago)
- Recent Advances on Efficient Vision Transformers (☆51 · updated 2 years ago)
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… (☆184 · updated 3 weeks ago)
- (ICML 2022) Official PyTorch implementation of "Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Rob… (☆78 · updated 2 years ago)
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision (☆217 · updated 4 years ago)
- Check gradient flow in PyTorch (☆87 · updated 6 years ago)
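Several entries above (Linformer, the memory-efficient attention implementation, the attention-free and MLP-only architectures) exist to sidestep the quadratic cost of standard self-attention. As a hedged back-of-the-envelope sketch of why that matters (the function name is my own; the formula is just the size of the materialized score matrix), doubling the sequence length quadruples the memory for attention scores:

```python
def attn_score_memory_bytes(seq_len, n_heads, dtype_bytes=4):
    """Memory for the (n_heads, seq_len, seq_len) attention score
    matrix that standard self-attention materializes per example,
    assuming dtype_bytes per element (4 for float32)."""
    return n_heads * seq_len * seq_len * dtype_bytes

# 8 heads, float32:
attn_score_memory_bytes(1024, 8)  # 8 * 1024^2 * 4 = 33_554_432 bytes (32 MiB)
attn_score_memory_bytes(2048, 8)  # 134_217_728 bytes (128 MiB): 2x length, 4x memory
```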