CerebrasResearch / RevBiFPN
RevBiFPN: The Fully Reversible Bidirectional Feature Pyramid Network
☆15 · Updated 3 years ago
Alternatives and similar repositories for RevBiFPN
Users interested in RevBiFPN are comparing it to the libraries listed below:
- Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.) ☆224 · Updated 10 months ago
- A research library for PyTorch-based neural network pruning, compression, and more. ☆162 · Updated 3 years ago
- PyTorch implementation of LARS (Layer-wise Adaptive Rate Scaling) (a minimal sketch of the update rule appears after this list) ☆19 · Updated 6 years ago
- Transformers w/o Attention, based fully on MLPs ☆97 · Updated last year
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆73 · Updated 3 years ago
- An implementation of run-length encoding for PyTorch tensors using CUDA (a minimal sketch of the idea appears after this list) ☆14 · Updated 4 years ago
- Batch Renormalization in PyTorch ☆45 · Updated 2 years ago
- Estimate/count FLOPs for a given neural network using PyTorch ☆305 · Updated 3 years ago
- Simple CIFAR-10 classification with ConvMixer ☆45 · Updated 4 years ago
- ☆43 · Updated 2 years ago
- Easily benchmark PyTorch model FLOPs, latency, throughput, allocated GPU memory, and energy consumption ☆109 · Updated 2 years ago
- Binarize convolutional neural networks using PyTorch ☆149 · Updated 3 years ago
- Using ideas from product quantization for state-of-the-art neural network compression. ☆145 · Updated 4 years ago
- [Preprint] ConvMLP: Hierarchical Convolutional MLPs for Vision, 2021 ☆167 · Updated 3 years ago
- A crash course on PyTorch hooks ☆40 · Updated 5 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆73 · Updated 3 years ago
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers… ☆284 · Updated 2 years ago
- Nested Hierarchical Transformer https://arxiv.org/pdf/2105.12723.pdf ☆201 · Updated last year
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning. ☆212 · Updated 2 years ago
- Official implementation for "SimA: Simple Softmax-free Attention for Vision Transformers" ☆45 · Updated last year
- Collections of model quantization algorithms. Any issues, please contact Peng Chen (blueardour@gmail.com) ☆73 · Updated 4 years ago
- [ECCV 2022] EdgeViT: Competing Light-weight CNNs on Mobile Devices with Vision Transformers ☆115 · Updated 2 years ago
- Pruning is all you need (hopefully) ☆12 · Updated 3 years ago
- Recent Advances on Efficient Vision Transformers ☆55 · Updated 3 years ago
- Official PyTorch implementation of the paper: "Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results" (2022) ☆193 · Updated 3 years ago
- Pre-trained NFNets with 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization". ☆158 · Updated 4 years ago
- Neural Architecture Search for Neural Network Libraries ☆60 · Updated 2 years ago
- A better PyTorch implementation of image local attention which reduces the GPU memory by an order of magnitude. ☆142 · Updated 4 years ago
- ☆58 · Updated 3 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆220 · Updated 2 years ago
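
The LARS entry above names a well-defined update rule, so a minimal sketch may help. This is not the linked repository's code: the function name `lars_step`, the plain-SGD formulation without momentum, and the trust ratio with weight decay folded into the gradient are illustrative assumptions.

```python
# Minimal LARS sketch: scale each layer's step so its size tracks that layer's
# weight norm. Illustrative only; real implementations add momentum and usually
# exclude biases and normalization layers from the adaptive scaling.
import torch

def lars_step(params, lr=0.1, trust_coef=1e-3, weight_decay=1e-4):
    """Apply one layer-wise adaptive SGD update in place."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            g = p.grad + weight_decay * p          # gradient with L2 term folded in
            w_norm = torch.norm(p)
            g_norm = torch.norm(g)
            # Layer-wise trust ratio; fall back to 1.0 when either norm is zero.
            local_lr = trust_coef * w_norm / g_norm if w_norm > 0 and g_norm > 0 else 1.0
            p.sub_(g * (lr * local_lr))            # w <- w - lr * local_lr * g

# Usage (after loss.backward()): lars_step(model.parameters(), lr=0.1)
```

In practice this logic is wrapped in a `torch.optim.Optimizer` subclass so it composes with schedulers and gradient clipping; the sketch only shows the layer-wise scaling idea.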
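For the run-length-encoding entry above, the idea can be sketched with stock PyTorch ops; the linked repository implements it as a custom CUDA kernel, so the helper names `rle_encode` and `rle_decode` below are hypothetical and only illustrate the encode/decode round trip.

```python
# Run-length encoding of a 1-D tensor using built-in PyTorch ops.
# The linked repo uses a dedicated CUDA kernel; this sketch only shows the idea.
import torch

def rle_encode(x: torch.Tensor):
    """Return (values, run_lengths) describing consecutive runs in a 1-D tensor."""
    values, counts = torch.unique_consecutive(x, return_counts=True)
    return values, counts

def rle_decode(values: torch.Tensor, counts: torch.Tensor) -> torch.Tensor:
    """Invert rle_encode by repeating each value by its run length."""
    return torch.repeat_interleave(values, counts)

x = torch.tensor([0, 0, 0, 1, 1, 2, 0, 0])
vals, runs = rle_encode(x)                     # vals=[0,1,2,0], runs=[3,2,1,2]
assert torch.equal(rle_decode(vals, runs), x)  # round-trip check
```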