rjbruin / flexconv
Code repository for the ICLR 2022 paper "FlexConv: Continuous Kernel Convolutions With Differentiable Kernel Sizes" https://openreview.net/forum?id=3jooF27-0Wy
☆115 · Updated 2 years ago
Alternatives and similar repositories for flexconv
Users interested in flexconv are comparing it to the libraries listed below.
Sorting:
- TF/Keras code for DiffStride, a pooling layer with learnable strides. ☆124 · Updated 3 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data" published at ICLR 2022. https://arxiv.org/abs/21… ☆121 · Updated 2 years ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆184 · Updated 3 weeks ago
- Drop-in replacement for any ResNet with a significantly reduced memory footprint and better representation capabilities ☆209 · Updated last year
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆135 · Updated 2 months ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- A project to improve out-of-distribution detection (open set recognition) and uncertainty estimation by changing a few lines of code in y… ☆45 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- A better PyTorch implementation of image local attention which reduces GPU memory usage by an order of magnitude. ☆141 · Updated 3 years ago
- A simple, minimal implementation of Reversible Vision Transformers ☆125 · Updated last year
- Implementation of LogAvgExp for PyTorch ☆36 · Updated last month
- Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding ☆49 · Updated 8 months ago
- ☆41 · Updated 4 years ago
- Implementation of Hourglass Transformer, in PyTorch, from Google and OpenAI ☆90 · Updated 3 years ago
- ☆163 · Updated 2 years ago
- Recent Advances in MLP-based Models (MLP is all you need!) ☆115 · Updated 2 years ago
- A compilation of network architectures for vision and other tasks that do not use self-attention mechanisms ☆80 · Updated 2 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention