dcato98 / fastsparse
Fastai + PyTorch implementation of sparse model training methods (SET, SNFS, RigL) + customize-your-own.
☆10 · Updated 2 years ago
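The methods this repo implements (SET, SNFS, RigL) all maintain a binary mask over each weight tensor and periodically prune low-magnitude connections while regrowing new ones. As a minimal sketch of the idea, here is a SET-style prune-and-regrow mask update in plain PyTorch; the `set_prune_regrow` helper is hypothetical for illustration, not fastsparse's actual API:

```python
import torch

def set_prune_regrow(weight, mask, drop_fraction=0.3):
    """Return an updated binary mask with the same number of active weights.

    SET-style update: drop the smallest-magnitude active weights, then
    regrow the same number of connections at random inactive positions.
    """
    active = mask.bool()
    n_drop = int(drop_fraction * active.sum().item())
    if n_drop == 0:
        return mask
    # Prune: inactive positions get +inf magnitude so topk(largest=False)
    # only ever selects currently active weights.
    magnitudes = torch.where(active, weight.abs(),
                             torch.full_like(weight, float("inf")))
    drop_idx = magnitudes.flatten().topk(n_drop, largest=False).indices
    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    # Regrow: activate n_drop random currently-inactive positions.
    inactive_idx = (new_mask == 0).nonzero(as_tuple=True)[0]
    grow_idx = inactive_idx[torch.randperm(len(inactive_idx))[:n_drop]]
    new_mask[grow_idx] = 1.0
    return new_mask.view_as(mask)

# Usage: apply the mask after each optimizer step; update it every few epochs.
w = torch.randn(64, 64)
mask = (torch.rand(64, 64) < 0.1).float()  # ~90% sparse to start
mask = set_prune_regrow(w, mask)
w = w * mask
```

RigL differs mainly in the regrow step (it grows connections with the largest gradient magnitude rather than at random), but the mask bookkeeping above is the shared core.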
Alternatives and similar repositories for fastsparse
Users interested in fastsparse are comparing it to the repositories listed below.
- Team [Save the Prostate] - solution ☆18 · Updated 4 years ago
- (unofficial) customized fork of DETR, optimized for intelligent object detection on 'real world' custom datasets ☆12 · Updated 4 years ago
- A minimal TPU-compatible Jax implementation of NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis ☆13 · Updated 3 years ago
- A fastai-free ONNX implementation for fastai ☆12 · Updated 2 years ago
- An open source implementation of CLIP ☆32 · Updated 2 years ago
- ☆15 · Updated 3 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Updated 4 years ago
- Describe the format of image/text datasets ☆11 · Updated 3 years ago
- Reproduces experiments from "Grounding inductive biases in natural images: invariance stems from variations in data" ☆17 · Updated 8 months ago
- Unofficially implements https://arxiv.org/abs/2112.05682 to get linear memory cost on attention for PyTorch ☆12 · Updated 3 years ago
- Hosts the code to port NumPy model weights of BiT-ResNets to the TensorFlow SavedModel format ☆14 · Updated 3 years ago
- Three experiments for data-efficient video transformers ☆9 · Updated 3 years ago
- Load any CLIP model with a standardized interface ☆21 · Updated last year
- TensorFlow 2.x implementation of Gradient Origin Networks ☆12 · Updated 4 years ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch ☆36 · Updated 3 years ago
- Large dataset storage format for PyTorch ☆45 · Updated 3 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- Directed masked autoencoders ☆14 · Updated 2 years ago
- Implements EvoNorms B0 and S0 as proposed in "Evolving Normalization-Activation Layers" ☆11 · Updated 5 years ago
- Implementation of Kronecker Attention in PyTorch ☆19 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆50 · Updated 3 years ago
- Shows how to do parameter ensembling using differential evolution ☆10 · Updated 3 years ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k ☆22 · Updated 2 years ago
- ☆15 · Updated 3 years ago
- PyTorch Personal Trainer: my framework for deep learning experiments ☆10 · Updated 2 years ago
- ☆8 · Updated last year
- PyTorch implementation of FNet: Mixing Tokens with Fourier Transforms ☆27 · Updated 4 years ago
- Visual Clustering: clustering plotted data by image segmentation ☆25 · Updated 3 months ago
- Official PyTorch implementation of the paper "Locally Shifted Attention With Early Global Integration" ☆15 · Updated 3 years ago
- PyTorch reimplementation of the paper "HyperMixer: An MLP-based Green AI Alternative to Transformers" [arXiv 2022] ☆17 · Updated 3 years ago