takashiishida / flooding
[ICML 2020] code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?"
☆91 · Updated 2 years ago
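The flooding regularizer is a one-line change to the training objective: with flood level b, the flooded loss is |J(θ) − b| + b, so the optimizer descends while the training loss is above b and ascends back once it drops below. A minimal PyTorch sketch of that formula (the function name and the flood_level value are illustrative, not this repository's exact API):

```python
import torch.nn.functional as F

def flooding_loss(logits, targets, flood_level=0.02):
    # Flooding (Ishida et al., ICML 2020): J_tilde = |J - b| + b.
    # While J > b this is ordinary minimization; once J < b the sign
    # flips and gradient descent ascends back toward the flood level.
    loss = F.cross_entropy(logits, targets)
    return (loss - flood_level).abs() + flood_level
```

Adding b back after the absolute value keeps the reported loss comparable to the unflooded one; only the gradient direction below the flood level changes.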
Alternatives and similar repositories for flooding
Users interested in flooding are comparing it to the libraries listed below
- Official implementation of Auxiliary Learning by Implicit Differentiation [ICLR 2021] ☆84 · Updated 9 months ago
- Masked Convolutional Flow ☆59 · Updated 5 years ago
- PyTorch implementation of the Hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?" ☆98 · Updated 4 years ago
- PyTorch Implementations of Dropout Variants ☆87 · Updated 7 years ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation ☆69 · Updated 4 years ago
- Implementation of the reversible residual network in PyTorch ☆104 · Updated 3 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 3 years ago
- Implements optimal transport algorithms in PyTorch ☆97 · Updated 3 years ago
- A PyTorch implementation of the optimal transport kernel embedding ☆116 · Updated 4 years ago
- Gradients as Features for Deep Representation Learning ☆43 · Updated 5 years ago
- Full implementation of the paper "Rethinking Softmax with Cross-Entropy: Neural Network Classifier as Mutual Information Estimator" ☆101 · Updated 5 years ago
- ICML 2019: Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels ☆91 · Updated 4 years ago
- Official PyTorch implementation of the paper "Self-Supervised Relational Reasoning for Representation Learning", NeurIPS 2020 Spotlight ☆142 · Updated last year
- PyTorch implementation of VAE-Gumbel-Softmax ☆63 · Updated 4 years ago
- Code for the paper "On Symmetric Losses for Learning from Corrupted Labels" ☆19 · Updated 6 years ago
- NeurIPS 2020, Debiased Contrastive Learning ☆282 · Updated 2 years ago
- A simple-to-use PyTorch wrapper for contrastive self-supervised learning on any neural network ☆135 · Updated 4 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆41 · Updated 4 years ago
- Loss and accuracy go opposite ways...right? ☆93 · Updated 5 years ago
- ☆81 · Updated 9 months ago
- Sliced Wasserstein Distance (SWD) in PyTorch (see the sketch after this list) ☆111 · Updated 5 years ago
- [NeurIPS '21] "Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly", Tianlong Chen, Yu Cheng, Zhe … ☆85 · Updated 3 years ago
- [NeurIPS '18] "Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?" Official Implementation ☆129 · Updated 3 years ago
- ☆68 · Updated 6 years ago
- Unsupervised Data Augmentation experiments in PyTorch ☆59 · Updated 5 years ago
- Hybrid Discriminative-Generative Training via Contrastive Learning ☆75 · Updated 2 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification f…" ☆46 · Updated 5 years ago
- [ICLR 2019] ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees ☆32 · Updated 5 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- Hyperspherical Prototype Networks ☆66 · Updated 5 years ago
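For the SWD entry above, the standard estimator projects both point clouds onto random 1-D directions and sorts the projections, since optimal transport in 1-D reduces to sorting. A minimal PyTorch sketch of that idea (the function name and projection count are illustrative, not the linked repository's API):

```python
import torch

def sliced_wasserstein(x, y, n_projections=128):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two point clouds x, y of shape (n, d) with the same number of points.
    Illustrative sketch, not the linked repository's exact implementation."""
    d = x.size(1)
    # Random directions on the unit sphere.
    theta = torch.randn(n_projections, d, device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # Project to 1-D and sort: in 1-D, optimal transport pairs sorted points.
    x_proj, _ = torch.sort(x @ theta.t(), dim=0)
    y_proj, _ = torch.sort(y @ theta.t(), dim=0)
    # Mean squared distance between sorted projections, averaged over
    # directions, then square-rooted to get a W2-style distance.
    return ((x_proj - y_proj) ** 2).mean().sqrt()
```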