juntang-zhuang / ACProp-Optimizer
Implementation for ACProp (Momentum Centering and Asynchronous Update for Adaptive Gradient Methods, NeurIPS 2021)
☆15 · Updated 3 years ago
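The two ideas named in the paper title can be sketched as follows: the second-moment estimate is centered, tracking the gradient's deviation from its own EMA rather than the raw squared gradient, and the parameter step uses the second-moment estimate from the previous iteration (the asynchronous update). This is a minimal scalar sketch based on that description, not the repository's implementation; the hyperparameter defaults and the positive initialization of `s` are illustrative assumptions.

```python
import math

def acprop_step(theta, grad, m, s, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One scalar ACProp-style step (illustrative sketch, not the repo's code)."""
    # Asynchronous update: the step is scaled by the second-moment
    # estimate s computed *before* the current gradient is folded in.
    theta = theta - lr * grad / (math.sqrt(s) + eps)
    # Momentum centering: s tracks (g - m)^2, the gradient's deviation
    # from its exponential moving average, instead of g^2.
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2
    return theta, m, s

# Toy usage: minimize f(x) = x^2 starting from x = 5.
theta, m, s = 5.0, 0.0, 1.0  # s > 0 at init is an illustrative choice
for _ in range(500):
    theta, m, s = acprop_step(theta, 2 * theta, m, s, lr=0.1)
```

Note the ordering: in a synchronous scheme `s` would be updated with the current gradient before the step; here the step comes first, which is what decorrelates the gradient from its own preconditioner.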
Alternatives and similar repositories for ACProp-Optimizer:
Users interested in ACProp-Optimizer are comparing it to the repositories listed below.
- Self-Distillation with weighted ground-truth targets; ResNet and Kernel Ridge Regression ☆17 · Updated 3 years ago
- Code base for SRSGD. ☆28 · Updated 4 years ago
- ☆15 · Updated last year
- We investigated corruption robustness across different architectures including Convolutional Neural Networks, Vision Transformers, and th… ☆15 · Updated 3 years ago
- Implementation of Kronecker Attention in Pytorch ☆18 · Updated 4 years ago
- An adaptive training algorithm for residual network ☆15 · Updated 4 years ago
- ☆25 · Updated 4 years ago
- ☆15 · Updated last year
- ☆15 · Updated last year
- ☆19 · Updated 3 years ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated last year
- ☆17 · Updated 2 years ago
- Reproduces experiments from "Grounding inductive biases in natural images: invariance stems from variations in data" ☆17 · Updated 3 months ago
- Code repo for the paper "AIO-P: Expanding Neural Performance Predictors Beyond Image Classification", accepted to AAAI-23 ☆10 · Updated 7 months ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆29 · Updated last year
- ☆29 · Updated 2 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆35 · Updated 3 years ago
- Implementation for NATv2 ☆23 · Updated 3 years ago
- A simple implementation of a deep linear Pytorch module ☆19 · Updated 4 years ago
- Official Pytorch implementation of the paper: "Locally Shifted Attention With Early Global Integration" ☆15 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- Tensorflow 2.x implementation of Gradient Origin Networks ☆12 · Updated 4 years ago
- ☆40 · Updated last year
- ☆11 · Updated 10 months ago
- ☆13 · Updated 2 years ago
- Shows how to do parameter ensembling using differential evolution ☆10 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- Directed masked autoencoders ☆14 · Updated last year