JunwenBai / DAPC
Deep Autoencoding Predictive Components
☆10 · Updated 4 years ago
Alternatives and similar repositories for DAPC
Users interested in DAPC are comparing it to the libraries listed below.
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆41 · Updated 4 years ago
- Repository for Multimodal AutoML Benchmark ☆65 · Updated 3 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 5 years ago
- Pytorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 3 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 4 years ago
- ☆24 · Updated 6 months ago
- ☆47 · Updated 4 years ago
- Tensorflow implementation of "Meta Dropout: Learning to Perturb Latent Features for Generalization" (ICLR 2020) ☆27 · Updated 5 years ago
- On Calibration of Modern Neural Networks - tensorflow implementation ☆30 · Updated 7 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Github for the conference paper GLOD-Gaussian Likelihood OOD detector ☆16 · Updated 3 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆151 · Updated 2 years ago
- High performance pytorch modules ☆18 · Updated 2 years ago
- A PyTorch implementation of the paper - "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- ☆12 · Updated 6 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆70 · Updated 5 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 5 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆65 · Updated 2 years ago
- A PyTorch implementation of our proposed loss function from the paper "SimLoss: Class Similarities in Cross Entropy" ☆25 · Updated 4 years ago
- Code for the paper PermuteFormer ☆42 · Updated 4 years ago
- ☆32 · Updated 3 years ago
- [ICML 2020] code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" (see the sketch after this list) ☆95 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Layerwise Batch Entropy Regularization ☆24 · Updated 3 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago