ItzikMalkiel / MTAdam
MTAdam: Automatic Balancing of Multiple Training Loss Terms
☆36 · Updated 4 years ago
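The repository's title describes automatically balancing multiple training loss terms. A minimal sketch of the general idea — scaling each loss term's gradient by an exponential moving average of its magnitude so that no single term dominates — might look like the following. This is an illustration of the concept only, not the official MTAdam algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def balanced_gradient(grads, ema, beta=0.9, eps=1e-8):
    """Combine per-loss-term gradients after normalizing each by an EMA
    of its own magnitude, so every term contributes at a comparable scale.
    (Hypothetical sketch; not the official MTAdam update rule.)"""
    total = np.zeros_like(grads[0])
    for i, g in enumerate(grads):
        # Track a running estimate of this term's gradient norm.
        ema[i] = beta * ema[i] + (1 - beta) * np.linalg.norm(g)
        # Scale the gradient to roughly unit EMA magnitude before summing.
        total += g / (ema[i] + eps)
    return total

# Example: one loss term with large gradients, one with tiny gradients.
g1 = np.array([10.0, 0.0])
g2 = np.array([0.0, 0.01])
ema = [0.0, 0.0]
step = balanced_gradient([g1, g2], ema)
# After normalization both terms contribute at a similar scale.
```

The point of the sketch is that without such balancing, the combined gradient would be dominated by `g1`, whose raw norm is 1000× larger than `g2`'s.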
Alternatives and similar repositories for MTAdam:
Users interested in MTAdam are comparing it to the repositories listed below.
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- ☆34 · Updated 6 years ago
- ☆47 · Updated 4 years ago
- ☆24 · Updated 11 months ago
- (Batched) advanced indexing for PyTorch. ☆53 · Updated 3 months ago
- ☆24 · Updated 3 years ago
- ☆32 · Updated 5 years ago
- Code for "MIM: Mutual Information Machine" paper. ☆16 · Updated 2 years ago
- Memory-efficient MAML using gradient checkpointing ☆84 · Updated 5 years ago
- ☆36 · Updated 4 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 4 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆57 · Updated 4 years ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation ☆69 · Updated 4 years ago
- ☆63 · Updated 2 years ago
- "Learning Discrete and Continuous Factors of Data via Alternating Disentanglement", accepted at ICML 2019 ☆21 · Updated 5 years ago
- Custom CUDA kernel for {2, 3}D relative attention with a PyTorch wrapper ☆43 · Updated 4 years ago
- PyTorch implementations of dropout variants ☆87 · Updated 7 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆119 · Updated 3 years ago
- ☆27 · Updated 4 years ago
- Official code repository of the paper "Learning Associative Inference Using Fast Weight Memory" by Schlag et al. ☆28 · Updated 4 years ago
- ☆25 · Updated 4 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆50 · Updated 2 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers". ☆104 · Updated 3 years ago
- Re-implementation of the Noise Contrastive Estimation algorithm for PyTorch, following "Noise-contrastive estimation: A new estimation pr… ☆45 · Updated 5 years ago
- [EMNLP'19] Summary for Transformer understanding ☆53 · Updated 5 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago
- An implementation of MixMatch with PyTorch ☆36 · Updated 4 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated 8 months ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆40 · Updated 4 years ago