omry / hydra-article-code
☆18 · Updated 5 years ago
Alternatives and similar repositories for hydra-article-code
Users interested in hydra-article-code are comparing it to the repositories listed below.
- Repository for Multimodal AutoML Benchmark ☆66 · Updated 4 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Loss and accuracy go opposite ways...right? ☆95 · Updated 5 years ago
- PiRank: Learning to Rank via Differentiable Sorting ☆61 · Updated 2 years ago
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. ☆244 · Updated 3 years ago
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆132 · Updated 4 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆153 · Updated 2 years ago
- A PyTorch & Keras implementation and demo of Fastformer. ☆191 · Updated 3 years ago
- Feature Interaction Interpretability via Interaction Detection ☆35 · Updated 2 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 5 years ago
- [ICML 2020] code for the flooding regularizer proposed in "Do We Need Zero Training Loss After Achieving Zero Training Error?" ☆95 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- Official cleanlab repo is at https://github.com/cleanlab/cleanlab ☆58 · Updated 2 years ago
- Tutorials about AutoML ☆87 · Updated 3 years ago
- Learning to Rank in PyTorch ☆91 · Updated 2 years ago
- PyTorch implementation of the Lookahead Optimizer ☆195 · Updated 3 years ago
- Code for the PAPA paper ☆27 · Updated 3 years ago
- ☆84 · Updated 4 years ago
- a lightweight transformer library for PyTorch ☆72 · Updated 4 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆70 · Updated 4 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆66 · Updated 4 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆170 · Updated 4 years ago
- Implementation of RealFormer using PyTorch ☆101 · Updated 5 years ago
- Some improvements on Adam ☆28 · Updated 5 years ago
- 🎲 Iterable dataset resampling in PyTorch ☆91 · Updated 4 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆111 · Updated 4 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 6 years ago
- Code for the anonymous submission "Cockpit: A Practical Debugging Tool for Training Deep Neural Networks" ☆31 · Updated 5 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆71 · Updated 5 years ago
- Differentiable Product Quantization for End-to-End Embedding Compression. ☆64 · Updated 3 years ago