clovaai / group-transformer
Official code for Group-Transformer (Scale down Transformer by Grouping Features for a Lightweight Character-level Language Model, COLING-2020).
☆27 · Updated 5 years ago
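To give a rough sense of the feature-grouping idea the repository description refers to, here is a minimal PyTorch sketch of a grouped feed-forward layer: the model dimension is split into independent groups, each with its own small projection, which cuts the parameter count by the number of groups. This is an illustrative example only, not the repository's actual implementation; the class and default sizes are hypothetical.

```python
import torch
import torch.nn as nn


class GroupedFeedForward(nn.Module):
    """Illustrative feed-forward block whose projections are split into feature groups."""

    def __init__(self, d_model: int = 256, d_ff: int = 1024, groups: int = 4):
        super().__init__()
        assert d_model % groups == 0 and d_ff % groups == 0
        self.groups = groups
        # One small projection per feature group instead of a single large one:
        # parameters drop from d_model * d_ff to d_model * d_ff / groups (ignoring biases).
        self.up = nn.ModuleList(
            nn.Linear(d_model // groups, d_ff // groups) for _ in range(groups)
        )
        self.down = nn.ModuleList(
            nn.Linear(d_ff // groups, d_model // groups) for _ in range(groups)
        )
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); split the feature dimension into groups.
        chunks = x.chunk(self.groups, dim=-1)
        out = [down(self.act(up(c))) for c, up, down in zip(chunks, self.up, self.down)]
        return torch.cat(out, dim=-1)


if __name__ == "__main__":
    ffn = GroupedFeedForward()
    print(ffn(torch.randn(2, 16, 256)).shape)  # torch.Size([2, 16, 256])
```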
Alternatives and similar repositories for group-transformer
Users interested in group-transformer are comparing it to the repositories listed below.
- Implementation of RealFormer using PyTorch ☆101 · Updated 5 years ago
- Learning Features with Parameter-Free Layers, ICLR 2022 ☆84 · Updated 2 years ago
- PyTorch / PyTorch Lightning framework for trying knowledge distillation in image classification problems ☆32 · Updated last year
- Automatic Mixed Precision tutorials using PyTorch. Based on PyTorch 1.6 official features, implement classification codebase using custo… ☆90 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019) ☆49 · Updated 6 years ago
- An implementation of drophead regularization for PyTorch transformers ☆19 · Updated 4 years ago
- Implementing "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" using PyTorch ☆71 · Updated 5 years ago
- Implementation of Online Label Smoothing in PyTorch ☆95 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆120 · Updated 4 years ago
- ☆46 · Updated 4 years ago
- Custom PyTorch implementation of MoCo v3 ☆46 · Updated 4 years ago
- Code for recent knowledge distillation algorithms and benchmark results via the TF2.0 low-level API ☆112 · Updated 3 years ago
- Official MXNet implementation of "Embedding Expansion: Augmentation in Embedding Space for Deep Metric Learning" (CVPR 2020) ☆79 · Updated 3 years ago
- Simple project base template for PyTorch deep learning projects. Features a clean implementation of DDP training and Hydra config ☆62 · Updated last year
- PyTorch distributed training comparison ☆15 · Updated 4 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 · Updated 4 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- ☆246 · Updated 4 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 3 years ago
- A PyTorch implementation of the Transformer in "Attention is All You Need" ☆106 · Updated 5 years ago
- ☆57 · Updated 4 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 3 years ago
- Running LayoutLMv2 ☆11 · Updated 3 years ago
- Official implementation of You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu… ☆48 · Updated 4 years ago
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆228 · Updated 3 years ago
- Official implementation of "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI 2021) ☆118 · Updated 4 years ago
- My 6th 🥇 place solution for the Kaggle Shopee competition ☆27 · Updated 3 years ago
- PyTorch implementation of "Pay Attention to MLPs" ☆40 · Updated 4 years ago