microsoft / GEM
☆25 · Updated 4 years ago
Alternatives and similar repositories for GEM
Users interested in GEM are comparing it to the libraries listed below.
- Implementation of Multistream Transformers in PyTorch ☆54 · Updated 4 years ago
- A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering ☆43 · Updated 5 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆49 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention), from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch ☆36 · Updated 4 years ago
- High-performance PyTorch modules ☆18 · Updated 3 years ago
- Implementation / replication of DALL-E, OpenAI's Text-to-Image Transformer, in PyTorch ☆59 · Updated 5 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆59 · Updated 4 years ago
- Implementation of NATv2 ☆23 · Updated 4 years ago
- Directed masked autoencoders ☆14 · Updated this week
- An open-source implementation of CLIP ☆33 · Updated 3 years ago
- Implementation of Kronecker Attention in PyTorch ☆19 · Updated 5 years ago
- Official PyTorch implementation of the paper 'SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients' ☆17 · Updated 4 years ago
- Command-line tool for downloading and extending the RedCaps dataset ☆50 · Updated 2 years ago
- Source code and pre-trained/fine-tuned checkpoints for the NAACL 2021 paper LightningDOT ☆72 · Updated 3 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 5 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in PyTorch ☆54 · Updated 4 years ago
- ☆27 · Updated 4 years ago
- Large-scale BERT distillation ☆33 · Updated 2 years ago
- Implementation of the retriever distillation procedure outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 5 years ago
- Repository for the paper "Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning" ☆36 · Updated 2 years ago
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · Updated 2 years ago