guolinke / fused_ops
☆10 · Updated 3 years ago
Alternatives and similar repositories for fused_ops
Users interested in fused_ops are comparing it to the libraries listed below.
- ☆20 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- Pre-trained Language Model for Scientific Text ☆45 · Updated last year
- A dual learning toolkit developed by Microsoft Research ☆73 · Updated 2 years ago
- Unofficially implements https://arxiv.org/abs/2112.05682 to get linear memory cost on attention for PyTorch ☆12 · Updated 4 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 4 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 5 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 4 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆113 · Updated 6 years ago
- A Toolkit for Training, Tracking, Saving Models and Syncing Results ☆62 · Updated 5 years ago
- A Transformer-based single-model, multi-scale VAE ☆58 · Updated 4 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- This repository contains the code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆33 · Updated 2 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆51 · Updated 4 years ago
- Block-sparse movement pruning ☆83 · Updated 5 years ago
- ☆37 · Updated 2 years ago
- Efficient Neural Interaction Functions Search for Collaborative Filtering ☆18 · Updated 5 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆48 · Updated 3 years ago
- Triton version of GQA flash attention, based on the tutorial ☆12 · Updated last year
- Source code repo for the paper "TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance Generation" ☆10 · Updated 2 years ago
- Transformers at any scale ☆42 · Updated 2 years ago
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 3 years ago
- FairSeq repo with the Apollo optimizer ☆114 · Updated 2 years ago
- Implementation of COCO-LM (Correcting and Contrasting Text Sequences for Language Model Pretraining) in PyTorch ☆46 · Updated 4 years ago
- The official implementation of "You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu…" ☆48 · Updated 4 years ago
- The implementation of the multi-branch attentive Transformer (MAT) ☆33 · Updated 5 years ago
- A logging tool for deep learning ☆65 · Updated 10 months ago
- lanmt ebm ☆12 · Updated 5 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆116 · Updated 3 years ago