guolinke / fused_ops
☆10 · Updated 3 years ago
Alternatives and similar repositories for fused_ops
Users interested in fused_ops are comparing it to the libraries listed below.
- ☆20 · Updated 4 years ago
- ☆37 · Updated 2 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆49 · Updated 4 years ago
- A dual learning toolkit developed by Microsoft Research ☆73 · Updated 2 years ago
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission. ☆46 · Updated 5 years ago
- The official implementation of You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu… ☆48 · Updated 4 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 4 years ago
- ☆12 · Updated 2 years ago
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆33 · Updated 2 years ago
- Implementation of the Triangle Multiplicative module, used in AlphaFold2 as an efficient way to mix rows or columns of a 2D feature map, … ☆39 · Updated 4 years ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- ☆11 · Updated last year
- lanmt ebm ☆12 · Updated 5 years ago
- Unofficially implements https://arxiv.org/abs/2112.05682 to get linear memory cost for attention in PyTorch ☆12 · Updated 3 years ago
- A logging tool for deep learning. ☆63 · Updated 8 months ago
- Torch Distributed Experimental ☆117 · Updated last year
- Code for DATA: Differentiable ArchiTecture Approximation. ☆11 · Updated 4 years ago
- Implementation of Lie Transformer, Equivariant Self-Attention, in PyTorch ☆96 · Updated 4 years ago
- A Toolkit for Training, Tracking, Saving Models and Syncing Results ☆62 · Updated 5 years ago
- ☆12 · Updated 6 months ago
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- Pre-trained Language Model for Scientific Text ☆46 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆49 · Updated 2 years ago
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated last year
- Parameter-Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- ☆32 · Updated last year