guolinke / fused_ops
☆10 · Updated 3 years ago
Alternatives and similar repositories for fused_ops
Users interested in fused_ops are comparing it to the libraries listed below.
- ☆20 · Updated 4 years ago
- Python pdb for multiple processes ☆55 · Updated 3 months ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- ☆37 · Updated 2 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆94 · Updated 4 years ago
- A dual learning toolkit developed by Microsoft Research ☆71 · Updated 2 years ago
- Code for the paper "Query-Key Normalization for Transformers" (see the sketch after this list) ☆47 · Updated 4 years ago
- A logging tool for deep learning. ☆60 · Updated 5 months ago
- Efficient Neural Interaction Functions Search for Collaborative Filtering ☆18 · Updated 5 years ago
- A Transformer-based single-model, multi-scale VAE model ☆57 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- lanmt ebm ☆12 · Updated 5 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- CLASP - Contrastive Language-Aminoacid Sequence Pretraining ☆143 · Updated 3 years ago
- A Toolkit for Training, Tracking, Saving Models and Syncing Results ☆62 · Updated 5 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- ☆29 · Updated 2 years ago
- ☆97 · Updated 2 years ago
- A template for deep learning projects. ☆16 · Updated 4 months ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆148 · Updated 6 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆113 · Updated 6 years ago
- Pre-trained Language Model for Scientific Text ☆46 · Updated last year
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … ☆35 · Updated 4 years ago
- Unofficial PyTorch implementation of https://arxiv.org/abs/2112.05682 for linear memory cost in attention (see the sketch after this list) ☆12 · Updated 3 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
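
For context on the "Query-Key Normalization for Transformers" entry above, here is a minimal, hypothetical sketch of the idea (not the repo's actual code): queries and keys are L2-normalized so the attention logits become cosine similarities, and a learned scale `g` replaces the usual 1/sqrt(d) factor.

```python
# Hypothetical QKNorm-style attention sketch (PyTorch); not the repo's code.
import torch
import torch.nn.functional as F

def qknorm_attention(q, k, v, g):
    """q, k, v: (batch, heads, seq, dim); g: learned scalar scale."""
    # L2-normalize queries and keys so their dot product is a cosine similarity.
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # A learned temperature g replaces the usual 1/sqrt(dim) scaling.
    scores = g * torch.matmul(q, k.transpose(-2, -1))
    return torch.matmul(scores.softmax(dim=-1), v)

# Example shapes: inputs of (2, 8, 16, 64) produce an output of (2, 8, 16, 64).
# out = qknorm_attention(*(torch.randn(2, 8, 16, 64) for _ in range(3)), g=torch.tensor(10.0))
```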
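
The linear-memory attention entry points to https://arxiv.org/abs/2112.05682; below is a simplified, hypothetical query-chunking sketch that caps the size of the materialized score matrix. The paper's full algorithm additionally chunks keys/values with an online softmax to reach sub-linear memory in sequence length.

```python
# Hypothetical chunked-attention sketch (PyTorch); simplified relative to the paper.
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    """q, k, v: (batch, seq, dim). Scores are materialized one query chunk at a
    time, so peak score memory is O(chunk_size * seq) rather than O(seq * seq)."""
    scale = q.shape[-1] ** -0.5
    out = []
    for i in range(0, q.shape[1], chunk_size):
        q_chunk = q[:, i:i + chunk_size] * scale            # (batch, c, dim)
        scores = torch.einsum('bqd,bkd->bqk', q_chunk, k)   # (batch, c, seq)
        out.append(torch.einsum('bqk,bkd->bqd', scores.softmax(dim=-1), v))
    return torch.cat(out, dim=1)
```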