guolinke / fused_ops
☆10 · Updated 2 years ago
Alternatives and similar repositories for fused_ops
Users interested in fused_ops are comparing it to the libraries listed below.
- ☆20 · Updated 4 years ago
- Python pdb for multiple processes · ☆49 · Updated last month
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models · ☆21 · Updated 2 years ago
- ☆11 · Updated last year
- Unofficially implements https://arxiv.org/abs/2112.05682 to get linear memory cost on attention for PyTorch · ☆12 · Updated 3 years ago
- ☆12 · Updated 2 years ago
- lanmt ebm · ☆12 · Updated 5 years ago
- Contextual Position Encoding but with some custom CUDA kernels, https://arxiv.org/abs/2405.18719 · ☆22 · Updated last year
- Source code for "Efficient Training of BERT by Progressively Stacking" · ☆112 · Updated 5 years ago
- A logging tool for deep learning · ☆59 · Updated 3 months ago
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … · ☆32 · Updated 3 years ago
- A high-performance system for customized-precision distributed deep learning · ☆12 · Updated 4 years ago
- BANG is a new pretraining model to Bridge the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR generat… · ☆28 · Updated 3 years ago
- Odysseus: Playground of LLM Sequence Parallelism · ☆70 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 · ☆49 · Updated 3 years ago
- ☆22 · Updated last year
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" · ☆89 · Updated 4 years ago
- Implementation of Denoising Diffusion for protein design, but using the new Equiformer (successor to SE3 Transformers) with some addition… · ☆57 · Updated 2 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers", https://arxiv.org/abs/2003.07845 · ☆120 · Updated 4 years ago
- Transformation library for LightGBM · ☆33 · Updated last year
- ☆37 · Updated 2 years ago
- Distributed preprocessing and data loading for language datasets · ☆39 · Updated last year
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction · ☆32 · Updated 3 years ago
- This package implements THOR: Transformer with Stochastic Experts · ☆65 · Updated 3 years ago
- Using FlexAttention to compute attention with different masking patterns · ☆44 · Updated 9 months ago
- [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion · ☆40 · Updated 4 years ago
- The implementation of multi-branch attentive Transformer (MAT) · ☆33 · Updated 4 years ago
- Code for EMNLP 2020 paper CoDIR · ☆41 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- ☆31 · Updated last year