guolinke / fused_ops
☆10 · Updated 3 years ago
Alternatives and similar repositories for fused_ops
Users interested in fused_ops are comparing it to the libraries listed below.
- ☆20 · Updated 4 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆113 · Updated 6 years ago
- ☆37 · Updated 2 years ago
- A logging tool for deep learning. ☆60 · Updated 6 months ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- ☆29 · Updated 3 years ago
- Unofficially implements https://arxiv.org/abs/2112.05682 to get linear memory cost on attention for PyTorch ☆12 · Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- Torch Distributed Experimental ☆117 · Updated last year
- Distributed preprocessing and data loading for language datasets ☆39 · Updated last year
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆96 · Updated 4 years ago
- Python pdb for multiple processes ☆59 · Updated 4 months ago
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated last year
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … ☆36 · Updated 4 years ago
- CLASP - Contrastive Language-Aminoacid Sequence Pretraining ☆143 · Updated 4 years ago
- PyTorch Examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" (see the ReZero sketch after this list) ☆62 · Updated last year
- [KDD'22] Learned Token Pruning for Transformers ☆100 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (see the ReLA sketch after this list) ☆49 · Updated 3 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆33 · Updated 2 years ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆30 · Updated last year
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 4 years ago
- lanmt ebm ☆12 · Updated 5 years ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆49 · Updated 2 years ago
- Unofficial PyTorch implementation of "Step-unrolled Denoising Autoencoders for Text Generation" ☆24 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- Code for the paper "Query-Key Normalization for Transformers" (see the QK-Norm sketch after this list) ☆48 · Updated 4 years ago
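
A few of the techniques indexed above are simple enough to sketch in a handful of lines. First, the core idea of "ReZero is All You Need" (the PyTorch Examples entry): every residual branch is gated by a learned scalar initialized to zero, so a deep stack starts out as the identity and the gates grow during training. A minimal PyTorch sketch of that idea; `ReZeroBlock` is an illustrative name, not the repo's actual API:

```python
import torch
import torch.nn as nn

class ReZeroBlock(nn.Module):
    """Residual block whose branch is gated by a scalar initialized to zero,
    so the block is the identity at initialization (the ReZero trick)."""

    def __init__(self, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.alpha = nn.Parameter(torch.zeros(1))  # the ReZero gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.alpha * self.sublayer(x)

# Usage: wrap any residual branch, e.g. a feed-forward sublayer.
block = ReZeroBlock(nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64)))
y = block(torch.randn(8, 64))
```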
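Second, ReLA from https://arxiv.org/abs/2104.07012 (the Rectified Linear Attention entry) replaces the softmax over attention scores with a ReLU, leaving the weights sparse and unnormalized; the paper then re-scales the aggregated values with a norm layer to keep the output bounded. A simplified sketch, using plain LayerNorm as a stand-in for the paper's RMSNorm and omitting its gated variant:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def rectified_linear_attention(q, k, v, norm: nn.LayerNorm):
    """Scaled dot-product attention with ReLU in place of softmax.
    q, k, v: (batch, heads, seq, dim)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = F.relu(scores)   # sparse, unnormalized attention weights
    return norm(weights @ v)   # re-scale the aggregated values

b, h, n, d = 2, 4, 16, 32
q, k, v = (torch.randn(b, h, n, d) for _ in range(3))
out = rectified_linear_attention(q, k, v, nn.LayerNorm(d))
```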
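Finally, Query-Key Normalization (the last entry): queries and keys are L2-normalized along the head dimension, so their dot products become cosine similarities in [-1, 1], and the fixed 1/sqrt(d) scale is replaced by a learned temperature. A minimal sketch with illustrative names; the paper derives the temperature's initial value from the sequence length, whereas a constant is used here:

```python
import torch
import torch.nn.functional as F

def qk_norm_attention(q, k, v, g: torch.Tensor):
    """Attention over L2-normalized queries and keys, scaled by a
    learned temperature g instead of 1/sqrt(d).
    q, k, v: (batch, heads, seq, dim); g: learnable scalar."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    scores = g * (q @ k.transpose(-2, -1))  # cosine similarities, re-scaled
    return scores.softmax(dim=-1) @ v

q = k = v = torch.randn(2, 4, 16, 32)
out = qk_norm_attention(q, k, v, torch.nn.Parameter(torch.tensor(10.0)))
```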