microsoft / fastformers
FastFormers - highly efficient transformer models for NLU
☆705 · Updated 3 months ago
Alternatives and similar repositories for fastformers
Users who are interested in fastformers are comparing it to the libraries listed below.
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆473 · Updated 3 years ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Updated 2 years ago
- Library for 8-bit optimizers and quantization routines. ☆715 · Updated 2 years ago
- Fast BPE ☆671 · Updated last year
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆362 · Updated 3 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆610 · Updated 2 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 3 years ago
- Fast Block Sparse Matrices for PyTorch ☆548 · Updated 4 years ago
- ⚡ Boost inference speed of T5 models by 5x and reduce the model size by 3x. ☆581 · Updated 2 years ago
- A Visual Analysis Tool to Explore Learned Representations in Transformer Models ☆595 · Updated last year
- Prune a model while finetuning or training. ☆403 · Updated 3 years ago
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,113 · Updated 3 years ago
- XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 ty… ☆644 · Updated 2 years ago
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ☆785 · Updated last year
- Code for using and evaluating SpanBERT. ☆898 · Updated last year
- Repository containing code for the paper "How to Train BERT with an Academic Budget" ☆313 · Updated last year
- An Analysis Toolkit for Natural Language Generation (Translation, Captioning, Summarization, etc.) ☆445 · Updated this week
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆790 · Updated 2 years ago
- jiant is an NLP toolkit ☆1,669 · Updated last year
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,116 · Updated 2 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆330 · Updated last year
- Transformer training code for sequential tasks ☆612 · Updated 3 years ago
- Simple XLNet implementation with PyTorch wrapper ☆581 · Updated 5 years ago
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆532 · Updated 3 years ago
- ☆510 · Updated last year
- Statistics and accepted paper lists of NLP conferences, with arXiv links ☆431 · Updated 3 years ago
- ⛵️ The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020). ☆313 · Updated 2 years ago
- Efficient, check-pointed data loading for deep learning with massive data sets. ☆208 · Updated 2 years ago
- Transformers for Longer Sequences ☆615 · Updated 2 years ago
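Many of the repositories above share FastFormers' goal of cheaper transformer inference through distillation, pruning, or quantization. As a minimal sketch of the quantization side of that theme (not code from any repository listed here), the snippet below applies PyTorch's post-training dynamic INT8 quantization to a Hugging Face classifier; it assumes `torch` and `transformers` are installed, and the checkpoint name is an arbitrary public model chosen only for illustration.

```python
# Minimal sketch: dynamic INT8 quantization of a transformer classifier for
# CPU inference. Illustrative only; not tied to any specific repo above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Arbitrary public checkpoint used purely as an example.
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Replace every nn.Linear with a dynamically quantized INT8 equivalent.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("FastFormers makes transformer inference cheap.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class id
```

Dynamic quantization only rewrites the linear layers at load time and needs no calibration data; libraries in this list go further with knowledge distillation, structured pruning, sparse kernels, or 8-bit optimizer states.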