microsoft/fastformers
FastFormers: highly efficient transformer models for NLU
☆703 · Updated last year
Alternatives and similar repositories for fastformers:
Users interested in fastformers are comparing it to the libraries listed below.
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆471 · Updated 2 years ago
- An efficient implementation of popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆429 · Updated 2 years ago
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆361 · Updated 2 years ago
- A Visual Analysis Tool to Explore Learned Representations in Transformer Models ☆587 · Updated 11 months ago
- Repository containing code for the paper "How to Train BERT with an Academic Budget" ☆310 · Updated last year
- XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 ty… ☆637 · Updated 2 years ago
- Fast BPE ☆659 · Updated 7 months ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆612 · Updated 2 years ago
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,109 · Updated 2 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 2 years ago
- Library for 8-bit optimizers and quantization routines ☆717 · Updated 2 years ago
- Prune a model while fine-tuning or training ☆397 · Updated 2 years ago
- An Analysis Toolkit for Natural Language Generation (Translation, Captioning, Summarization, etc.) ☆444 · Updated 3 months ago
- Transformer training code for sequential tasks ☆609 · Updated 3 years ago
- Simple XLNet implementation with a PyTorch wrapper ☆580 · Updated 5 years ago
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆310 · Updated last year
- Fast block-sparse matrices for PyTorch ☆546 · Updated 4 years ago
- ☆489 · Updated 11 months ago
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,119 · Updated 2 years ago
- Transformers for Longer Sequences ☆586 · Updated 2 years ago
- Summarization, translation, sentiment analysis, text generation, and more at blazing speed using a T5 version implemented in ONNX ☆253 · Updated 2 years ago
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- Pretrain and fine-tune ELECTRA with fastai and Hugging Face (results of the paper replicated!) ☆325 · Updated last year
- Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models. ☆1,135 · Updated 11 months ago
- ⚡ Boost inference speed of T5 models by 5x and reduce model size by 3x ☆572 · Updated last year
- Repository of code for the tutorial on Transfer Learning in NLP held at NAACL 2019 in Minneapolis, MN, USA ☆720 · Updated 5 years ago
- jiant is an NLP toolkit ☆1,657 · Updated last year
- Pre-trained subword embeddings in 275 languages, based on Byte-Pair Encoding (BPE) ☆1,193 · Updated 3 months ago
- Code for using and evaluating SpanBERT ☆894 · Updated last year
- Super-easy library for BERT-based NLP models ☆1,877 · Updated 5 months ago