Ki6an / fastT5
⚡ Boost the inference speed of T5 models by 5x and reduce the model size by 3x.
☆572 · Updated last year
Alternatives and similar repositories for fastT5:
Users interested in fastT5 are comparing it to the libraries listed below.
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆429 · Updated 2 years ago
- Summarization, translation, sentiment analysis, text generation and more at blazing speed using a T5 version implemented in ONNX. ☆253 · Updated 2 years ago
- ☆490 · Updated 11 months ago
- NeuSpell: A Neural Spelling Correction Toolkit ☆684 · Updated last year
- FastFormers: highly efficient transformer models for NLU ☆703 · Updated last year
- Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive… ☆428 · Updated last year
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP 2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.o… ☆604 · Updated 2 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆781 · Updated last year
- Autoregressive Entity Retrieval ☆777 · Updated last year
- simpleT5 is built on top of PyTorch Lightning ⚡️ and Transformers 🤗, letting you quickly train your T5 models. ☆390 · Updated last year
- Tools to download and clean up Common Crawl data ☆981 · Updated last year
- Powerful unsupervised domain adaptation method for dense retrieval. Requires only an unlabeled corpus and yields massive improvements: "GPL: … ☆328 · Updated last year
- Compute Sentence Embeddings Fast! ☆618 · Updated last year
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆187 · Updated 3 years ago
- Repository containing the code for the paper "How to Train BERT with an Academic Budget" ☆310 · Updated last year
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ☆779 · Updated 8 months ago
- ICML 2022: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework ☆258 · Updated last year
- Efficient, scalable, and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,673 · Updated 3 months ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆471 · Updated 2 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆325 · Updated last year
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆530 · Updated 3 years ago
- Language model fine-tuning on NER with an easy interface and cross-domain evaluation. "T-NER: An All-Round Python Library for Transformer… ☆383 · Updated last year
- docTTTTTquery document expansion model ☆360 · Updated last year
- XtremeDistil: a framework for distilling/compressing massive multilingual neural network models into tiny, efficient models for AI at scale ☆154 · Updated last year
- Efficient Attention for Long Sequence Processing ☆91 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆857 · Updated last year
- Fast, non-autoregressive grammatical error correction using BERT. Code and pre-trained models for the paper "Parallel Iterative Edit Models … ☆228 · Updated last year
- XTREME is a benchmark for evaluating the cross-lingual generalization ability of pre-trained multilingual models that covers 40 ty… ☆637 · Updated 2 years ago
- ☆1,262 · Updated 2 years ago
- Interpretable Evaluation for AI Systems ☆361 · Updated last year