Ki6an / fastT5
⚡ Boost the inference speed of T5 models by 5x and reduce the model size by 3x.
☆579 · Updated last year
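fastT5 works by exporting the T5 encoder and decoder to ONNX and quantizing them, then wrapping the result back into a `generate()`-compatible model. A minimal sketch of that workflow, following the pattern documented in the fastT5 README (the model name and input text are placeholders; exact keyword arguments may vary between versions):

```python
# Export a T5 checkpoint to quantized ONNX and run generation with it.
# Follows the usage shown in the fastT5 README; verify arguments against
# the version you install.
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model_name = "t5-small"  # placeholder checkpoint

# Exports the encoder/decoder to ONNX and (per the README) quantizes them by
# default, which is where the claimed speed-up and size reduction come from.
model = export_and_get_onnx_model(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "translate English to French: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")

# The wrapped ONNX model keeps the familiar generate() interface.
tokens = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=2,
)
print(tokenizer.decode(tokens.squeeze(), skip_special_tokens=True))
```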
Alternatives and similar repositories for fastT5:
Users interested in fastT5 are comparing it to the libraries listed below.
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆432 · Updated 2 years ago
- ☆501 · Updated last year
- FastFormers - highly efficient transformer models for NLU ☆705 · Updated 2 weeks ago
- Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled corpus and yields massive improvement: "GPL: … ☆331 · Updated last year
- Autoregressive Entity Retrieval ☆783 · Updated last year
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆785 · Updated last year
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.o… ☆603 · Updated 2 years ago
- NeuSpell: A Neural Spelling Correction Toolkit ☆692 · Updated last year
- Summarization, translation, sentiment-analysis, text-generation and more at blazing speed using a T5 version implemented in ONNX. ☆253 · Updated 2 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,683 · Updated 5 months ago
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ☆861 · Updated last year
- Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive… ☆430 · Updated last year
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆312 · Updated last year
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆327 · Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ☆437 · Updated last year
- XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 ty… ☆643 · Updated 2 years ago
- This dataset contains synthetic training data for grammatical error correction. The corpus is generated by corrupting clean sentences fro… ☆160 · Updated 6 months ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆154 · Updated last year
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ☆781 · Updated 10 months ago
- Language model fine-tuning on NER with an easy interface and cross-domain evaluation. "T-NER: An All-Round Python Library for Transformer… ☆386 · Updated last year
- BLEURT is a metric for Natural Language Generation based on transfer learning. ☆726 · Updated last year
- Fast Inference Solutions for BLOOM ☆561 · Updated 5 months ago
- Tools to download and cleanup Common Crawl data ☆995 · Updated last year
- simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗 and lets you quickly train your T5 models (a fine-tuning sketch follows after this list). ☆394 · Updated last year
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆362 · Updated 3 years ago
- ☆345 · Updated 3 years ago
- UnifiedQA: Crossing Format Boundaries With a Single QA System ☆432 · Updated 2 years ago
- An open collection of implementation tips, tricks and resources for training large language models ☆471 · Updated 2 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, an… ☆558 · Updated 3 years ago
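Of the T5-centric projects above, simpleT5 covers the training side rather than inference. A minimal fine-tuning sketch following the usage pattern in the simpleT5 README; the toy DataFrame, column names, and hyperparameters here are illustrative assumptions, so check the repository for the current API:

```python
# Toy fine-tuning run with simpleT5 (a PyTorch Lightning + Transformers wrapper).
# Based on the usage pattern in the simpleT5 README; the data and settings
# below are placeholders, not recommended values.
import pandas as pd
from simplet5 import SimpleT5

# simpleT5 expects DataFrames with "source_text" and "target_text" columns.
train_df = pd.DataFrame({
    "source_text": ["summarize: fastT5 exports T5 to ONNX and quantizes it for faster inference."],
    "target_text": ["fastT5 speeds up T5 inference."],
})
eval_df = train_df.copy()

model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")
model.train(
    train_df=train_df,
    eval_df=eval_df,
    source_max_token_len=128,
    target_max_token_len=64,
    batch_size=1,
    max_epochs=1,
    use_gpu=False,
)

# After training, the in-memory model can be queried directly.
print(model.predict("summarize: fastT5 exports T5 to ONNX and quantizes it for faster inference."))
```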