simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗, letting you quickly train your T5 models.
☆400 · May 19, 2023 · Updated 2 years ago
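A minimal quick-start sketch of that workflow is below. The package name `simplet5`, the `from_pretrained`/`train`/`predict` method names, and the `source_text`/`target_text` DataFrame columns follow simpleT5's published quick-start, but treat them as assumptions and verify against the current README; T5 itself expects a task prefix (e.g. `summarize:`) in the source text.

```python
# Hedged sketch, not a guaranteed current API: assumes simpleT5's documented
# from_pretrained / train / predict interface and pandas DataFrames with
# "source_text" and "target_text" columns.
import pandas as pd
from simplet5 import SimpleT5

# Toy summarization data; note the "summarize:" task prefix T5 expects.
train_df = pd.DataFrame({
    "source_text": ["summarize: simpleT5 wraps PyTorch Lightning and Transformers."],
    "target_text": ["simpleT5 simplifies T5 training."],
})
eval_df = train_df.copy()

model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")  # assumed signature
model.train(train_df=train_df, eval_df=eval_df,
            source_max_token_len=128, target_max_token_len=50,
            batch_size=8, max_epochs=1, use_gpu=False)
print(model.predict("summarize: simpleT5 wraps PyTorch Lightning and Transformers."))
```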
Alternatives and similar repositories for simpleT5
Users interested in simpleT5 are comparing it to the libraries listed below
- ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x. ☆589 · Apr 24, 2023 · Updated 2 years ago
- A collection of scripts to preprocess ASR datasets and finetune language-specific Wav2Vec2 XLSR models ☆30 · Apr 21, 2021 · Updated 4 years ago
- Fine tune a T5 transformer model using PyTorch & Transformers🤗 ☆220 · Feb 10, 2021 · Updated 5 years ago
- Code for EMNLP 2021 paper: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting ☆17 · Nov 30, 2021 · Updated 4 years ago
- Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conve… ☆4,231 · Aug 25, 2025 · Updated 6 months ago
- Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive… ☆439 · May 26, 2025 · Updated 9 months ago
- Keywords to Sentences ☆452 · Aug 15, 2023 · Updated 2 years ago
- Convenient Text-to-Text Training for Transformers ☆19 · Dec 10, 2021 · Updated 4 years ago
- Official code and data repository for our EMNLP 2020 long paper "Reformulating Unsupervised Style Transfer as Paraphrase Generation" (htt… ☆239 · Jun 13, 2022 · Updated 3 years ago
- [NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation ☆475 · Mar 7, 2024 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,688 · Oct 23, 2024 · Updated last year
- Creative Instructions Project ☆11 · Sep 4, 2023 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆105 · May 20, 2022 · Updated 3 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆465 · Nov 5, 2022 · Updated 3 years ago
- A Neural Language Style Transfer framework to transfer natural language text smoothly between fine-grained language styles like formal/ca… ☆493 · Dec 12, 2023 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆137 · Aug 2, 2023 · Updated 2 years ago
- A framework for detecting, highlighting and correcting grammatical errors on natural language text. Created by Prithiviraj Damodaran. Ope… ☆1,573 · Feb 15, 2023 · Updated 3 years ago
- Efficient few-shot learning with Sentence Transformers ☆2,688 · Dec 11, 2025 · Updated 2 months ago
- Neural question generation using transformers ☆1,143 · Apr 5, 2024 · Updated last year
- ☆10 · Mar 29, 2021 · Updated 4 years ago
- Computationally Modelling Resisting Strategies in Persuasive Conversations ☆12 · Feb 6, 2022 · Updated 4 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ☆168 · Oct 7, 2022 · Updated 3 years ago
- Alternate Implementation for Zero Shot Text Classification: Instead of reframing NLI/XNLI, this reframes the text backbone of CLIP models… ☆37 · Apr 5, 2022 · Updated 3 years ago
- Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models. ☆542 · Jan 10, 2026 · Updated last month
- Text2Text Language Modeling Toolkit ☆304 · Jan 14, 2025 · Updated last year
- Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled corpus and yields massive improvement: "GPL: … ☆340 · Jul 6, 2023 · Updated 2 years ago
- ☆12 · Apr 25, 2022 · Updated 3 years ago
- Covid Doctor chatbot using DialoGPT ☆13 · May 25, 2022 · Updated 3 years ago
- Minimal keyword extraction with BERT ☆4,116 · Feb 3, 2026 · Updated last month
- ☆2,946 · Jan 15, 2026 · Updated last month
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,489 · Jan 14, 2026 · Updated last month
- 🤖📇 Handle multiple NLP tasks in one pipeline ☆57 · Sep 18, 2025 · Updated 5 months ago
- State-of-the-Art Text Embeddings ☆18,323 · Updated this week
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,017 · Aug 21, 2024 · Updated last year
- Train large COMET (T5-3B/GPT2-XL) with small memory (on 11GB memory GPUs like 1080/2080) using DeepSpeed. ☆14 · Jan 23, 2022 · Updated 4 years ago
- ☆23 · Dec 8, 2025 · Updated 2 months ago
- Unsupervised spoken sentence embeddings ☆14 · Dec 14, 2022 · Updated 3 years ago
- Data augmentation for NLP ☆4,645 · Jun 24, 2024 · Updated last year
- This is a repository of the study performed under the Adversarial Paraphrasing Task (APT). ☆25 · Oct 5, 2021 · Updated 4 years ago