Fast & Simple repository for pre-training and fine-tuning T5-style models
☆ 1,017 · Updated Aug 21, 2024
Alternatives and similar repositories for nanoT5
Users interested in nanoT5 are comparing it to the libraries listed below.
- Cramming the training of a (BERT-type) language model into limited compute. ☆ 1,360 · Updated Jun 13, 2024
- Efficient Transformers with Dynamic Token Pooling ☆ 68 · Updated May 20, 2023
- ☆ 145 · Updated Mar 31, 2023
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Ad… ☆ 6,083 · Updated Jul 1, 2025
- Experiments for efforts to train a new and improved T5 ☆ 76 · Updated Apr 15, 2024
- A tiny library for coding with large language models. ☆ 1,233 · Updated Jul 10, 2024
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆ 91 · Updated Jul 17, 2025
- Minimalistic large language model 3D-parallelism training ☆ 2,626 · Updated Feb 19, 2026
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆ 97 · Updated Feb 9, 2023
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆ 111 · Updated Sep 10, 2023
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆ 115 · Updated Oct 30, 2025
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆ 81 · Updated Aug 30, 2023
- A modular RL library to fine-tune language models to human preferences ☆ 2,383 · Updated Mar 1, 2024
- 🤖 A PyTorch library of curated Transformer models and their composable components ☆ 895 · Updated Apr 17, 2024
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆ 14,431 · Updated Mar 5, 2026
- Foundation Architecture for (M)LLMs ☆ 3,134 · Updated Apr 11, 2024
- Robust recipes to align language models with human and AI preferences ☆ 5,535 · Updated Sep 8, 2025
- Accessible large language models via k-bit quantization for PyTorch. ☆ 8,078 · Updated this week
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆ 988 · Updated Jan 30, 2024
- Official repository of Pretraining Without Attention (BiGS). BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆ 118 · Updated Mar 16, 2024
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆ 4,742 · Updated Jan 8, 2024
- The hub for EleutherAI's work on interpretability and learning dynamics ☆ 2,751 · Updated Nov 15, 2025
- Train transformer language models with reinforcement learning. ☆ 17,781 · Updated this week
- What would you do with 1000 H100s... ☆ 1,166 · Updated Jan 10, 2024
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating and serving LLMs in JAX/Fl… ☆ 2,521 · Updated Aug 13, 2024
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs. ☆ 4,752 · Updated Jul 18, 2025
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆ 2,965 · Updated Mar 16, 2026
- Scaling Data-Constrained Language Models ☆ 342 · Updated Jun 28, 2025
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆ 6,186 · Updated Aug 22, 2025
- ☆ 317 · Updated Jun 21, 2024
- A repository for research on medium-sized language models. ☆ 536 · Updated Jun 6, 2025
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆ 8,928 · Updated May 3, 2024
- An open collection of implementation tips, tricks and resources for training large language models ☆ 497 · Updated Mar 8, 2023
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆ 354 · Updated Jul 29, 2024
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆ 13,256 · Updated Mar 22, 2026
- Salesforce open-source LLMs with 8k sequence length. ☆ 726 · Updated Jan 31, 2025
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆ 619 · Updated Jul 2, 2024
- Efficient few-shot learning with Sentence Transformers ☆ 2,703 · Updated Dec 11, 2025
- Running large language models on a single GPU for throughput-oriented scenarios. ☆ 9,380 · Updated Oct 28, 2024