Knowledgator / TurboT5
A truly flash T5 implementation!
☆61 · Updated 8 months ago
Alternatives and similar repositories for TurboT5:
Users interested in TurboT5 are comparing it to the repositories listed below.
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆82 · Updated 2 weeks ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Code for the NeurIPS LLM Efficiency Challenge ☆55 · Updated 10 months ago
- Code for Zero-Shot Tokenizer Transfer ☆120 · Updated last month
- Utilities for Training Very Large Models ☆57 · Updated 4 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆56 · Updated 3 weeks ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 4 months ago
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆97 · Updated 11 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆67 · Updated 3 months ago
- ☆88 · Updated 8 months ago
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆80 · Updated 7 months ago
- Supercharge huggingface transformers with model parallelism. ☆76 · Updated 4 months ago
- XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval ☆44 · Updated 7 months ago
- ☆47 · Updated 5 months ago
- ☆41 · Updated 2 weeks ago
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- ☆76 · Updated last year
- ☆31 · Updated 7 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆25 · Updated 9 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆53 · Updated this week
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆36 · Updated 4 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆59 · Updated 6 months ago
- Index of URLs to PDF files from all over the internet, plus scripts ☆21 · Updated last year
- Some common Huggingface transformers in maximal update parametrization (µP) ☆78 · Updated 2 years ago
- MEXMA: Token-level objectives improve sentence representations ☆40 · Updated last month
- ☆43 · Updated 3 months ago
- Aioli: A unified optimization framework for language model data mixing ☆20 · Updated 3 weeks ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆156 · Updated this week
- ☆72 · Updated 9 months ago
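The sparse-attention entry in the list above refers to attention patterns where each token attends to only a subset of prior positions. A minimal sketch of one such pattern, a causal sliding-window mask, is shown below; this is an illustrative example, not code from that repository:

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean causal mask: token i attends to itself and the
    previous `window - 1` tokens only (a common sparse pattern)."""
    i = np.arange(seq_len)[:, None]  # query positions (rows)
    j = np.arange(seq_len)[None, :]  # key positions (columns)
    return (j <= i) & (i - j < window)

mask = sliding_window_mask(6, 3)
print(mask.astype(int))
```

Per-row memory and compute then scale with `window` rather than with the full sequence length, which is the efficiency argument behind these patterns.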
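The BM25 entry above implements Okapi BM25 ranking with sparse tensors in PyTorch. As a plain-Python sketch of the scoring formula itself (not that repository's sparse-tensor implementation; document names and parameters here are illustrative):

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document against the query."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in set(query)}
    scores = []
    for d in docs:
        s = 0.0
        for t in query:
            f = d.count(t)  # term frequency in this document
            if f == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    ["flash", "attention", "for", "t5"],
    ["sparse", "attention", "patterns"],
    ["bm25", "ranking", "with", "sparse", "tensors"],
]
print(bm25_scores(["sparse", "attention"], docs))
```

The second document matches both query terms and is also the shortest, so it receives the highest score; `k1` controls term-frequency saturation and `b` controls length normalization.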