yurakuratov / t5-experiments
Tools and scripts for experimenting with Transformers: BERT, T5...
☆60 · Updated last year
Alternatives and similar repositories for t5-experiments
Users interested in t5-experiments are comparing it to the libraries listed below.
- Seahorse is a dataset for multilingual, multi-faceted summarization evaluation. It consists of 96K summaries with human ratings along 6 q… ☆89 · Updated last year
- Helper scripts and notes that were used while porting various NLP models ☆48 · Updated 3 years ago
- ☆101 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- ☆78 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- ☆44 · Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 5 months ago
- Open-source library for few-shot NLP ☆78 · Updated 2 years ago
- Pretraining Efficiently on S2ORC! ☆173 · Updated last year
- ☆72 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- A library for computing diverse text characteristics and using them to analyze data sets and models with ease. ☆40 · Updated 3 years ago
- ☆76 · Updated last year
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆115 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆28 · Updated last year
- ☆39 · Updated last year
- Hierarchical Attention Transformers (HAT) ☆59 · Updated last year
- ☆65 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- Query-focused summarization data ☆42 · Updated 2 years ago
- A Toolkit for Distributional Control of Generative Models ☆73 · Updated 4 months ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago