huggingface / olm-training
Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should also work with any Hugging Face text dataset.
☆96 · Updated 3 years ago
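For context, the kind of run this repo wraps looks roughly like the following: a minimal masked-language-model pretraining sketch with 🤗 Transformers and 🤗 Datasets. The checkpoint and dataset names ("roberta-base", "wikitext") are placeholder assumptions for illustration, not the repo's actual configuration or entry point.

```python
# Minimal MLM pretraining sketch (illustrative only -- not olm-training's
# actual entry point; model and dataset names are placeholders).
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Any Hugging Face text dataset with a "text" column should work here.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
dataset = dataset.filter(lambda ex: len(ex["text"]) > 0)  # drop empty lines

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# The collator applies dynamic masking (15% of tokens here) on the fly.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-out", per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```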
Alternatives and similar repositories for olm-training
Users interested in olm-training are comparing it to the libraries listed below.
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP (☆58 · Updated 3 years ago)
- Pipeline for pulling and processing online language model pretraining data from the web (☆177 · Updated 2 years ago)
- Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) (☆61 · Updated 2 years ago)
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen: Improving Text Generation with Large Ranking Models" (https://arx…) (☆138 · Updated 2 years ago)
- Embedding Recycling for Language Models (☆38 · Updated 2 years ago)
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale (☆157 · Updated 2 years ago)
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 (☆105 · Updated 3 years ago)
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language (☆74 · Updated last year)
- Helper scripts and notes that were used while porting various NLP models (☆49 · Updated 3 years ago)
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX (☆81 · Updated 3 years ago)
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… (☆35 · Updated 2 years ago)
- Minimum Bayes Risk Decoding for Hugging Face Transformers (☆60 · Updated last year)
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval (☆29 · Updated 3 years ago)
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…) (☆28 · Updated last year)
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile (☆116 · Updated 2 years ago)
- Plug-and-play Search Interfaces with Pyserini and Hugging Face (☆32 · Updated 2 years ago)
- Experiments with generating open-source language model assistants (☆97 · Updated 2 years ago)
- Ensembling Hugging Face transformers made easy (☆61 · Updated 3 years ago); a minimal ensembling sketch follows this list
- A fast implementation of T5/UL2 in PyTorch using Flash Attention (☆113 · Updated 3 months ago)
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" (☆17 · Updated 3 years ago)
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings (☆75 · Updated last year)
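As a rough illustration of the idea behind the ensembling entry above (not that library's actual API), here is a minimal sketch that averages the class probabilities of two sentiment classifiers. The checkpoint names are placeholder assumptions, and the sketch assumes both models use the same label order.

```python
# Minimal sketch of ensembling two Hugging Face classifiers by averaging
# their softmax outputs. Illustrative only -- not the listed library's API;
# checkpoint names are placeholders, and both models are assumed to share
# the same label-id ordering.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

names = [
    "distilbert-base-uncased-finetuned-sst-2-english",
    "textattack/roberta-base-SST-2",
]
models = [AutoModelForSequenceClassification.from_pretrained(n).eval() for n in names]
tokenizers = [AutoTokenizer.from_pretrained(n) for n in names]

def ensemble_predict(text: str) -> int:
    """Average each model's class probabilities and return the argmax label."""
    probs = []
    with torch.no_grad():
        for model, tokenizer in zip(models, tokenizers):
            inputs = tokenizer(text, return_tensors="pt")
            probs.append(model(**inputs).logits.softmax(dim=-1))
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))

print(ensemble_predict("A surprisingly solid little library."))
```

Averaging probabilities rather than logits keeps the models' outputs on a comparable scale when their logit magnitudes differ.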