leogao2 / lm_dataformat
☆77 · Updated last year
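lm_dataformat wraps LM training text in compressed jsonl archives behind a small reader/writer interface. A minimal usage sketch, assuming the `Archive`/`Reader` API (`add_data`, `commit`, `stream_data`) described in the project's README; the paths below are illustrative:

```python
# Minimal sketch, assuming lm_dataformat's Archive/Reader API; paths are illustrative.
import lm_dataformat as lmd

# Write documents (plus optional metadata) into a compressed archive directory.
archive = lmd.Archive('out_dir')
archive.add_data('example document text', meta={'source': 'example'})
archive.commit()

# Stream the documents back out for training or inspection.
reader = lmd.Reader('out_dir')
for doc in reader.stream_data():
    print(doc[:80])
```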
Alternatives and similar repositories for lm_dataformat:
Users interested in lm_dataformat are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. · ☆93 · Updated 2 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch · ☆75 · Updated 4 years ago
- Transformers at any scale · ☆41 · Updated last year
- Techniques used to run BLOOM inference in parallel · ☆37 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) · ☆136 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. · ☆102 · Updated last year
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale · ☆154 · Updated last year
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models · ☆81 · Updated last year
- Tools for managing datasets for governance and training. · ☆82 · Updated last month
- The official code of the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences" · ☆69 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web · ☆175 · Updated last year
- A diff tool for language models · ☆42 · Updated last year
- Experiments with generating open-source language model assistants · ☆97 · Updated last year
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. · ☆82 · Updated 2 years ago
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" · ☆202 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP · ☆58 · Updated 2 years ago
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || 🔌 A central repository collecting pre-trained adapter modules · ☆68 · Updated 9 months ago
- This project studies the performance and robustness of language models and task-adaptation methods. · ☆145 · Updated 9 months ago
- Repo to hold code and track issues for the collection of permissively licensed data · ☆23 · Updated 2 months ago
- A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations · ☆55 · Updated 2 years ago
- Helper scripts and notes that were used while porting various NLP models · ☆45 · Updated 2 years ago