leogao2 / lm_dataformat
☆76 · Updated 11 months ago
Related projects
Alternatives and complementary repositories for lm_dataformat
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. (☆92 · Updated last year)
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models. (☆76 · Updated 11 months ago)
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗 Transformers. (☆47 · Updated last year)
- Pipeline for pulling and processing online language model pretraining data from the web. (☆174 · Updated last year)
- A framework for few-shot evaluation of autoregressive language models. (☆101 · Updated last year)
- Experiments with generating open-source language model assistants. (☆97 · Updated last year)
- Tools for managing datasets for governance and training. (☆77 · Updated last week)
- Techniques for running BLOOM inference in parallel. (☆37 · Updated 2 years ago)
- ARCHIVED; see https://docs.adapterhub.ml/huggingface_hub.html. 🔌 A central repository collecting pre-trained adapter modules. (☆68 · Updated 5 months ago)
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP. (☆58 · Updated 2 years ago)
- Transformers at any scale. (☆41 · Updated 9 months ago)
- Open Instruction Generalist: an assistant trained on massive synthetic instructions to perform many millions of tasks. (☆206 · Updated 9 months ago)
- Dataset collection and preprocessing framework for NLP extreme multitask learning. (☆147 · Updated 3 months ago)
- The official code of the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences". (☆68 · Updated 9 months ago)
- 🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. (☆13 · Updated this week)
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…). (☆136 · Updated last year)
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?". (☆56 · Updated last year)
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188). (☆60 · Updated last year)
- The official repository for the paper "Efficient Long-Text Understanding Using Short-Text Models" (Ivgi et al., 2022). (☆68 · Updated last year)
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners. (☆111 · Updated last month)
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following. (☆79 · Updated last month)
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch. (☆75 · Updated 3 years ago)