google-research-datasets / QAmeleon
QAmeleon introduces synthetic multilingual QA data generated with PaLM, a 540B-parameter large language model. The dataset was produced by prompt tuning PaLM with only five examples per language. We use the synthetic data to finetune downstream QA models, leading to improved accuracy compared to English-only and translation-based baselines.
☆34 · Updated 2 years ago
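As a rough illustration of the downstream finetuning step described above, here is a minimal sketch that trains a multilingual extractive QA model on QAmeleon-style synthetic data with Hugging Face Transformers. The file name `qameleon_synthetic.jsonl` and the field names (`context`, `question`, `answer_text`, `answer_start`) are assumptions for illustration only; check the repository for the actual data files and schema.

```python
# Sketch: finetune a multilingual extractive QA model on synthetic QA data.
# Assumed (not from the repo): a JSONL file with "context", "question",
# "answer_text", and "answer_start" fields, SQuAD-style.
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # any multilingual encoder should work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

raw = load_dataset("json", data_files="qameleon_synthetic.jsonl")["train"]

def preprocess(batch):
    # Tokenize question/context pairs, keeping character offsets so the
    # answer span can be mapped from characters to token positions.
    enc = tokenizer(batch["question"], batch["context"],
                    truncation="only_second", max_length=384,
                    return_offsets_mapping=True, padding="max_length")
    start_positions, end_positions = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer_start = batch["answer_start"][i]
        answer_end = answer_start + len(batch["answer_text"][i])
        seq_ids = enc.sequence_ids(i)
        start_tok = end_tok = 0  # answer not in window -> point at [CLS]
        for tok, (s, e) in enumerate(offsets):
            if seq_ids[tok] != 1:  # only look at context tokens
                continue
            if s <= answer_start < e:
                start_tok = tok
            if s < answer_end <= e:
                end_tok = tok
        start_positions.append(start_tok)
        end_positions.append(end_tok)
    enc["start_positions"] = start_positions
    enc["end_positions"] = end_positions
    enc.pop("offset_mapping")  # not needed for training
    return enc

train = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("qameleon-qa", per_device_train_batch_size=8,
                           num_train_epochs=2, learning_rate=3e-5),
    train_dataset=train,
)
trainer.train()
```

The same preprocessing applies to any multilingual encoder checkpoint (e.g. mBERT); only the answer-span mapping depends on the synthetic data's actual schema.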
Alternatives and similar repositories for QAmeleon
Users interested in QAmeleon are comparing it to the libraries listed below.
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- Embedding Recycling for Language Models ☆39 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated 8 months ago
- ☆44 · Updated 10 months ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- QLoRA for Masked Language Modeling ☆22 · Updated 2 years ago
- Genalog is an open-source, cross-platform Python package allowing generation of synthetic document images with custom degradations and te… ☆44 · Updated last year
- ☆39 · Updated last year
- This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pretraining adapts the model's context limit ☆63 · Updated 2 years ago
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets ☆47 · Updated 2 years ago
- Index of URLs to PDF files all over the internet, plus scripts ☆24 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX ☆81 · Updated 3 years ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆59 · Updated last year
- ☆69 · Updated last year
- Using short models to classify long texts ☆21 · Updated 2 years ago
- ☆52 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware", given… ☆14 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆115 · Updated last year
- ☆13 · Updated 9 months ago
- This repository contains code for scrubbing benchmark data from your training data to help combat data snooping. ☆27 · Updated 2 years ago