google-research-datasets / QAmeleon
QAmeleon introduces synthetic multilingual QA data generated with PaLM, a 540B large language model. The dataset was produced by prompt-tuning PaLM with only five examples per language. The synthetic data is then used to finetune downstream QA models, leading to improved accuracy compared to English-only and translation-based baselines.
☆34 · Updated last year
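This listing does not include training code, so the snippet below is only a minimal sketch of the downstream step described above: fine-tuning a multilingual seq2seq QA model on QAmeleon-style synthetic data with Hugging Face Transformers. The checkpoint (`google/mt5-small`), the file name `synthetic_qa.jsonl`, the field names, and all hyperparameters are illustrative assumptions, not values from the QAmeleon release.

```python
# Minimal sketch: fine-tune a multilingual QA model on synthetic
# (question, context, answer) triples. File name, checkpoint, and
# hyperparameters are illustrative, not taken from QAmeleon.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "google/mt5-small"  # small multilingual baseline
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical JSONL file with {"question", "context", "answer"} records.
dataset = load_dataset("json", data_files="synthetic_qa.jsonl", split="train")

def preprocess(batch):
    # Concatenate question and context as the encoder input,
    # use the gold answer string as the decoder target.
    inputs = [f"question: {q} context: {c}"
              for q, c in zip(batch["question"], batch["context"])]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["answer"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="qa-finetuned",
        per_device_train_batch_size=8,
        learning_rate=1e-4,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

A generative formulation is used here only for brevity; the same synthetic triples could just as well be used to train an extractive QA model.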
Alternatives and similar repositories for QAmeleon:
Users interested in QAmeleon are comparing it to the libraries listed below.
- Embedding Recycling for Language Models ☆38 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆26 · Updated last year
- Code and files for the paper Are Emergent Abilities in Large Language Models just In-Context Learning ☆33 · Updated 3 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- ☆44 · Updated 5 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- QLoRA for Masked Language Modeling ☆22 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated last year
- Index of URLs to PDF files all over the internet, and scripts ☆23 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Plug-and-play search interfaces with Pyserini and Hugging Face ☆31 · Updated last year
- This repository contains code for removing benchmark data from your training data to help combat data snooping. ☆25 · Updated 2 years ago
- Code for the NeurIPS LLM Efficiency Challenge ☆57 · Updated last year
- ☆11 · Updated 4 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated last year
- Using short models to classify long texts ☆21 · Updated 2 years ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆46 · Updated 3 weeks ago
- ☆14 · Updated 7 months ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated 11 months ago
- ☆24 · Updated last year
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- ☆38 · Updated last year
- ☆33 · Updated 10 months ago
- Utilities for training very large models ☆58 · Updated 7 months ago
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago