google-research-datasets / swim-ir
SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages, generated using PaLM 2 and summarize-then-ask prompting.
☆49 · Updated last year
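The summarize-then-ask generation recipe mentioned above can be sketched roughly as follows. This is an illustrative assumption, not the exact pipeline or prompts from the SWIM-IR release: `generate` is a stub standing in for a real PaLM 2 (or any other LLM) API call, and both prompt templates are hypothetical.

```python
# Sketch of summarize-then-ask prompting for synthetic IR training data.
# Stage 1 summarizes a Wikipedia passage; stage 2 asks the model to write
# a query (in a target language) that the summary answers. The resulting
# (query, passage) pair becomes one retrieval training example.

SUMMARIZE_PROMPT = (
    "Summarize the following Wikipedia passage in one sentence.\n"
    "Passage: {passage}\n"
    "Summary:"
)
ASK_PROMPT = (
    "Write a search query in {language} that this summary answers.\n"
    "Summary: {summary}\n"
    "Query:"
)

def generate(prompt: str) -> str:
    """Stub LLM call: echoes the context line of the prompt.

    Replace this with a real PaLM 2 (or other LLM) API call.
    """
    return prompt.splitlines()[-2].split(": ", 1)[1]

def make_pair(passage: str, language: str = "English") -> dict:
    """Produce one synthetic (query, passage) training pair."""
    summary = generate(SUMMARIZE_PROMPT.format(passage=passage))
    query = generate(ASK_PROMPT.format(summary=summary, language=language))
    return {"query": query, "passage": passage, "lang": language}

pair = make_pair("The Amazon River is the largest river by discharge volume.")
```

Run over a multilingual pool of passages with many target languages, this two-stage loop is how a corpus like SWIM-IR's 28M pairs across 33 languages could be assembled; the two-stage design grounds each generated query in a faithful summary rather than the raw passage.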
Alternatives and similar repositories for swim-ir
Users interested in swim-ir are comparing it to the libraries listed below:
- Plug-and-play Search Interfaces with Pyserini and Hugging Face (☆32, updated last year)
- Starbucks: Improved Training for 2D Matryoshka Embeddings (☆21, updated last month)
- ☆54, updated 2 years ago
- ☆100, updated 2 years ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… (☆32, updated last year)
- Embedding Recycling for Language Models (☆39, updated 2 years ago)
- ☆14, updated 9 months ago
- [EACL 2023] CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification (☆41, updated 2 years ago)
- ☆29, updated last year
- 🌏 Modular retrievers for zero-shot multilingual IR. (☆28, updated last year)
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners (☆116, updated last month)
- The SPRINT toolkit lets you evaluate diverse neural sparse models on any IR dataset with a single click. (☆47, updated 2 years ago)
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. (☆93, updated 2 years ago)
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… (☆31, updated last year)
- ☆39, updated last year
- CLIR version of ColBERT (☆71, updated last month)
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval (☆29, updated 2 years ago)
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following (☆79, updated 10 months ago)
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… (☆34, updated last year)
- Can LLMs generate code-mixed sentences through zero-shot prompting? (☆11, updated 2 years ago)
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning (☆99, updated 2 years ago)
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning (☆64, updated last year)
- Pretraining Efficiently on S2ORC! (☆165, updated 9 months ago)
- Retrieval Augmented Generation Generalized Evaluation Dataset (☆54, updated 2 weeks ago)
- ☆86, updated 3 months ago
- ☆44, updated 8 months ago
- We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in … (☆54, updated 2 years ago)
- Official Repository for "Hypencoder: Hypernetworks for Information Retrieval" (☆27, updated 4 months ago)
- The official code repo for "Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations". (☆83, updated last year)
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model (☆44, updated last year)