gangiswag / llm-reranker
☆18 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for llm-reranker
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆40 · Updated 4 months ago
- Leveraging passage embeddings for efficient listwise reranking with large language models. ☆33 · Updated last month
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages. ☆44 · Updated last year
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆57 · Updated 3 weeks ago
- [WWW 2024] The official repo for the paper "Scalable and Effective Generative Information Retrieval". ☆52 · Updated 6 months ago
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆38 · Updated 3 weeks ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆61 · Updated 4 months ago
- ☆15 · Updated 9 months ago
- [SIGIR 2024 (Demo)] CoSearchAgent: A Lightweight Collaborative Search Agent with Large Language Models ☆22 · Updated 9 months ago
- ☆15 · Updated 8 months ago
- [ACL 2023] Few-shot Reranking for Multi-hop QA via Language Model Prompting ☆27 · Updated last year
- An easy-to-use Python toolkit for flexibly adapting various neural ranking models to any target domain. ☆59 · Updated last year
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings ☆37 · Updated 8 months ago
- Code for the COLING 2022 paper "DPTDR: Deep Prompt Tuning for Dense Passage Retrieval" ☆24 · Updated last year
- Repo for Llatrieval ☆28 · Updated 3 months ago
- Comprehensive benchmark for RAG ☆39 · Updated 2 weeks ago
- Official repo of the AAAI 2024 paper "Mitigating the Impact of False Negatives in Dense Retrieval with Contrastive Confidence Regularization" ☆13 · Updated 10 months ago
- ☆69 · Updated last year
- ☆17 · Updated 8 months ago
- ☆43 · Updated 4 months ago
- AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆106 · Updated last month
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆74 · Updated 10 months ago
- ☆38 · Updated 7 months ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆28 · Updated 5 months ago
- [EMNLP 2024] MAIR: A Massive Benchmark for Evaluating Instructed Retrieval. Evaluate your retrieval models on 126 diverse tasks. ☆13 · Updated 2 weeks ago
- ☆45 · Updated 2 years ago
- ☆56 · Updated 9 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆37 · Updated last month
- Code for Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks ☆47 · Updated 7 months ago