OSU-NLP-Group / In-Context-Reranking
[ICLR'25] "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers"
☆36 · Updated 7 months ago
Alternatives and similar repositories for In-Context-Reranking
Users interested in In-Context-Reranking are comparing it to the libraries listed below.
- ☆74 · Updated last year
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆49 · Updated last year
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆178 · Updated 2 months ago
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Updated last year
- ☆36 · Updated last year
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆51 · Updated 4 months ago
- ☆48 · Updated last year
- Code and data for "Language Modeling with Editable External Knowledge" ☆36 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆51 · Updated 3 months ago
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆56 · Updated last year
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆101 · Updated 11 months ago
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 5 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆55 · Updated last year
- [EMNLP'24] LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆31 · Updated last year
- Benchmarking Benchmark Leakage in Large Language Models ☆56 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" ☆206 · Updated 4 months ago
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆55 · Updated 3 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆125 · Updated last year
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated last year
- ☆49 · Updated 7 months ago
- ☆68 · Updated 2 years ago
- ☆27 · Updated last week
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆64 · Updated last year
- [ACL 2025 Main] Official repository for "Evaluating Language Models as Synthetic Data Generators" ☆40 · Updated 11 months ago
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- [ACL'24 Oral] Analysing the Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆122 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆82 · Updated last year