OSU-NLP-Group / In-Context-Reranking
Code for "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers"
☆11 · Updated last month
Related projects
Alternatives and complementary repositories for In-Context-Reranking
- ☆42 · Updated 4 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆61 · Updated 4 months ago
- Official implementation of "Large Language Models for Automated Open-domain Scientific Hypotheses Discovery", accepted by ACL 2024. It a… ☆35 · Updated 2 weeks ago
- ☆15 · Updated this week
- Discovering Data-driven Hypotheses in the Wild ☆39 · Updated 2 weeks ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆41 · Updated 8 months ago
- Repository for the paper "Tools Are Instrumental for Language Agents in Complex Environments" ☆32 · Updated last month
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆39 · Updated 4 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆44 · Updated 10 months ago
- ☆18 · Updated 5 months ago
- Code implementation of "MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models"… ☆30 · Updated 9 months ago
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆48 · Updated 8 months ago
- ReBase: Training Task Experts through Retrieval-Based Distillation ☆27 · Updated 3 months ago
- Code and data for MARG (multi-agent review generation) ☆30 · Updated 5 months ago
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data ☆30 · Updated 2 months ago
- ☆34 · Updated 3 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆128 · Updated this week
- Benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses" ☆26 · Updated 2 months ago
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding ☆26 · Updated last week
- ☆30 · Updated last month
- Evaluate the Quality of Critique ☆35 · Updated 5 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 9 months ago
- Evaluation on Logical Reasoning and Abstract Reasoning Challenges ☆20 · Updated 9 months ago
- ☆56 · Updated 8 months ago
- ☆18 · Updated 3 weeks ago
- PyTorch implementation of MRL ☆18 · Updated 8 months ago
- ☆20 · Updated this week
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆57 · Updated 2 months ago
- Code and data for "MIRAI: Evaluating LLM Agents for Event Forecasting" ☆54 · Updated 4 months ago
- SCREWS: A Modular Framework for Reasoning with Revisions ☆26 · Updated last year