bethgelab / CiteME
CiteME is a benchmark designed to test the ability of language models to find the papers cited in scientific texts.
☆45 · Updated 7 months ago
Alternatives and similar repositories for CiteME
Users interested in CiteME are comparing it to the repositories listed below.
- ☆58 · Updated 3 weeks ago
- ☆50 · Updated this week
- Functional Benchmarks and the Reasoning Gap ☆86 · Updated 8 months ago
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding. ☆40 · Updated 2 months ago
- [ACL 2024] Large Language Models for Automated Open-domain Scientific Hypotheses Discovery. It has also received the best poster award … ☆41 · Updated 7 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 4 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆89 · Updated 6 months ago
- ☆83 · Updated 4 months ago
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆78 · Updated 8 months ago
- Large Language Model (LLM) powered evaluator for Retrieval Augmented Generation (RAG) pipelines. ☆27 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- ☆23 · Updated last year
- Official implementation of the ACL 2024 paper: Scientific Inspiration Machines Optimized for Novelty ☆79 · Updated last year
- Evaluating LLMs with fewer examples ☆155 · Updated last year
- LangCode - Improving alignment and reasoning of large language models (LLMs) with natural language embedded programs (NLEP). ☆42 · Updated last year
- Replicating O1 inference-time scaling laws ☆87 · Updated 6 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆104 · Updated 5 months ago
- ☆49 · Updated 7 months ago
- Accompanying material for the sleep-time compute paper ☆90 · Updated last month
- Source code for the collaborative reasoner research project at Meta FAIR. ☆87 · Updated last month
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 10 months ago
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆97 · Updated 7 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆63 · Updated last year
- Advanced Reasoning Benchmark Dataset for LLMs ☆46 · Updated last year
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆61 · Updated last week
- ☆131 · Updated 2 months ago
- Code/data for MARG (multi-agent review generation) ☆43 · Updated 6 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆85 · Updated last year
- ☆29 · Updated 3 weeks ago