bethgelab / CiteME
CiteME is a benchmark designed to test the ability of language models to find papers cited in scientific texts.
☆40 · Updated 3 months ago
Alternatives and similar repositories for CiteME:
Users interested in CiteME are comparing it to the libraries listed below.
- [ACL 2024] <Large Language Models for Automated Open-domain Scientific Hypotheses Discovery>. It has also received the best poster award … ☆38 · Updated 3 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆100 · Updated 2 months ago
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- Codebase accompanying the "Summary of a Haystack" paper ☆74 · Updated 5 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆28 · Updated 2 weeks ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 5 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆48 · Updated 7 months ago
- Evaluation of neuro-symbolic engines ☆34 · Updated 6 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆69 · Updated 2 months ago
- Official implementation of the ACL 2024 paper: Scientific Inspiration Machines Optimized for Novelty ☆74 · Updated 10 months ago
- LangCode - Improving alignment and reasoning of large language models (LLMs) with natural language embedded programs (NLEP) ☆42 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆81 · Updated 11 months ago
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- [ICLR'25] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery ☆53 · Updated 3 weeks ago
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆71 · Updated 3 months ago
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆48 · Updated 2 months ago
- Discovering Data-driven Hypotheses in the Wild ☆55 · Updated 3 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆157 · Updated this week
- Functional Benchmarks and the Reasoning Gap ☆82 · Updated 4 months ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆57 · Updated 11 months ago
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆52 · Updated 4 months ago