behavioral-data / BLADE
Code for "Benchmarking Language Model Agents for Data-Driven Science"
Related projects
Alternatives and complementary repositories for BLADE
- Adding new tasks to T0 without catastrophic forgetting
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
- Official repo of the AAAI 2024 paper "Mitigating the Impact of False Negatives in Dense Retrieval with Contrastive Confidence Regularization"
- Few-shot Learning with Auxiliary Data
- [EMNLP 2023] Code for "FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge"
- Code/data for MARG (multi-agent review generation)
- Tasks for describing differences between text distributions
- [EMNLP 2023] Knowledge Rumination for Pre-trained Language Models
- Code for the paper "Open-Domain Hierarchical Event Schema Induction by Incremental Prompting and Verification"
- [EMNLP 2022 Findings] Code for the paper "ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback"
- Evaluate the Quality of Critique
- Tree prompting: easy-to-use scikit-learn interface for improved prompting
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search
- Benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses"
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval
- [EMNLP 2024] MAIR: A Massive Benchmark for Evaluating Instructed Retrieval; evaluate your retrieval models on 126 diverse tasks
- [NAACL 2024 Findings] Evaluation suite for the systematic evaluation of instruction selection methods
- Codebase for the paper "Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval"
- [ACL 2023] Few-shot Reranking for Multi-hop QA via Language Model Prompting
- Complexity Based Prompting for Multi-Step Reasoning
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective"
- Official dataset repository for "SciReviewGen: A Large-scale Dataset for Automatic Literature Review Generation"
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data
- [WSDM 2023] Source code for the paper "Effective Seed-Guided Topic Discovery by Integrating Multiple Types of Contexts"