Shangyint / langProBe
☆25 · Updated 6 months ago
Alternatives and similar repositories for langProBe
Users interested in langProBe are comparing it to the libraries listed below.
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆222 · Updated 6 months ago
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆225 · Updated last year
- Official repo for the ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback, by Xingyao Wang*, Ziha… ☆134 · Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆147 · Updated last month
- Repository for MuSiQue: Multi-hop Questions via Single-hop Question Composition, TACL 2022 ☆184 · Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆117 · Updated 5 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- Reproducible, flexible LLM evaluations ☆316 · Updated last month
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆184 · Updated 3 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆126 · Updated last year
- The HELMET Benchmark ☆197 · Updated last month
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆222 · Updated 3 weeks ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agents, ACL'24 Best Resource Paper ☆351 · Updated last month
- A simple unified framework for evaluating LLMs ☆258 · Updated 8 months ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆224 · Updated 5 months ago
- Data and Code for Program of Thoughts [TMLR 2023] ☆302 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark, https://arxiv.org/abs/2306.14898 ☆233 · Updated last year
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task ☆156 · Updated 4 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆138 · Updated last year
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation ☆110 · Updated 2 years ago
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago