gersteinlab / Struc-Bench
☆54 · Updated last year
Alternatives and similar repositories for Struc-Bench
Users interested in Struc-Bench are comparing it to the repositories listed below.
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year
- ☆20 · Updated 3 months ago
- ☆33 · Updated 8 months ago
- ☆72 · Updated last year
- ☆45 · Updated 3 months ago
- Code for the ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated 6 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆45 · Updated last year
- ☆68 · Updated 2 years ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆53 · Updated 10 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆85 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆47 · Updated 5 months ago
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data ☆41 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated last month
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆85 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 5 months ago
- ☆22 · Updated 6 months ago
- [NeurIPS 2023] PyTorch code for "Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind" ☆66 · Updated last year
- [ACL 2024] "Large Language Models for Automated Open-domain Scientific Hypotheses Discovery". It has also received the best poster award … ☆42 · Updated 8 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆48 · Updated 7 months ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆107 · Updated 9 months ago
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆65 · Updated 2 years ago
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆55 · Updated 9 months ago
- Evaluation on Logical Reasoning and Abstract Reasoning Challenges ☆28 · Updated 2 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆116 · Updated last year