ZonglinY / MOOSE
[ACL 2024] "Large Language Models for Automated Open-domain Scientific Hypotheses Discovery". It also received the Best Poster Award at the ICML 2024 AI4Science workshop.
☆42 · Updated last year
Alternatives and similar repositories for MOOSE
Users interested in MOOSE are comparing it with the libraries listed below.
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated 11 months ago
- Code/data for MARG (multi-agent review generation) ☆59 · Updated 2 months ago
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆55 · Updated 4 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- Official implementation of the ACL 2024 paper: Scientific Inspiration Machines Optimized for Novelty ☆89 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data ☆44 · Updated 9 months ago
- LangCode - Improving alignment and reasoning of large language models (LLMs) with natural language embedded programs (NLEP) ☆48 · Updated 2 years ago
- ☆49 · Updated 8 months ago
- ☆75 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆31 · Updated 10 months ago
- ☆129 · Updated last year
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs ☆40 · Updated last year
- ☆28 · Updated 3 weeks ago
- ☆49 · Updated 2 years ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆84 · Updated last year
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆101 · Updated last year
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning" ☆165 · Updated last year
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆99 · Updated 2 years ago
- Code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆38 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 4 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 10 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆108 · Updated 9 months ago
- ☆129 · Updated last year
- ☆29 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆31 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year