ZonglinY / MOOSE
[ACL 2024] "Large Language Models for Automated Open-domain Scientific Hypotheses Discovery". It also received the Best Poster Award at the ICML 2024 AI4Science workshop.
☆42 · Updated 9 months ago
Alternatives and similar repositories for MOOSE
Users interested in MOOSE are comparing it to the repositories listed below.
- Official implementation of the ACL 2024 paper "Scientific Inspiration Machines Optimized for Novelty" ☆83 · Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated 7 months ago
- Code/data for MARG (multi-agent review generation) ☆46 · Updated 8 months ago
- ☆45 · Updated 4 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 6 months ago
- ☆73 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 8 months ago
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data ☆42 · Updated 5 months ago
- [ICLR 2025] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery ☆94 · Updated last month
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 6 months ago
- ☆125 · Updated 10 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆81 · Updated 11 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆92 · Updated 8 months ago
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆55 · Updated 10 months ago
- [ICLR 2025] "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers" ☆28 · Updated 4 months ago
- Code accompanying "How I learned to start worrying about prompt formatting" ☆107 · Updated last month
- [ACL 2024] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆124 · Updated last year
- Code release for "SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers" [NeurIPS D&B 2024] ☆61 · Updated 6 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆103 · Updated last week
- Replicating O1 inference-time scaling laws ☆89 · Updated 8 months ago
- ☆17 · Updated last week
- ☆27 · Updated last year
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs ☆38 · Updated last year
- ☆22 · Updated last month
- [ICLR 2025] ChemAgent: Self-updating Library in Large Language Models Improves Chemical Reasoning (https://arxiv.org/abs/2501.06590) ☆64 · Updated this week
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year