gersteinlab / Struc-Bench
[NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naacl-short.2/
☆54 · Updated last month
Alternatives and similar repositories for Struc-Bench
Users interested in Struc-Bench are comparing it to the repositories listed below.
- ☆74 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year
- ☆35 · Updated 10 months ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP'2024) ☆37 · Updated 8 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated 7 months ago
- ☆20 · Updated 5 months ago
- [ICLR'25] "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers" ☆31 · Updated 5 months ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆55 · Updated 11 months ago
- Implementation of the paper: "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆86 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆46 · Updated 5 months ago
- [ACL 2024] <Large Language Models for Automated Open-domain Scientific Hypotheses Discovery>. It has also received the best poster award … ☆42 · Updated 10 months ago
- ☆68 · Updated 2 years ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated last year
- Code and Data for "Language Modeling with Editable External Knowledge" ☆34 · Updated last year
- ☆48 · Updated last year
- ☆127 · Updated 11 months ago
- ☆23 · Updated 2 weeks ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 7 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning https://arxiv.org/pdf/2410.01044 ☆35 · Updated 11 months ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆26 · Updated 9 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆87 · Updated last year
- ☆154 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆64 · Updated 2 years ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆46 · Updated last year
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆55 · Updated last month
- LangCode - Improving alignment and reasoning of large language models (LLMs) with natural language embedded program (NLEP). ☆43 · Updated last year