gersteinlab / Struc-Bench
[NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naacl-short.2/
☆55 · Updated 5 months ago
Alternatives and similar repositories for Struc-Bench
Users interested in Struc-Bench are comparing it to the repositories listed below.
- ☆75 · Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP'2024) ☆37 · Updated last year
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆86 · Updated last year
- ☆20 · Updated 9 months ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆55 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated last year
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆95 · Updated 2 years ago
- ☆39 · Updated last year
- Code and Data for "Language Modeling with Editable External Knowledge" ☆36 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆101 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- Code/data for MARG (multi-agent review generation) ☆59 · Updated 3 months ago
- ☆129 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆66 · Updated 2 years ago
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs ☆41 · Updated last year
- ☆70 · Updated 2 years ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- [ICLR'25] "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers"