JunyiYe / CreativeMathLinks
[AAAI 2025] Assessing the Creativity of LLMs in Proposing Novel Solutions to Mathematical Problems
☆12 · Updated 5 months ago
Alternatives and similar repositories for CreativeMath
Users interested in CreativeMath are comparing it to the repositories listed below.
- ☆84 · Updated 9 months ago
- [ACL 2023] Learning Multi-step Reasoning by Solving Arithmetic Tasks. https://arxiv.org/abs/2306.01707 ☆24 · Updated 2 years ago
- Repo for paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge ☆14 · Updated last year
- ☆16 · Updated last year
- EMNLP'2023: Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- [ACL 2024] Unveiling Linguistic Regions in Large Language Models ☆32 · Updated last year
- ☆42 · Updated 6 months ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆41 · Updated 10 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆116 · Updated last year
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and… ☆51 · Updated 4 months ago
- ☆75 · Updated last year
- Official Implementation of "Probing Language Models for Pre-training Data Detection" ☆20 · Updated 10 months ago
- ☆38 · Updated last year
- ☆56 · Updated last year
- NAACL 2024: SeaEval for Multilingual Foundation Models: From Cross-Lingual Alignment to Cultural Reasoning ☆26 · Updated 7 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆165 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- ☆20 · Updated last year
- Awesome LLM for NLG Evaluation Papers ☆25 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆171 · Updated 4 months ago
- OMGEval😮: An Open Multilingual Generative Evaluation Benchmark for Foundation Models ☆35 · Updated last year
- [NeurIPS 2024] Can Language Models Learn to Skip Steps? ☆20 · Updated 8 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆138 · Updated last year
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆115 · Updated last month
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆63 · Updated last year
- Collection of papers for scalable automated alignment ☆93 · Updated 11 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆114 · Updated 3 months ago
- An easy-to-use kNN-MT toolkit ☆104 · Updated 2 years ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆77 · Updated last year