smartyfh / LLM-Uncertainty-Bench
Benchmarking LLMs via Uncertainty Quantification
☆210 · Updated last year
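The benchmark's theme is measuring not just whether an LLM answers correctly, but how trustworthy its uncertainty is, using conformal prediction over multiple-choice tasks. As a rough illustration of that style of metric, here is a minimal sketch of split conformal prediction on answer-option probabilities; the function and variable names are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction over multiple-choice option probabilities.

    cal_probs:  (n_cal, n_opts) softmax probabilities on a held-out calibration set
    cal_labels: (n_cal,) index of the correct option for each calibration item
    test_probs: (n_test, n_opts) probabilities for the test items
    Returns one prediction set (array of option indices) per test item; the
    average set size serves as the uncertainty measure (larger = less certain).
    """
    # Nonconformity score: 1 minus the probability assigned to the true option.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    # Conformal quantile with the finite-sample correction, capped at 1.
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(scores, level, method="higher")
    # Each test item's set keeps every option whose nonconformity is <= q_hat.
    return [np.flatnonzero(1.0 - p <= q_hat) for p in test_probs]
```

With alpha = 0.1, the sets cover the true option roughly 90% of the time on exchangeable data, so a model that reaches that coverage with smaller average sets is the more certain, better-calibrated one.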
Alternatives and similar repositories for LLM-Uncertainty-Bench:
Users interested in LLM-Uncertainty-Bench are comparing it to the libraries listed below
- We leverage 14 datasets as OOD test data and conduct evaluations on 8 NLU tasks over 21 widely used models. Our findings confirm that … ☆94 · Updated last year
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin… ☆161 · Updated 3 months ago
- This includes the original implementation of CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control. ☆60 · Updated 5 months ago
- AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning (NeurIPS 2024) ☆186 · Updated last week
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc. ☆160 · Updated 4 months ago
- A recipe for online RLHF and online iterative DPO. ☆494 · Updated 2 months ago
- Grimoire is All You Need for Enhancing Large Language Models ☆111 · Updated last year
- Controllable Text Generation for Large Language Models: A Survey ☆162 · Updated 6 months ago
- [EMNLP 2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models ☆63 · Updated 4 months ago
- [ACL 24 main] Large Language Models Can Learn Temporal Reasoning ☆49 · Updated 3 months ago
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods". ☆301 · Updated 2 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆46 · Updated 6 months ago
- Code and checkpoints for "Generate rather than Retrieve: Large Language Models are Strong Context Generators" (ICLR 2023). ☆281 · Updated 2 years ago
- ☆158 · Updated 8 months ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" (see the sketch after this list). ☆102 · Updated last year
- ☆25 · Updated 7 months ago
- MPLSandbox is an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler a… ☆174 · Updated 4 months ago
- 🚀 [NeurIPS24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… ☆70 · Updated last month
- The official implementation of Self-Play Preference Optimization (SPPO) ☆498 · Updated last month
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM… ☆130 · Updated last month
- ☆63 · Updated this week
- The official repo of ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code (https://a… ☆290 · Updated 3 months ago
- [arXiv 2024] The official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping". ☆96 · Updated last week
- ☆100 · Updated last year
- A Survey of Attributions for Large Language Models ☆196 · Updated 6 months ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆153 · Updated 5 months ago
- A Comprehensive Benchmark for Code Information Retrieval. ☆70 · Updated 3 weeks ago
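A recurring theme in the list above is eliciting uncertainty directly from the model, as in the ICLR 2024 confidence-elicitation repo. The simplest variant is verbalized confidence: ask the model to state a 0-100 confidence next to its answer, parse it, and later compare stated confidence against accuracy. A minimal, model-agnostic sketch, where the prompt wording, the `generate` callable, and the parsing regexes are assumptions for illustration rather than that repo's code:

```python
import re

PROMPT = (
    "Answer the question, then rate your confidence from 0 to 100.\n"
    "Question: {question}\n"
    "Reply exactly in the form:\nAnswer: <answer>\nConfidence: <number>"
)

def elicit_confidence(generate, question):
    """Ask an LLM for an answer plus a verbalized confidence score.

    `generate` is any text-in/text-out callable wrapping a model API.
    Returns (answer, confidence in [0, 1]); either is None if unparsable.
    """
    reply = generate(PROMPT.format(question=question))
    answer = re.search(r"Answer:\s*(.+)", reply)
    conf = re.search(r"Confidence:\s*(\d{1,3})", reply)
    return (
        answer.group(1).strip() if answer else None,
        min(int(conf.group(1)), 100) / 100.0 if conf else None,
    )

# Example with a stubbed model:
# elicit_confidence(lambda p: "Answer: Paris\nConfidence: 85", "Capital of France?")
# -> ("Paris", 0.85)
```

Calibration is then typically scored by binning predictions by stated confidence and comparing each bin's average confidence to its accuracy (expected calibration error).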