IAAR-Shanghai / UHGEval
[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.
☆175 · Updated 4 months ago
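To give a sense of what an evaluation framework like this automates, here is a minimal, purely hypothetical sketch of the generic benchmarking loop: load benchmark items, generate a continuation for each prompt, and score each output for hallucination. The file layout, function names, and judging interface are assumptions for illustration only, not UHGEval's actual API.

```python
# Hypothetical sketch of a hallucination-benchmark loop (not UHGEval's API).
import json
from typing import Callable


def evaluate_hallucination(
    benchmark_path: str,                      # JSONL file: {"prompt": ..., "reference": ...}
    generate: Callable[[str], str],           # any text-generation callable
    judge: Callable[[str, str], bool],        # True if the output is faithful to the reference
) -> float:
    """Return the fraction of benchmark items judged hallucination-free."""
    faithful = total = 0
    with open(benchmark_path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)
            output = generate(item["prompt"])
            faithful += judge(output, item["reference"])
            total += 1
    return faithful / max(total, 1)
```

In a real suite, `generate` would wrap the model under test and `judge` would be a reference-based metric or an LLM judge; the accuracy returned here is only the simplest possible aggregate.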
Alternatives and similar repositories for UHGEval
Users interested in UHGEval are comparing it to the repositories listed below
- Controllable Text Generation for Large Language Models: A Survey ☆192 · Updated last year
- Notes for multi-hop reading comprehension and open-domain question answering ☆86 · Updated 3 years ago
- The original implementation of CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control. ☆62 · Updated last year
- A library for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories. ☆166 · Updated 3 months ago
- Explores concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin… ☆169 · Updated 10 months ago
- Grimoire is All You Need for Enhancing Large Language Models ☆117 · Updated last year
- [EMNLP 2023] FreeAL: Towards Human-Free Active Learning in the Era of Large Language Models ☆92 · Updated last year
- [NeurIPS 2025 Poster] Search and Refine During Think: Facilitating Knowledge Refinement for Improved Retrieval-Augmented Reasoning ☆100 · Updated last week
- ☆152 · Updated 7 months ago
- A framework to prune LLMs to any size and any config. ☆94 · Updated last year
- We leverage 14 datasets as OOD test data and conduct evaluations on 8 NLU tasks over 21 widely used models. Our findings confirm that … ☆93 · Updated 2 years ago
- A scalable, end-to-end training pipeline for general-purpose agents ☆360 · Updated 3 months ago
- [EMNLP 2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models ☆78 · Updated 3 months ago
- An efficient and accurate answer verification system for RL training. ☆41 · Updated 4 months ago
- Machine-generated text detection in the wild (ACL 2024) ☆216 · Updated 7 months ago
- A Unified Intermediate Representation for Graph Query Languages ☆66 · Updated 2 years ago
- ☆102 · Updated 2 years ago
- [ACL 2024] CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and … ☆100 · Updated last year
- [ACL 2024 main] Large Language Models Can Learn Temporal Reasoning ☆61 · Updated 10 months ago
- Benchmarking LLMs via Uncertainty Quantification ☆246 · Updated last year
- Chinese llama2: from pretraining to reinforcement learning ☆87 · Updated 2 years ago
- Code and checkpoints for "Generate rather than Retrieve: Large Language Models are Strong Context Generators" (ICLR 2023) ☆289 · Updated 2 years ago
- Codebase for iterative DPO using rule-based rewards ☆260 · Updated 6 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework ☆279 · Updated last month
- MPLSandbox is an out-of-the-box multi-programming-language sandbox designed to provide unified and comprehensive feedback from compiler a… ☆177 · Updated 6 months ago
- SQL-o1: A Self-Reward Heuristic Dynamic Search Method for Text-to-SQL ☆191 · Updated 5 months ago
- [EMNLP 2023] CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation ☆57 · Updated last year
- ☆76 · Updated last week
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods." ☆476 · Updated 2 months ago
- [KDD 2024] Project for training explicit graph-reasoning large language models. ☆98 · Updated 10 months ago