IAAR-Shanghai / ICSFSurvey
Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.
☆161Updated 3 months ago
Alternatives and similar repositories for ICSFSurvey:
Users interested in ICSFSurvey are comparing it to the repositories listed below:
- This includes the original implementation of CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control.☆60Updated 5 months ago
- The official repository of our survey paper: "Towards a Unified View of Preference Learning for Large Language Models: A Survey"☆162Updated 4 months ago
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.☆160Updated 4 months ago
- [ICLR 2025] xFinder: Large Language Models as Automated Evaluators for Reliable Evaluation☆156Updated 3 weeks ago
- Grimoire is All You Need for Enhancing Large Language Models☆112Updated last year
- We leverage 14 datasets as OOD test data and conduct evaluations on 8 NLU tasks over 21 popularly used models. Our findings confirm that …☆94Updated last year
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response☆40Updated 3 months ago
- Controllable Text Generation for Large Language Models: A Survey☆163Updated 6 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models☆171Updated 4 months ago
- [KDD 2024] This is the project for training explicit graph-reasoning large language models.☆90Updated 2 months ago
- [ACL 24 main] Large Language Models Can Learn Temporal Reasoning☆49Updated 3 months ago
- Benchmarking LLMs via Uncertainty Quantification☆214Updated last year
- An awesome repository & A comprehensive survey on interpretability of LLM attention heads.☆325Updated 3 weeks ago
- [EMNLP 2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models☆63Updated 4 months ago
- ☆116Updated last week
- A Comprehensive Benchmark for Code Information Retrieval.☆72Updated last month
- This is the official code repository of MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tas…☆55Updated this week
- The Official Repo of ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code (https://a…☆291Updated 4 months ago
- MPLSandbox is an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler a…☆174Updated 4 months ago
- LLM Benchmark for Code☆31Updated 7 months ago
- [EMNLP'2024] "XRec: Large Language Models for Explainable Recommendation"☆147Updated 5 months ago
- AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning (NeurIPS 2024)☆187Updated 2 weeks ago
- This tool (enhance_long) aims to enhance the LlaMa2 long-context extrapolation capability in the lowest-cost approach, preferably without …☆45Updated last year
- ☆100Updated last year
- The official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods".☆306Updated 3 months ago
- [NeurIPS2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging☆101Updated this week