SalesforceAIResearch / FaithEval
☆61 · Updated 3 months ago
Alternatives and similar repositories for FaithEval
Users interested in FaithEval are comparing it to the libraries listed below:
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆129 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆81 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆223 · Updated last year
- ☆107 · Updated last year
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆189 · Updated 4 months ago
- Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆72 · Updated 11 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆136 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆136 · Updated last year
- Critique-out-Loud Reward Models ☆73 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆134 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- [NAACL 2025] The official implementation of paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆28 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆163 · Updated 2 months ago
- Code implementation of synthetic continued pretraining ☆148 · Updated last year
- ☆38 · Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆146 · Updated last month
- Implementation of the paper: "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆75 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆65 · Updated last year
- Codebase for reproducing the experiments of the semantic uncertainty paper (paragraph-length experiments). ☆78 · Updated last year
- ☆140 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆107 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR2025] ☆111 · Updated 11 months ago
- ☆169 · Updated 3 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆201 · Updated 2 months ago
- Awesome LLM Self-Consistency: a curated list of Self-consistency in Large Language Models ☆119 · Updated 6 months ago
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆169 · Updated last month
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆132 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆72 · Updated 3 weeks ago