declare-lab / trust-align
Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse"
☆69 · Updated 10 months ago
Alternatives and similar repositories for trust-align
Users interested in trust-align are comparing it to the repositories listed below.
- ☆37 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆134 · Updated 11 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆162 · Updated 2 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆119 · Updated 7 months ago
- Code release for "SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers" [NeurIPS D&B, 2024] ☆71 · Updated last year
- ☆24 · Updated 9 months ago
- ☆50 · Updated 11 months ago
- ☆58 · Updated 2 months ago
- ☆166 · Updated 3 months ago
- DocBench: A Benchmark for Evaluating LLM-based Document Reading Systems ☆62 · Updated last year
- Data and code for the EMNLP 2025 Findings paper "MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree Search" ☆84 · Updated 2 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆124 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆113 · Updated 5 months ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆75 · Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆126 · Updated last year
- ☆52 · Updated 7 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆47 · Updated 3 months ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆142 · Updated 3 months ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆142 · Updated last year
- ☆108 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆162 · Updated 6 months ago
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" ☆213 · Updated 6 months ago
- ☆140 · Updated 10 months ago
- Exploration of automated dataset selection approaches at large scales ☆53 · Updated 10 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆82 · Updated last year
- ☆107 · Updated last month
- [EMNLP 2024 Findings] ProSA: Assessing and Understanding the Prompt Sensitivity of LLMs ☆29 · Updated 7 months ago
- ☆75 · Updated last year
- Unofficial implementation of "Chain-of-Thought Reasoning Without Prompting" ☆34 · Updated last year