declare-lab / trust-align
Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse"
☆68 · Updated 8 months ago
Alternatives and similar repositories for trust-align
Users interested in trust-align are comparing it to the repositories listed below.
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers · ☆159 · Updated 8 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples · ☆110 · Updated 3 months ago
- ☆37 · Updated 10 months ago
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" · ☆109 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales · ☆131 · Updated 9 months ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering · ☆192 · Updated 9 months ago
- Data and Code for EMNLP 2025 Findings Paper "MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree Search" · ☆79 · Updated last week
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" · ☆77 · Updated last year
- Official implementation of paper "Process Reward Model with Q-value Rankings" · ☆64 · Updated 9 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" · ☆141 · Updated last month
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) · ☆128 · Updated 4 months ago
- Code release for "SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers" [NeurIPS D&B, 2024] · ☆69 · Updated 10 months ago
- ☆52 · Updated 7 months ago
- ☆103 · Updated last year
- Critique-out-Loud Reward Models · ☆70 · Updated last year
- [ACL 2025] Knowledge Unlearning for Large Language Models · ☆46 · Updated last month
- ☆74 · Updated last year
- ☆51 · Updated last week
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… · ☆125 · Updated last year
- Exploration of automated dataset selection approaches at large scales. · ☆48 · Updated 8 months ago
- ☆29 · Updated last year
- ☆51 · Updated 9 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. · ☆132 · Updated 7 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ☆140 · Updated 4 months ago
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" · ☆206 · Updated 4 months ago
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" · ☆54 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) · ☆62 · Updated last year
- ☆154 · Updated last month
- ☆25 · Updated 7 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering · ☆66 · Updated 11 months ago