declare-lab / trust-align
Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse"
☆66 · Updated 7 months ago
Alternatives and similar repositories for trust-align
Users interested in trust-align are comparing it to the repositories listed below.
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆157 · Updated 7 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆121 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆124 · Updated 8 months ago
- ☆37 · Updated 8 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆42 · Updated 2 weeks ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆74 · Updated 3 months ago
- Code release for "SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers" [NeurIPS D&B, 2024] ☆64 · Updated 8 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆131 · Updated last year
- Data and code for the EMNLP 2025 Findings paper "MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree Search" ☆70 · Updated 3 months ago
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" ☆202 · Updated 3 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 10 months ago
- ☆48 · Updated 7 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆106 · Updated 3 months ago
- ☆99 · Updated 11 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆121 · Updated last year
- SSRL: Self-Search Reinforcement Learning ☆145 · Updated last month
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆106 · Updated 2 months ago
- Code for "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering" ☆190 · Updated 7 months ago
- ☆47 · Updated 3 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆134 · Updated 3 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆85 · Updated 4 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆77 · Updated last year
- ☆50 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆128 · Updated 6 months ago
- ☆74 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆172 · Updated 2 months ago
- Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆83 · Updated 11 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆120 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆63 · Updated 8 months ago