HuichiZhou / TrustRAG
Code for "TrustRAG: Enhancing Robustness and Trustworthiness in RAG"
☆41 · Updated 3 months ago
Alternatives and similar repositories for TrustRAG
Users interested in TrustRAG are comparing it to the libraries listed below
- ☆29 · Updated 2 months ago
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source… ☆44 · Updated 5 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆71 · Updated 4 months ago
- The implementation for the ICLR 2025 Oral "From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions". ☆39 · Updated last month
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆36 · Updated last week
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆18 · Updated 7 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 9 months ago
- ☆24 · Updated 2 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆93 · Updated 4 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆64 · Updated 6 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆75 · Updated 4 months ago
- ☆45 · Updated 4 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆47 · Updated 4 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - exploring safety risks, attacks, and defenses for Large Reasoning … ☆52 · Updated 3 weeks ago
- Accepted LLM Papers in NeurIPS 2024 ☆37 · Updated 8 months ago
- ☆26 · Updated last year
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆81 · Updated last year
- ☆25 · Updated last month
- More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆24 · Updated 3 weeks ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆65 · Updated this week
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆65 · Updated this week
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆37 · Updated last month
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆22 · Updated 4 months ago
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆31 · Updated last month
- [FCS'24] LVLM Safety paper ☆18 · Updated 5 months ago
- "In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024. ☆26 · Updated last year
- Code for "Efficient Test-Time Scaling via Self-Calibration" ☆14 · Updated 3 months ago
- Official repo for the EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" ☆26 · Updated 8 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆28 · Updated 11 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆42 · Updated 8 months ago