Trustworthy-ML-Lab / CB-LLMs
[ICLR 25] A novel framework for building intrinsically interpretable LLMs with human-understandable concepts to ensure safety, reliability, transparency, and trustworthiness.
☆22 · Updated 3 weeks ago
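For context on the idea in the description above: a concept bottleneck model forces predictions to pass through a layer of named, human-understandable concepts. Below is a minimal PyTorch sketch of the general pattern; the concept names, dimensions, and class structure are illustrative assumptions, not code from the CB-LLMs repository, which adds concept supervision and LLM-specific training on top of this idea.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    """Maps backbone hidden states to named concept scores, then
    predicts from those scores alone (illustrative sketch only)."""

    def __init__(self, hidden_dim: int, concept_names: list[str], num_classes: int):
        super().__init__()
        self.concept_names = concept_names
        # Concept layer: each output unit is tied to one named concept.
        self.concept_proj = nn.Linear(hidden_dim, len(concept_names))
        # The classifier sees only concept activations, so its weights
        # are directly readable as concept -> class contributions.
        self.classifier = nn.Linear(len(concept_names), num_classes)

    def forward(self, hidden: torch.Tensor):
        concept_scores = self.concept_proj(hidden)  # (batch, num_concepts)
        logits = self.classifier(concept_scores)    # (batch, num_classes)
        return logits, concept_scores

# Hypothetical usage: inspect which concepts fire for a pooled embedding.
head = ConceptBottleneckHead(768, ["positive tone", "mentions price", "sarcasm"], 2)
logits, scores = head(torch.randn(1, 768))
for name, score in zip(head.concept_names, scores[0].tolist()):
    print(f"{name}: {score:+.2f}")
```

Because every prediction is a linear function of the concept scores, inspecting or editing the bottleneck gives a handle on the model's behavior; this is the interpretability property the frameworks in this list build on.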
Alternatives and similar repositories for CB-LLMs
Users interested in CB-LLMs are comparing it to the repositories listed below.
- ☆99 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆67 · Updated 6 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆88 · Updated 10 months ago
- ☆52 · Updated last month
- LCA-on-the-line (ICML 2024 Oral) ☆12 · Updated 6 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆49 · Updated 10 months ago
- ☆68 · Updated 10 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆71 · Updated 9 months ago
- [ICLR 23] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled concept data ☆111 · Updated last year
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) ☆82 · Updated 3 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆81 · Updated last year
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates… ☆123 · Updated last month
- Less is More: High-value Data Selection for Visual Instruction Tuning ☆15 · Updated 7 months ago
- A curated list of Model Merging methods. ☆92 · Updated 11 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆222 · Updated 9 months ago
- ☆50 · Updated 9 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆80 · Updated 6 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆99 · Updated 8 months ago
- ☆50 · Updated last month
- ☆38 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated last month
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆60 · Updated 11 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strategy ☆82 · Updated 6 months ago
- Awesome Low-Rank Adaptation ☆43 · Updated last month
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆21 · Updated 11 months ago
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification ☆95 · Updated last year
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attribution and Intervention" ☆25 · Updated last month
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆33 · Updated 10 months ago
- A curated list of resources for activation engineering ☆101 · Updated 3 months ago