NOVAglow646 / OOD-Generalization-Paper-Reading-Notes
Reading notes on papers related to OOD generalization
☆31 · Updated 10 months ago
Alternatives and similar repositories for OOD-Generalization-Paper-Reading-Notes
Users that are interested in OOD-Generalization-Paper-Reading-Notes are comparing it to the libraries listed below
- ☆51 · Updated 11 months ago
- 🔥 【Meta Awesome List】: AI/ML Research Hub - Solving the "Chasing Hot Topics" Problem for AI Researchers. 🤖 Agent-driven intelligence au… ☆55 · Updated last month
- Large language model review prompts ☆242 · Updated this week
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆85 · Updated 11 months ago
- ☆144 · Updated 10 months ago
- ☆17 · Updated last year
- The first toolkit for MLRM safety evaluation, providing unified interface for mainstream models, datasets, and jailbreaking methods! ☆13 · Updated 6 months ago
- Code for CVPR 2024 paper: Positive-Unlabeled Learning by Latent Group-Aware Meta Disambiguation ☆19 · Updated last year
- Paper list on LLMs and Multimodal LLMs ☆49 · Updated 3 weeks ago
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆38 · Updated 7 months ago
- A Task of Fictitious Unlearning for VLMs ☆23 · Updated 6 months ago
- [ACL'25 Main] SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! | Help your LLM make better use of context documents: a simple attention-based approach ☆23 · Updated 8 months ago
- ☆29 · Updated last year
- "In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024. ☆28 · Updated 2 years ago
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning. TPAMI, 2024. ☆329 · Updated last week
- Towards Modality Generalization: A Benchmark and Prospective Analysis ☆26 · Updated 5 months ago
- ☆33 · Updated last year
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆75 · Updated 2 months ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆74 · Updated 8 months ago
- ☆13 · Updated last year
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆20 · Updated 8 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆148 · Updated this week
- [ICLR 2025] "Noisy Test-Time Adaptation in Vision-Language Models" ☆16 · Updated 8 months ago
- IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models ☆59 · Updated last year
- Official code for ICML 2024 paper, "Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models" ☆19 · Updated last year
- ICML 2025 Oral: ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence ☆39 · Updated 2 months ago
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… ☆154 · Updated 2 weeks ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆65 · Updated last year
- Accepted by ECCV 2024 ☆165 · Updated last year
- ☆29 · Updated 2 years ago