ml-research / LlavaGuard
☆24 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for LlavaGuard
- Code for T-MARS data filtering ☆35 · Updated last year
- ☆49 · Updated last year
- ☆12 · Updated 8 months ago
- ☆38 · Updated last year
- Official Repository for Dataset Inference for LLMs ☆23 · Updated 3 months ago
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆57 · Updated 9 months ago
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆26 · Updated 11 months ago
- [arXiv 2024] Adversarial attacks on multimodal agents ☆38 · Updated 4 months ago
- [SafeGenAi @ NeurIPS 2024] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates ☆60 · Updated 3 weeks ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆67 · Updated 11 months ago
- ☆64 · Updated 7 months ago
- Code for the paper "Data Feedback Loops: Model-driven Amplification of Dataset Biases" ☆15 · Updated 2 years ago
- We introduce EMMET and unify model editing with the popular algorithms ROME and MEMIT. ☆12 · Updated 2 months ago
- ☆31 · Updated last year
- LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked ☆26 · Updated 5 months ago
- [ATTRIB @ NeurIPS 2024] When Attention Sink Emerges in Language Models: An Empirical View ☆27 · Updated 3 weeks ago
- ☆27 · Updated 9 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? ☆23 · Updated 5 months ago
- Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆26 · Updated last week
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆57 · Updated 5 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" ☆29 · Updated 2 weeks ago
- Implementation of the paper 'Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference' [NeurIPS'24… ☆13 · Updated 4 months ago
- ☆28 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆36 · Updated 2 months ago
- What do we learn from inverting CLIP models? ☆45 · Updated 8 months ago
- Holistic evaluation of multimodal foundation models ☆41 · Updated 3 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆72 · Updated 6 months ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆68 · Updated 7 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆68 · Updated 3 weeks ago
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆23 · Updated 9 months ago