ylhz / ICML2024-paperlist
Summaries of ICML 2024 papers
☆12 · Updated last year
Alternatives and similar repositories for ICML2024-paperlist
Users interested in ICML2024-paperlist are comparing it to the libraries listed below.
- Focused on the safety and security of Embodied AI ☆71 · Updated last week
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆82 · Updated last year
- ☆51 · Updated 11 months ago
- Accepted by ECCV 2024 ☆169 · Updated last year
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆100 · Updated 11 months ago
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) ☆169 · Updated 4 months ago
- A toolbox for benchmarking Multimodal LLM Agents' trustworthiness across truthfulness, controllability, safety and privacy dimensions thro… ☆56 · Updated 4 months ago
- ☆144 · Updated 8 months ago
- Instruction Tuning in the Continual Learning paradigm ☆62 · Updated 8 months ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆73 · Updated last year
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆203 · Updated last month
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆284 · Updated 6 months ago
- (ACL 2025) 🔥🔥🔥 Code for "Empowering Multimodal Large Language Models with Evol-Instruct" ☆18 · Updated 5 months ago
- [ACL 2025] Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" ☆51 · Updated 3 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆330 · Updated last year
- Papers about hallucination in Multi-Modal Large Language Models (MLLMs) ☆98 · Updated 11 months ago
- Reinforcement learning code for the SPA-VL dataset ☆40 · Updated last year
- A paper list on LLMs and multimodal LLMs ☆50 · Updated last month
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆56 · Updated 10 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆66 · Updated last year
- ☆52 · Updated 10 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆375 · Updated last year
- ECSO (make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆33 · Updated last year
- ☆40 · Updated 4 months ago
- [NAACL 2025 Main] Official implementation of MLLMU-Bench ☆38 · Updated 7 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆306 · Updated 2 weeks ago
- [ICLR 2025] PyTorch implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" ☆26 · Updated 3 months ago
- ☆11 · Updated 2 years ago
- Accepted by IJCAI-24 Survey Track ☆222 · Updated last year
- Code for the paper "SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents" ☆54 · Updated 8 months ago