opendatalab / LOKI
[ICLR 2025 Spotlight] The official implementation of the paper "LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models"
☆162 · Updated 4 months ago
Alternatives and similar repositories for LOKI
Users who are interested in LOKI are comparing it to the repositories listed below
- The official implementation of the paper "LEGION: Learning to Ground and Explain for Synthetic Image Detection" ☆53 · Updated 2 months ago
- FakeVLM: Advancing Synthetic Image Detection through Explainable Multimodal Models and Fine-Grained Artifact Analysis ☆68 · Updated 2 months ago
- [ICCV 2025] This repository is the official implementation of AIGI-Holmes: Towards Explainable and Generalizable AI-Generated Image Detect… ☆109 · Updated last month
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R…" ☆106 · Updated last month
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆190 · Updated last month
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆53 · Updated 2 months ago
- Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ☆61 · Updated last month
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆66 · Updated last month
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆232 · Updated last year
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models. ☆277 · Updated 2 weeks ago
- ☆71 · Updated 4 months ago
- ☆105 · Updated 5 months ago
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆249 · Updated 4 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆121 · Updated 3 weeks ago
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆38 · Updated 2 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆130 · Updated 5 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆152 · Updated 5 months ago
- Official repository for VisionZip (CVPR 2025) ☆338 · Updated last month
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆62 · Updated 3 months ago
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆59 · Updated 8 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆176 · Updated 3 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆219 · Updated last month
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆101 · Updated 10 months ago
- [AAAI 2025] This repo contains evaluation code for the paper "UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal Models in…" ☆34 · Updated 4 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆69 · Updated 3 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆125 · Updated 3 weeks ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆96 · Updated 3 months ago
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆92 · Updated last month
- Official repository of the paper "Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing" ☆84 · Updated last week
- Official implementation of MIA-DPO ☆64 · Updated 7 months ago