vinid / neg_clip
NegCLIP.
☆35 · Updated 2 years ago
Alternatives and similar repositories for neg_clip
Users interested in neg_clip are comparing it to the repositories listed below.
- ☆59 · Updated 2 years ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆62 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆55 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆91 · Updated 11 months ago
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆50 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (accepted by CVPR 2024) ☆48 · Updated last year
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" ☆47 · Updated 2 years ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆36 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆51 · Updated 5 months ago
- Official repository for the A-OKVQA dataset ☆99 · Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆71 · Updated 2 months ago
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆94 · Updated last month
- ☆70 · Updated last year
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆21 · Updated 9 months ago
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft ☆44 · Updated last year
- Official repository for the CoMM dataset ☆48 · Updated 9 months ago
- [CVPRW-25 MMFM] Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…" ☆50 · Updated last year
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆42 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆34 · Updated 2 years ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆29 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆150 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆65 · Updated 2 weeks ago
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights ☆50 · Updated 6 months ago
- [ICLR 2023] DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆137 · Updated 2 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆85 · Updated last year
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆76 · Updated 6 months ago
- Official repository for the paper "Visually-Prompted Language Model for Fine-Grained Scene Graph Generation in an Open World" ☆48 · Updated last year