gimpong / AAAI25-S5VH
The code for the paper "Efficient Self-Supervised Video Hashing with Selective State Spaces" (AAAI'25).
☆19 · Updated 4 months ago
Alternatives and similar repositories for AAAI25-S5VH
Users interested in AAAI25-S5VH are comparing it to the libraries listed below.
- ☆46 · Updated last year
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆50 · Updated 2 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆50 · Updated 6 months ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆49 · Updated 6 months ago
- [NeurIPS'24] I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing ☆27 · Updated last week
- ☆46 · Updated last month
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 · Updated last year
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆23 · Updated 7 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆87 · Updated last year
- Visual self-questioning for large vision-language assistants. ☆45 · Updated 4 months ago
- Official repository of Personalized Visual Instruct Tuning ☆33 · Updated 9 months ago
- ☆32 · Updated 2 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆76 · Updated 3 weeks ago
- CLIP-MoE: Mixture of Experts for CLIP ☆50 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆61 · Updated last year
- (CVPR 2024) "Unsegment Anything by Simulating Deformation" ☆29 · Updated last year
- ICM-Assistant: Instruction-tuning Multimodal Large Language Models for Rule-based Explainable Image Content Moderation (AAAI 2025) ☆13 · Updated 3 months ago
- [NeurIPS'25] ColorBench: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and R… ☆29 · Updated 2 months ago
- Official implementation of DiffCLIP: Differential Attention Meets CLIP ☆48 · Updated 9 months ago
- [NeurIPS 2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector" ☆37 · Updated last year
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆44 · Updated this week
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ☆23 · Updated 9 months ago
- ☆23 · Updated last year
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆55 · Updated 8 months ago
- [ICLR 2025] γ-MoD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆40 · Updated last month
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆47 · Updated 2 months ago
- ☆23 · Updated 6 months ago
- Official implementation of TagAlign ☆35 · Updated last year
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆95 · Updated 8 months ago