ispamm / GRAM
Official PyTorch repository for GRAM
☆ 115 · Updated 9 months ago
Alternatives and similar repositories for GRAM
Users interested in GRAM are comparing it to the repositories listed below.
- [CVPR 2025] Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation ☆ 81 · Updated last month
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆ 58 · Updated last year
- Code for the paper "Compositional Entailment Learning for Hyperbolic Vision-Language Models" ☆ 98 · Updated 7 months ago
- Official Repository for "Learning Trimodal Relation for Audio-Visual Question Answering with Missing Modality" (ECCV 2024) ☆ 15 · Updated last year
- Codebase for the paper "TIM: A Time Interval Machine for Audio-Visual Action Recognition" ☆ 52 · Updated last year
- A Python implementation of Certifiable Robust Multi-modal Training ☆ 19 · Updated 7 months ago
- Code for the paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" (CVPR 2024) ☆ 269 · Updated 4 months ago
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆ 348 · Updated last month
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆ 143 · Updated 5 months ago
- A curated list of awesome self-supervised learning methods in videos ☆ 166 · Updated 2 months ago
- Awesome papers & datasets specifically focused on long-term videos ☆ 351 · Updated 3 months ago
- [CVPR 2024 Highlight] Official implementation of the paper: Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-… ☆ 40 · Updated 9 months ago
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆ 34 · Updated 2 months ago
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆ 132 · Updated 5 months ago
- A suite for video modeling with Mamba ☆ 289 · Updated last year
- [ICLR 2024] SemiReward: A General Reward Model for Semi-supervised Learning ☆ 77 · Updated 2 months ago
- [AAAI 2024 Oral] M2CLIP: A Multimodal, Multi-Task Adapting Framework for Video Action Recognition ☆ 72 · Updated last year
- Video Chain of Thought: code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆ 178 · Updated 11 months ago
- [AAAI 2025] Official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" ☆ 49 · Updated 10 months ago
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆ 95 · Updated last year
- Official implementation of the ECCV 2024 paper "Facial Affective Behavior Analysis with Instruction Tuning" ☆ 29 · Updated last year
- Codebase for "Multimodal Distillation for Egocentric Action Recognition" (ICCV 2023) ☆ 32 · Updated 2 years ago
- Repository for "MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance" (ICML 2024) ☆ 54 · Updated last year
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆ 294 · Updated last year
- A comprehensive survey of Composed Multi-modal Retrieval (CMR), including Composed Image Retrieval (CIR) and Composed Video Retrieval (CV… ☆ 80 · Updated 2 weeks ago
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆ 64 · Updated last year
- An easy way to apply LoRA to CLIP. Implementation of the paper "Low-Rank Few-Shot Adaptation of Vision-Language Models" (CLIP-LoRA) [CVPR… ☆ 283 · Updated 8 months ago
- ☆ 47 · Updated last year
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆ 344 · Updated last year
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models ☆ 98 · Updated 3 months ago