NickyFot / EmoCommonSense
Official repository for "VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning"
☆24 · Updated last year
Alternatives and similar repositories for EmoCommonSense
Users who are interested in EmoCommonSense are comparing it to the repositories listed below.
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆143 · Updated 4 months ago
- ☆121 · Updated last year
- Official PyTorch implementation of the paper "Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring" ☆107 · Updated last year
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆64 · Updated last year
- ☆54 · Updated last year
- [CVPR 2024] TeachCLIP for Text-to-Video Retrieval ☆42 · Updated 8 months ago
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆94 · Updated 11 months ago
- Official implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" ☆92 · Updated 10 months ago
- [ICCV 2023] DiffusionRet: Generative Text-Video Retrieval with Diffusion Model ☆139 · Updated last year
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆66 · Updated 2 years ago
- [AAAI 2024] DGL: Dynamic Global-Local Prompt Tuning for Text-Video Retrieval ☆47 · Updated last year
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆90 · Updated last year
- Official PyTorch implementation of the paper "Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner" ☆15 · Updated 2 years ago
- ☆80 · Updated last year
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆66 · Updated last year
- ☆30 · Updated 2 years ago
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆121 · Updated last year
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆103 · Updated 2 years ago
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners" ☆301 · Updated last year
- [IJCAI 2023] Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment ☆53 · Updated last year
- Code for the CVPR 2023 paper "Collaborative Noisy Label Cleaner: Learning Scene-aware Trailers for Multi-modal Highlight Detection in Movies" ☆18 · Updated 2 years ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆107 · Updated 11 months ago
- Official implementation of "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆180 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆127 · Updated 2 years ago
- [CVPR 2023 Highlight & TPAMI] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning ☆123 · Updated last year
- [ICCV 2023] Accurate and Fast Compressed Video Captioning ☆51 · Updated 5 months ago
- ☆85 · Updated 2 years ago
- ☆110 · Updated 3 years ago
- Official implementation of SnAG (CVPR 2024) ☆56 · Updated 8 months ago
- (TIP 2023) Concept-Aware Video Captioning: Describing Videos with Effective Prior Information ☆31 · Updated last year