minghu0830 / NurViD-benchmark
☆23 · Updated last year
Alternatives and similar repositories for NurViD-benchmark
Users who are interested in NurViD-benchmark are comparing it to the repositories listed below.
- ☆85 · Updated 2 years ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆127 · Updated 2 years ago
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆64 · Updated last year
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆94 · Updated 11 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆60 · Updated last year
- ☆30 · Updated 2 years ago
- (CVPR 2023) Official implementation of the paper "Weakly Supervised Video Representation Learning with Unaligned Text for Sequential Videos…" ☆31 · Updated last year
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆54 · Updated last year
- [AAAI 2024] DGL: Dynamic Global-Local Prompt Tuning for Text-Video Retrieval ☆47 · Updated last year
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" ☆32 · Updated 6 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆39 · Updated last year
- [arXiv 2023] This repository contains the code for "MUPPET: Multi-Modal Few-Shot Temporal Action Detection" ☆15 · Updated 2 years ago
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆37 · Updated 9 months ago
- [CVPR 2024] TeachCLIP for Text-to-Video Retrieval ☆42 · Updated 8 months ago
- [CVPR 2023] HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models ☆68 · Updated last year
- ☆22 · Updated last year
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆54 · Updated last year
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆77 · Updated 9 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆85 · Updated last year
- [AAAI'25] Building a Multi-modal Spatiotemporal Expert for Zero-shot Action Recognition with CLIP ☆18 · Updated 5 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆64 · Updated last year
- The official repo for "Ref-AVS: Refer and Segment Objects in Audio-Visual Scenes" (ECCV 2024) ☆49 · Updated 2 months ago
- ☆37 · Updated 3 years ago
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning ☆26 · Updated 3 years ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆110 · Updated last year
- BEAR: a new BEnchmark on video Action Recognition ☆46 · Updated last year
- ☆26 · Updated 2 years ago
- The PyTorch implementation for "DEAL: Disentangle and Localize Concept-level Explanations for VLMs" (ECCV 2024 Strong Double Blind) ☆20 · Updated last year
- [ICML 2024] Official PyTorch implementation of CoMC: Language-Driven Cross-Modal Classifier for Zero-Shot Multi-Label Image Recognition ☆16 · Updated last year