facebookresearch / HierVL
[CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings
☆46 · Updated 2 years ago
Alternatives and similar repositories for HierVL
Users interested in HierVL are comparing it to the repositories listed below.
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆33 · Updated 2 years ago
- ☆71 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆102 · Updated last year
- ☆73 · Updated last year
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 3 years ago
- https://arxiv.org/abs/2209.15162 ☆53 · Updated 3 years ago
- ☆58 · Updated last month
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding" ☆34 · Updated last year
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆195 · Updated 2 years ago
- Official code for our CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆46 · Updated last year
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated 2 years ago
- Test-Time Training on Video Streams ☆66 · Updated 2 years ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) ☆58 · Updated 5 months ago
- Official code repo of "PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs" ☆26 · Updated last year
- Official repository of the paper "Subobject-level Image Tokenization" (ICML 2025) ☆92 · Updated 6 months ago
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆36 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆105 · Updated 2 years ago
- Official implementation of the ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long …" ☆95 · Updated last year
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" (https://arxiv.org/abs/2303.13496) ☆92 · Updated 9 months ago
- Code and models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆61 · Updated 2 years ago
- ☆58 · Updated last year
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Updated last year
- [ICLR 2024] Code and models for "COSA: Concatenated Sample Pretrained Vision-Language Foundation Model" ☆43 · Updated last year
- Matryoshka Multimodal Models ☆121 · Updated last year
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆35 · Updated 2 years ago
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" ☆55 · Updated 2 years ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆131 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated 2 years ago
- ☆60 · Updated 2 years ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆202 · Updated 2 years ago