fmthoker / SEVERE-BENCHMARK
☆26 · Updated 2 years ago
Alternatives and similar repositories for SEVERE-BENCHMARK
Users interested in SEVERE-BENCHMARK are comparing it to the repositories listed below.
- Official code for "Disentangling Visual Embeddings for Attributes and Objects" (CVPR 2022) ☆35 · Updated 2 years ago
- Data for the VAW dataset, as described in the CVPR 2021 paper "Learning to Predict Visual Attributes in th…" ☆67 · Updated 3 years ago
- Implementation of the CVPR 2021 paper "Temporal Query Networks for Fine-grained Video Understanding" ☆62 · Updated 3 years ago
- [ECCV 2022] Grounding Visual Representations with Texts for Domain Generalization ☆31 · Updated 2 years ago
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆33 · Updated 2 years ago
- Compress conventional vision-language pre-training data ☆52 · Updated last year
- Video-Text Representation Learning via Differentiable Weak Temporal Alignment (CVPR 2022) ☆17 · Updated last year
- ☆109 · Updated 2 years ago
- Perceptual Grouping in Contrastive Vision-Language Models (ICCV 2023) ☆37 · Updated last year
- Official PyTorch implementation of "Learning To Recognize Procedural Activities with Distant Supervision". In this repository, w… ☆43 · Updated 2 years ago
- COLA: evaluate how well your vision-language model can Compose Objects Localized with Attributes ☆24 · Updated 9 months ago
- A Unified Framework for Video-Language Understanding ☆59 · Updated 2 years ago
- The Adverbs in Recipes (AIR) dataset and code from the CVPR 2023 paper "Learning Action Changes by Me…" ☆13 · Updated 2 years ago
- ☆56 · Updated 3 years ago
- ☆59 · Updated 3 years ago
- Official implementation of "Elaborative Rehearsal for Zero-shot Action Recognition" (ICCV 2021) ☆36 · Updated 3 years ago
- [CVPR 2022] Code for "Motion-aware Contrastive Video Representation Learning via Foreground-background Merging" ☆49 · Updated last year
- [CVPR 2022 Oral] Temporal Alignment Networks for Long-term Video. Tengda Han, Weidi Xie, Andrew Zisserman.