johanmodin/clifs
Contrastive Language-Image Forensic Search (CLIFS) allows free-text searching through videos using OpenAI's CLIP model.
☆442 · updated 2 years ago
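In spirit, this kind of search embeds sampled video frames with CLIP's image encoder, embeds the free-text query with CLIP's text encoder, and ranks frames by cosine similarity in the shared embedding space. The sketch below illustrates that retrieval loop under stated assumptions: it uses the openai/CLIP package, OpenCV, and PyTorch, and the helper names (`index_video`, `search`) are illustrative rather than CLIFS's actual API.

```python
# Minimal sketch of CLIP-based free-text frame search.
# Assumptions: openai/CLIP (pip install git+https://github.com/openai/CLIP.git),
# opencv-python, torch, Pillow. Not CLIFS's actual code or API.
import cv2
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def index_video(path, every_n_frames=30):
    """Encode every n-th frame of a video into L2-normalized CLIP image embeddings."""
    cap = cv2.VideoCapture(path)
    features, frame_ids = [], []
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            image = preprocess(Image.fromarray(rgb)).unsqueeze(0).to(device)
            with torch.no_grad():
                feat = model.encode_image(image)
            features.append(feat / feat.norm(dim=-1, keepdim=True))
            frame_ids.append(i)
        i += 1
    cap.release()
    return torch.cat(features), frame_ids

def search(query, features, frame_ids, top_k=5):
    """Rank indexed frames by cosine similarity to a free-text query."""
    tokens = clip.tokenize([query]).to(device)
    with torch.no_grad():
        text = model.encode_text(tokens)
    text = text / text.norm(dim=-1, keepdim=True)
    scores = (features @ text.T).squeeze(1)
    best = scores.topk(min(top_k, len(frame_ids)))
    return [(frame_ids[i], scores[i].item()) for i in best.indices.tolist()]
```

Indexing is the expensive step and only needs to run once per video; each subsequent query then costs a single text-encoder forward pass plus one matrix multiply, e.g. `feats, ids = index_video("traffic.mp4")` followed by `search("a red truck", feats, ids)` (file name and query are hypothetical).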
Related projects:
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" · ☆845 · updated 5 months ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch · ☆654 · updated 2 years ago
- Grounded Language-Image Pre-training · ☆2,154 · updated 7 months ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" · ☆696 · updated 6 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) · ☆1,110 · updated 2 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch · ☆1,037 · updated 9 months ago
- Code for ALBEF: a new vision-language pre-training method · ☆1,505 · updated 2 years ago
- Robust fine-tuning of zero-shot models · ☆629 · updated 2 years ago
- [ICCV 2021 Oral] Official PyTorch implementation for "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers" · ☆776 · updated last year
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 · ☆721 · updated 2 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) · ☆1,679 · updated 4 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm · ☆628 · updated 2 years ago
- Code release for "SLIP: Self-supervision meets Language-Image Pre-training" · ☆743 · updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) · ☆441 · updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" · ☆1,361 · updated 5 months ago
- GIT: A Generative Image-to-text Transformer for Vision and Language · ☆543 · updated 9 months ago
- awesome grounding: A curated list of research papers in visual grounding · ☆1,001 · updated last year
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training · ☆1,324 · updated 9 months ago
- OpenAI CLIP text encoders for multiple languages! · ☆746 · updated last year
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding · ☆1,300 · updated 3 weeks ago
- Simple image captioning model · ☆1,285 · updated 3 months ago
- VideoX: a collection of video cross-modal models · ☆968 · updated 3 months ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" · ☆382 · updated 10 months ago
- Multi-modality pre-training · ☆468 · updated 4 months ago
- Official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" · ☆496 · updated 9 months ago
- Language-Driven Semantic Segmentation · ☆705 · updated 2 months ago
- [ICLR 2022] Official implementation of UniFormer · ☆816 · updated 5 months ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch · ☆1,193 · updated last year