facebookresearch / VLaMP
Code for “Pretrained Language Models as Visual Planners for Human Assistance”
☆60 · Updated last year
Alternatives and similar repositories for VLaMP:
Users interested in VLaMP are comparing it to the libraries listed below.
- [ICCV 2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ☆75 · Updated last year
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" ☆52 · Updated last year
- Language Quantized AutoEncoders ☆99 · Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆51 · Updated last year
- Code release for the CVPR'23 paper "PartDistillation: Learning Parts from Instance Segmentation" ☆59 · Updated last year
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆32 · Updated last year
- ☆47 · Updated last year
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆39 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆45 · Updated last year
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆33 · Updated 2 years ago
- ☆68 · Updated 7 months ago
- A Video Tokenizer Evaluation Dataset ☆101 · Updated last month
- https://arxiv.org/abs/2209.15162 ☆49 · Updated 2 years ago
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆104 · Updated last year
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆30 · Updated 7 months ago
- Code release of the research paper "Exploring Long-Sequence Masked Autoencoders" ☆99 · Updated 2 years ago
- Implementation of 🌻 Mirasol, SOTA multimodal autoregressive model out of Google DeepMind, in PyTorch ☆88 · Updated last year
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆76 · Updated 2 years ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆54 · Updated last year
- Un-*** 50-billion multimodality dataset ☆24 · Updated 2 years ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" https://arxiv.org/abs/2303.13496 ☆86 · Updated 6 months ago
- Language Repository for Long Video Understanding ☆31 · Updated 8 months ago
- M4 experiment logbook ☆56 · Updated last year
- ☆53 · Updated 2 years ago
- [ACL 2023] Code and data for our paper "Measuring Progress in Fine-grained Vision-and-Language Understanding" ☆13 · Updated last year
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆48 · Updated 3 weeks ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆116 · Updated 7 months ago
- ElasticTok: Adaptive Tokenization for Image and Video ☆54 · Updated 3 months ago
- [ICLR 2022] RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning ☆64 · Updated 2 years ago