Vision-CAIR / ChatCaptioner
Official Repository of ChatCaptioner
☆461 · Updated last year
Alternatives and similar repositories for ChatCaptioner:
Users interested in ChatCaptioner are comparing it to the repositories listed below.
- GRiT: A Generative Region-to-text Transformer for Object Understanding (https://arxiv.org/abs/2212.00280) ☆312 · Updated last year
- Fine-tuning "ImageBind One Embedding Space to Bind Them All" with LoRA ☆178 · Updated last year
- Code release for "Learning Video Representations from Large Language Models" ☆507 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆447 · Updated last year
- [A toolbox for fun.] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆802 · Updated last year
- Relate Anything Model takes an image as input and uses SAM to identify the corresponding masks within it. ☆449 · Updated last year
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. ☆352 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆556 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆476 · Updated 6 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆522 · Updated 8 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆494 · Updated 10 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆250 · Updated last year
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆417 · Updated 2 months ago
- [NeurIPS 2023] This repository includes the official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆309 · Updated 8 months ago
- Official implementation of SEED-LLaMA (ICLR 2024). ☆596 · Updated 4 months ago
- Grounded Segment Anything: From Objects to Parts ☆400 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆478 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆344 · Updated last year
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆262 · Updated 8 months ago
- LLaVA-Interactive-Demo ☆362 · Updated 6 months ago
- Large-scale text-video dataset. 10 million captioned short videos. ☆623 · Updated 6 months ago
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆316 · Updated 8 months ago
- Language Models Can See: Plugging Visual Controls in Text Generation ☆257 · Updated 2 years ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆588 · Updated 3 weeks ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆185 · Updated last year
- BindDiffusion: One Diffusion Model to Bind Them All ☆165 · Updated last year
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆515 · Updated last year
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆357 · Updated 2 years ago
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆156 · Updated 2 months ago