Vision-CAIR / ChatCaptioner
Official Repository of ChatCaptioner
☆467 · Updated 2 years ago
Alternatives and similar repositories for ChatCaptioner
Users interested in ChatCaptioner are comparing it to the libraries listed below.
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) ☆340 · Updated 2 years ago
- Relate Anything Model takes an image as input and uses SAM to identify the corresponding mask within the image. ☆456 · Updated 2 years ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆485 · Updated 2 years ago
- [Image 2 Text Para] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆825 · Updated 2 years ago
- Multi-modality pre-training ☆507 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆504 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆471 · Updated 2 years ago
- (ECCVW 2025) GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆551 · Updated 8 months ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆581 · Updated 2 years ago
- Code release for "Learning Video Representations from Large Language Models" ☆536 · Updated 2 years ago
- The official repository of "Video assistant towards large language model makes everything easy" ☆232 · Updated last year
- Fine-tuning "ImageBind One Embedding Space to Bind Them All" with LoRA ☆194 · Updated 2 years ago
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi modal … ☆364 · Updated 2 years ago
- Official implementation of SEED-LLaMA (ICLR 2024). ☆639 · Updated last year
- Open LLaMA Eyes to See the World ☆175 · Updated 2 years ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆360 · Updated 2 years ago
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. ☆356 · Updated 6 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆261 · Updated 6 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- Large-scale text-video dataset. 10 million captioned short videos. ☆674 · Updated last year
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆525 · Updated 2 years ago
- Code for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. ☆270 · Updated 2 years ago
- ☆643 · Updated last year
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆320 · Updated last year
- Official code for VisProg (CVPR 2023 Best Paper!) ☆758 · Updated last year
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆131 · Updated last year
- ☆231 · Updated 2 years ago
- ☆805 · Updated last year
- [TMLR 2023] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks. ☆232 · Updated 2 years ago
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆347 · Updated last year