Vision-CAIR / ChatCaptioner
Official Repository of ChatCaptioner
☆464 · Updated 2 years ago
Alternatives and similar repositories for ChatCaptioner
Users interested in ChatCaptioner are comparing it to the repositories listed below.
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) ☆329 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆459 · Updated last year
- Code release for "Learning Video Representations from Large Language Models" ☆526 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- Official implementation of SEED-LLaMA (ICLR 2024). ☆618 · Updated 10 months ago
- [Image 2 Text Para] Transform an image into a unique paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, and ControlNet. ☆814 · Updated 2 years ago
- Multi-modality pre-training ☆496 · Updated last year
- ☆616 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆489 · Updated 11 months ago
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi modal … ☆362 · Updated last year
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆535 · Updated last month
- Relate Anything Model takes an image as input and uses SAM to identify the corresponding masks within the image. ☆456 · Updated 2 years ago
- Fine-tuning "ImageBind One Embedding Space to Bind Them All" with LoRA ☆185 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆635 · Updated 5 months ago
- The official repository of "Video assistant towards large language model makes everything easy" ☆229 · Updated 6 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆345 · Updated 6 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
- Large-scale text-video dataset. 10 million captioned short videos. ☆646 · Updated 11 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆521 · Updated last year
- MMICL, a state-of-the-art VLM from PKU with in-context learning ability ☆352 · Updated last year
- Code for VPGTrans: Transfer Visual Prompt Generator across LLMs (VL-LLaMA, VL-Vicuna). ☆273 · Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆583 · Updated 9 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆450 · Updated 7 months ago
- Open LLaMA Eyes to See the World ☆174 · Updated 2 years ago
- ☆228 · Updated last year
- Official code for VisProg (CVPR 2023 Best Paper!) ☆736 · Updated 10 months ago
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆315 · Updated last year
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆267 · Updated last year
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆335 · Updated last year