Official code for VisProg (CVPR 2023 Best Paper!)
☆760 · Aug 26, 2024 · Updated last year
Alternatives and similar repositories for visprog
Users interested in visprog are also comparing it to the repositories listed below.
- Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning" ☆1,712 · Jan 29, 2024 · Updated 2 years ago
- (ECCVW 2025) GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆551 · Jun 3, 2025 · Updated 9 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,768 · Jan 12, 2026 · Updated last month
- ☆83 · Jul 16, 2023 · Updated 2 years ago
- Grounded Language-Image Pre-training ☆2,575 · Jan 24, 2024 · Updated 2 years ago
- An open-source framework for training large multimodal models. ☆4,071 · Aug 31, 2024 · Updated last year
- Official JAX implementation of MAGVIT: Masked Generative Video Transformer ☆995 · Jan 17, 2024 · Updated 2 years ago
- ☆643 · Feb 15, 2024 · Updated 2 years ago
- Official repo for MM-REACT ☆967 · Jan 31, 2024 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,177 · Nov 18, 2024 · Updated last year
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,343 · Oct 5, 2023 · Updated 2 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆289 · Jan 14, 2024 · Updated 2 years ago
- VisionLLM Series ☆1,137 · Feb 27, 2025 · Updated last year
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] ☆935 · Jul 6, 2024 · Updated last year
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,589 · Feb 16, 2025 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆505 · Aug 9, 2024 · Updated last year
- ☆1,842 · Jun 28, 2024 · Updated last year
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,182 · May 20, 2024 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,986 · Nov 7, 2025 · Updated 4 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆945 · Aug 5, 2025 · Updated 7 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,500 · Aug 12, 2024 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,648 · Aug 1, 2024 · Updated last year
- ☆4,577 · Sep 14, 2025 · Updated 5 months ago
- An open-source implementation of CLIP. ☆13,460 · Feb 27, 2026 · Updated last week
- ☆806 · Jul 8, 2024 · Updated last year
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,592 · Dec 6, 2024 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,771 · Aug 19, 2024 · Updated last year
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,204 · Dec 15, 2025 · Updated 2 months ago
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,808 · Jul 10, 2025 · Updated 7 months ago
- Code for 3D-LLM: Injecting the 3D World into Large Language Models ☆1,181 · Jun 6, 2024 · Updated last year
- ☆1,047 · Oct 3, 2022 · Updated 3 years ago
- [ICCV 2023 Best Paper Finalist] PyTorch implementation of DiffusionDet (https://arxiv.org/abs/2211.09788) ☆2,243 · Dec 22, 2022 · Updated 3 years ago
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with dive… ☆1,774 · Aug 29, 2023 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,933 · Mar 14, 2024 · Updated last year
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆259 · May 3, 2024 · Updated last year
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX ☆1,843 · Nov 15, 2023 · Updated 2 years ago
- Official PyTorch implementation of the paper "In-Context Learning Unlocked for Diffusion Models" ☆413 · Mar 25, 2024 · Updated last year
- Official implementation of SEED-LLaMA (ICLR 2024). ☆642 · Sep 21, 2024 · Updated last year
- ☆360 · Jan 27, 2024 · Updated 2 years ago