NVlabs / prismer
The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
☆1,306 · Updated last year
Alternatives and similar repositories for prismer:
Users interested in prismer are comparing it to the libraries listed below.
- Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning" ☆1,668 · Updated last year
- An open-source framework for training large multimodal models. ☆3,805 · Updated 4 months ago
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,297 · Updated last year
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆916 · Updated 7 months ago
- Multimodal-GPT ☆1,487 · Updated last year
- ☆1,683 · Updated 4 months ago
- Official Implementation of Paella https://arxiv.org/abs/2211.07292v2 ☆742 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,228 · Updated 2 years ago
- [A toolbox for fun.] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆801 · Updated last year
- Official repo for MM-REACT ☆941 · Updated 11 months ago
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX ☆1,750 · Updated last year
- Zero-shot Image-to-Image Translation [SIGGRAPH 2023] ☆1,092 · Updated 3 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,801 · Updated 10 months ago
- Versatile Diffusion: Text, Images and Variations All in One Diffusion Model, arXiv 2022 / ICCV 2023 ☆1,325 · Updated last year
- Consistency Distilled Diff VAE ☆2,150 · Updated last year
- Easily compute clip embeddings and build a clip retrieval system with them ☆2,475 · Updated 9 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,342 · Updated last month
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with dive… ☆1,704 · Updated last year
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆764 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆674 · Updated last year
- MetaSeg: Packaged version of the Segment Anything repository ☆965 · Updated this week
- A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer ☆1,624 · Updated last year
- Implementation of Phenaki Video, which uses Mask GIT to produce text guided videos of up to 2 minutes in length, in Pytorch ☆760 · Updated 6 months ago
- Easily turn large sets of image urls to an image dataset. Can download, resize and package 100M urls in 20h on one machine. ☆3,868 · Updated 5 months ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆478 · Updated last year
- Open-Set Grounded Text-to-Image Generation ☆2,063 · Updated 10 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,445 · Updated 5 months ago
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI ☆1,996 · Updated 6 months ago
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,545 · Updated last month
- Relate Anything Model is capable of taking an image as input and utilizing SAM to identify the corresponding mask within the image. ☆448 · Updated last year