NVlabs / prismer
The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
☆1,306 · Updated last year
Alternatives and similar repositories for prismer
Users interested in prismer are also comparing it to the repositories listed below.
- Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning" ☆1,708 · Updated last year
- ☆1,706 · Updated last year
- An open-source framework for training large multimodal models. ☆4,032 · Updated last year
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,335 · Updated 2 years ago
- Multimodal-GPT ☆1,509 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,266 · Updated 3 years ago
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆942 · Updated 7 months ago
- Official Implementation of Paella https://arxiv.org/abs/2211.07292v2 ☆746 · Updated 2 years ago
- ☆587 · Updated 2 years ago
- Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions" ☆1,559 · Updated last year
- [Image 2 Text Para] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆820 · Updated 2 years ago
- Official repo for MM-REACT ☆959 · Updated last year
- Versatile Diffusion: Text, Images and Variations All in One Diffusion Model, arXiv 2022 / ICCV 2023 ☆1,334 · Updated 2 years ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆772 · Updated last year
- Long-form text-to-images generation, using a pipeline of deep generative models (GPT-3 and Stable Diffusion) ☆689 · Updated 3 years ago
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ☆771 · Updated last year
- ☆714 · Updated last year
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,630 · Updated 2 years ago
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with dive… ☆1,763 · Updated 2 years ago
- An open-source implementation of Google's PaLM models ☆816 · Updated last year
- A large-scale text-to-image prompt gallery dataset based on Stable Diffusion ☆1,319 · Updated last year
- An Open-source Toolkit for LLM Development ☆2,790 · Updated 9 months ago
- Large language models (LLMs) made easy, EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,498 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,063 · Updated last year
- ICLR2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,699 · Updated last month
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated 2 years ago
- MetaSeg: Packaged version of the Segment Anything repository ☆987 · Updated last week
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,910 · Updated last year
- This repository contains the code of the CVPR 2022 paper "Image Segmentation Using Text and Image Prompts". ☆1,285 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆745 · Updated 6 months ago