kohjingyu / fromage
🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs".
★486 · Updated Oct 30, 2023
Alternatives and similar repositories for fromage
Users interested in fromage are comparing it to the repositories listed below.
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". (★471, updated Jan 19, 2024)
- An open-source framework for training large multimodal models. (★4,068, updated Aug 31, 2024)
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. (★952, updated Mar 19, 2025)
- Official implementation of SEED-LLaMA (ICLR 2024). (★639, updated Sep 21, 2024)
- LAVIS - A One-stop Library for Language-Vision Intelligence (★11,166, updated Nov 18, 2024)
- 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… (★3,292, updated Mar 5, 2024)
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters (★5,936, updated Mar 14, 2024)
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV2024) (★340, updated Jan 8, 2024)
- Official Repository of ChatCaptioner (★467, updated Apr 13, 2023)
- Official JAX implementation of MAGVIT: Masked Generative Video Transformer (★993, updated Jan 17, 2024)
- Easily turn large sets of image urls to an image dataset. Can download, resize and package 100M urls in 20h on one machine. (★4,358, updated Oct 19, 2025)
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" (★320, updated Jun 3, 2024)
- DataComp: In search of the next generation of multimodal datasets (★770, updated Apr 28, 2025)
- Official repo for MM-REACT (★965, updated Jan 31, 2024)
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… (★2,555, updated Apr 24, 2024)
- [CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" (★151, updated Jun 7, 2023)
- Emu Series: Generative Multimodal Models from BAAI (★1,765, updated Jan 12, 2026)
- GIT: A Generative Image-to-text Transformer for Vision and Language (★580, updated Dec 2, 2023)
- Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned and more will be updated) (★3,989, updated Jun 12, 2024)
- Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations. [EMNLP 2022] (★136, updated Sep 29, 2024)
- (★805, updated Jul 8, 2024)
- COYO-700M: Large-scale Image-Text Pair Dataset (★1,251, updated Nov 30, 2022)
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings) (★203, updated Jan 28, 2024)
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language (★1,342, updated Oct 5, 2023)
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training (★169, updated Apr 27, 2023)
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. (★356, updated Jul 22, 2025)
- Grounded Language-Image Pre-training (★2,573, updated Jan 24, 2024)
- When do we not need larger vision models? (★412, updated Feb 8, 2025)
- [CVPR 2023] Learning Visual Representations via Language-Guided Sampling (★149, updated Apr 13, 2023)
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" (★27, updated Nov 29, 2023)
- SVIT: Scaling up Visual Instruction Tuning (★166, updated Jun 20, 2024)
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale (★213, updated Feb 27, 2024)
- An open source implementation of CLIP. (★13,383, updated this week)
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022) (★209, updated Dec 18, 2022)
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language". (★18, updated Sep 17, 2021)
- Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions" (★1,559, updated Dec 26, 2023)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (★674, updated Sep 19, 2022)
- VisionLLM Series (★1,137, updated Feb 27, 2025)
- Multimodal-GPT (★1,518, updated Jun 4, 2023)