🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs".
⭐485 · updated Oct 30, 2023
Alternatives and similar repositories for fromage
Users that are interested in fromage are comparing it to the libraries listed below.
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" — ⭐473 · updated Jan 19, 2024
- An open-source framework for training large multimodal models. — ⭐4,079 · updated Aug 31, 2024
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. — ⭐954 · updated Mar 19, 2025
- Official implementation of SEED-LLaMA (ICLR 2024). — ⭐642 · updated Sep 21, 2024
- LAVIS - A One-stop Library for Language-Vision Intelligence — ⭐11,194 · updated Nov 18, 2024
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) — ⭐341 · updated Jan 8, 2024
- 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… — ⭐3,344 · updated Mar 5, 2024
- Official Repository of ChatCaptioner — ⭐468 · updated Apr 13, 2023
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch — ⭐1,272 · updated Oct 18, 2022
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters — ⭐5,932 · updated Mar 14, 2024
- Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations [EMNLP 2022] — ⭐137 · updated Sep 29, 2024
- Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine. — ⭐4,385 · updated Oct 19, 2025
- [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" — ⭐151 · updated Jun 7, 2023
- Emu Series: Generative Multimodal Models from BAAI — ⭐1,772 · updated Jan 12, 2026
- Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned; more will be updated) — ⭐3,986 · updated Jun 12, 2024
- Official JAX implementation of MAGVIT: Masked Generative Video Transformer — ⭐995 · updated Jan 17, 2024
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" — ⭐320 · updated Jun 3, 2024
- ⭐807 · updated Jul 8, 2024
- Counterfactual Reasoning VQA Dataset — ⭐28 · updated Nov 23, 2023
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… — ⭐2,557 · updated Apr 24, 2024
- Official repo for MM-REACT — ⭐969 · updated Jan 31, 2024
- [TACL/EMNLP'24] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study — ⭐16 · updated Nov 22, 2024
- DataComp: In search of the next generation of multimodal datasets — ⭐773 · updated Apr 28, 2025
- COYO-700M: Large-scale Image-Text Pair Dataset — ⭐1,251 · updated Nov 30, 2022
- GIT: A Generative Image-to-text Transformer for Vision and Language — ⭐579 · updated Dec 2, 2023
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT-2, EMNLP 2022 (Findings) — ⭐204 · updated Jan 28, 2024
- Official repository for the LENS (Large Language Models Enhanced to See) system — ⭐355 · updated Jul 22, 2025
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) — ⭐211 · updated Dec 18, 2022
- [CVPR 2023] Learning Visual Representations via Language-Guided Sampling — ⭐150 · updated Apr 13, 2023
- [CVPR 2023] Official implementation of X-Decoder for generalized decoding for pixel, image, and language — ⭐1,341 · updated Oct 5, 2023
- ⭐644 · updated Feb 15, 2024
- Grounded Language-Image Pre-training — ⭐2,585 · updated Jan 24, 2024
- ⭐196 · updated Mar 5, 2025
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training — ⭐169 · updated Apr 27, 2023
- VisionLLM Series — ⭐1,139 · updated Feb 27, 2025
- SVIT: Scaling up Visual Instruction Tuning — ⭐166 · updated Jun 20, 2024
- When do we not need larger vision models? — ⭐416 · updated Feb 8, 2025
- An open-source implementation of CLIP — ⭐13,579 · updated Mar 12, 2026
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale — ⭐214 · updated Feb 27, 2024