dhansmair / flamingo-mini
Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training
⭐165 · Updated last year
Alternatives and similar repositories for flamingo-mini:
Users interested in flamingo-mini are comparing it to the repositories listed below
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… · ⭐193 · Updated 4 months ago
- Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". · ⭐477 · Updated last year
- Language Models Can See: Plugging Visual Controls in Text Generation · ⭐257 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning · ⭐134 · Updated last year
- ⭐64 · Updated last year
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" · ⭐261 · Updated 7 months ago
- Open LLaMA Eyes to See the World · ⭐175 · Updated last year
- Implementation of PALI3 from the paper "PALI-3 Vision Language Models: Smaller, Faster, Stronger" · ⭐143 · Updated 2 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) · ⭐278 · Updated 2 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) · ⭐204 · Updated 2 years ago
- ⭐83 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering · ⭐184 · Updated last year
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" · ⭐88 · Updated 9 months ago
- Open source code for AAAI 2023 paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" · ⭐161 · Updated last year
- Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". · ⭐444 · Updated 11 months ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" · ⭐138 · Updated 2 years ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" · ⭐306 · Updated 7 months ago
- Research Trends in LLM-guided Multimodal Learning. · ⭐356 · Updated last year
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone · ⭐127 · Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models · ⭐215 · Updated 3 weeks ago
- M4 experiment logbook · ⭐56 · Updated last year
- Language Quantized AutoEncoders · ⭐95 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. · ⭐324 · Updated this week
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning · ⭐266 · Updated 10 months ago
- [ACL 2023] Official PyTorch code for Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" · ⭐131 · Updated last year
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks. · ⭐224 · Updated last year
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation · ⭐126 · Updated last year
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings) · ⭐188 · Updated 11 months ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts · ⭐187 · Updated 2 years ago
- Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters · ⭐86 · Updated last year