kyegomez / CM3Leon
An open-source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", a multimodal model that uses a single autoregressive decoder to generate both text and images
☆363 · Updated last year
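The core idea behind CM3Leon, a single autoregressive decoder predicting one shared stream of text tokens and discrete image tokens, can be illustrated with a minimal sketch. This is not the repository's actual API: every class name, vocabulary size, and dimension below is an assumption made up for illustration, and the image tokens stand in for the output of a separate image tokenizer such as a VQ model.

```python
# Illustrative sketch only (hypothetical names, not the CM3Leon repo's API):
# a decoder-only transformer over a joint vocabulary that mixes text tokens
# and discrete image tokens, trained with plain next-token prediction.
import torch
import torch.nn as nn

class TinyMultimodalDecoder(nn.Module):
    def __init__(self, text_vocab=32000, image_vocab=8192, dim=512,
                 heads=8, layers=6, max_len=1024):
        super().__init__()
        vocab = text_vocab + image_vocab              # shared text + image codebook
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(dim, heads, 4 * dim,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab)             # next token may be text or image

    def forward(self, ids):
        t = ids.shape[1]
        x = self.tok(ids) + self.pos(torch.arange(t, device=ids.device))
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(ids.device)
        x = self.blocks(x, mask=mask)                 # causal self-attention only
        return self.head(x)                           # logits over the joint vocabulary

# The same greedy loop emits text or image tokens; a separate VQ decoder
# would turn generated image tokens back into pixels.
model = TinyMultimodalDecoder()
prompt = torch.randint(0, 32000, (1, 16))             # dummy text-prompt ids
next_id = model(prompt)[:, -1].argmax(-1)              # one autoregressive step
```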
Alternatives and similar repositories for CM3Leon
Users interested in CM3Leon are comparing it to the repositories listed below
- Official implementation of SEED-LLaMA (ICLR 2024). ☆620 · Updated 11 months ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆460 · Updated last year
- Open reproduction of MUSE for fast text2image generation. ☆354 · Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆590 · Updated 10 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆454 · Updated 8 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆861 · Updated 3 months ago
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models ☆477 · Updated 11 months ago
- Official PyTorch implementation of the paper "In-Context Learning Unlocked for Diffusion Models" ☆411 · Updated last year
- HPT - Open Multimodal LLMs from HyperGAI ☆315 · Updated last year
- ☆621 · Updated last year
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆278 · Updated last year
- Official repository of ChatCaptioner ☆465 · Updated 2 years ago
- Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models ☆313 · Updated last year
- Large-scale text-video dataset. 10 million captioned short videos. ☆654 · Updated last year
- Official repository for the LENS (Large Language Models Enhanced to See) system. ☆352 · Updated last month
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆522 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆386 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆734 · Updated 3 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆757 · Updated last year
- Unified Controllable Visual Generation Model ☆648 · Updated 6 months ago
- [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention ☆705 · Updated 7 months ago
- LLaVA-Interactive-Demo ☆377 · Updated last year
- Easily create large video datasets from video URLs ☆626 · Updated last year
- Aligning LMMs with Factually Augmented RLHF ☆371 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆316 · Updated last year
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated last month
- Better Aligning Text-to-Image Models with Human Preference. ICCV 2023 ☆290 · Updated 2 years ago
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆512 · Updated 2 years ago