GAIR-NLP / anole
Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation
☆759 · Updated 9 months ago
Alternatives and similar repositories for anole
Users interested in anole are comparing it to the libraries listed below.
- Official implementation of SEED-LLaMA (ICLR 2024). ☆612 · Updated 7 months ago
- Official implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining" ☆594 · Updated last month
- Multimodal Models in Real World ☆503 · Updated 2 months ago
- LLM2CLIP makes the SOTA pretrained CLIP model even more SOTA. ☆513 · Updated last month
- Next-Token Prediction is All You Need ☆2,121 · Updated last month
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆376 · Updated 3 weeks ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆580 · Updated 7 months ago
- Long Context Transfer from Language to Vision ☆374 · Updated last month
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆542 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆740 · Updated last year
- Explore the Multimodal “Aha Moment” on 2B Model ☆586 · Updated last month
- ☆609 · Updated last year
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ☆595 · Updated last month
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,750 · Updated 9 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆439 · Updated 5 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆878 · Updated 2 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation ☆1,396 · Updated 2 weeks ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" ☆457 · Updated last year
- [ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models" ☆425 · Updated 4 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆237 · Updated 9 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR ☆1,997 · Updated 9 months ago
- Rethinking Step-by-step Visual Reasoning in LLMs ☆293 · Updated 3 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆607 · Updated last year
- Evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI" ☆424 · Updated last month
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆369 · Updated 3 weeks ago
- [CVPR'25 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆363 · Updated this week
- A fork to add multimodal model training to open-r1 ☆1,255 · Updated 3 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆529 · Updated this week
- Aligning LMMs with Factually Augmented RLHF ☆362 · Updated last year
- ☆490 · Updated 5 months ago