GAIR-NLP / anole
Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation
☆774 · Updated last month
Alternatives and similar repositories for anole
Users interested in anole are comparing it to the libraries listed below.
- Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraini… ☆605 · Updated 3 months ago
- Official implementation of SEED-LLaMA (ICLR 2024). ☆618 · Updated 9 months ago
- Explore the Multimodal “Aha Moment” on 2B Model ☆596 · Updated 3 months ago
- Long Context Transfer from Language to Vision ☆384 · Updated 3 months ago
- LLM2CLIP makes the SOTA pretrained CLIP model even more SOTA. ☆531 · Updated 2 weeks ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆644 · Updated last year
- A family of lightweight multimodal models. ☆1,024 · Updated 7 months ago
- Multimodal Models in Real World ☆520 · Updated 4 months ago
- ☆616 · Updated last year
- HPT - Open Multimodal LLMs from HyperGAI ☆316 · Updated last year
- Next-Token Prediction is All You Need ☆2,166 · Updated 4 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆384 · Updated 2 months ago
- [ICLR 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,587 · Updated this week
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆304 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆749 · Updated last year
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,189 · Updated last month
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆597 · Updated 2 months ago
- Aligning LMMs with Factually Augmented RLHF ☆368 · Updated last year
- Official repository for the paper PLLaVA ☆660 · Updated 11 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆382 · Updated 2 months ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆427 · Updated 2 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆311 · Updated 4 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆385 · Updated 2 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆583 · Updated 9 months ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆985 · Updated last month
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆956 · Updated 3 weeks ago
- Codebase for Aria - an Open Multimodal Native MoE ☆1,058 · Updated 5 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆460 · Updated last month
- 📖 This is a repository for organizing papers, codes and other resources related to unified multimodal models. ☆613 · Updated 2 weeks ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆450 · Updated 7 months ago