DLYuanGod / TinyGPT-V
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
☆1,300 · Updated last year
Alternatives and similar repositories for TinyGPT-V
Users interested in TinyGPT-V are comparing it to the repositories listed below.
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,262 · Updated 3 months ago
- 【EMNLP 2024🔥】 Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,389 · Updated 10 months ago
- MINT-1T: A one trillion token multimodal interleaved dataset. ☆826 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,746 · Updated last year
- An Open-source Toolkit for LLM Development ☆2,790 · Updated 9 months ago
- ☆714 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,398 · Updated last year
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆862 · Updated 5 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆986 · Updated last year
- A family of lightweight multimodal models. ☆1,046 · Updated 11 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,699 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆758 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆681 · Updated last year
- [ICLR-2025-SLLM Spotlight 🔥] MobiLlama: Small Language Model tailored for edge devices ☆663 · Updated 5 months ago
- LLaVA-Interactive-Demo ☆379 · Updated last year
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. ☆1,378 · Updated 2 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,285 · Updated last year
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆842 · Updated 2 months ago
- Code and models for the ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model ☆3,576 · Updated 5 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,207 · Updated last year
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆942 · Updated 7 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,767 · Updated 4 months ago
- An open-source framework for training large multimodal models. ☆4,032 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,687 · Updated last year
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,062 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,620 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,132 · Updated last year
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆861 · Updated last year
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,323 · Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,386 · Updated 6 months ago