DLYuanGod / TinyGPT-V
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
⭐1,306 · Updated last year
Alternatives and similar repositories for TinyGPT-V
Users interested in TinyGPT-V are comparing it to the repositories listed below.
- MINT-1T: A one trillion token multimodal interleaved dataset. ⭐827 · Updated last year
- 【TMM 2025🔥】Mixture-of-Experts for Large Vision-Language Models ⭐2,294 · Updated 5 months ago
- [ICLR 2025 SLLM Spotlight 🔥] MobiLlama: Small Language Model tailored for edge devices ⭐669 · Updated 8 months ago
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ⭐3,431 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ⭐1,407 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ⭐763 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ⭐988 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ⭐1,762 · Updated last year
- An Open-source Toolkit for LLM Development ⭐2,797 · Updated 11 months ago
- A family of lightweight multimodal models. ⭐1,049 · Updated last year
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ⭐1,795 · Updated last month
- ⭐715 · Updated last year
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ⭐863 · Updated 8 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ⭐1,318 · Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ⭐1,404 · Updated 8 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture ⭐940 · Updated last year
- MultimodalC4 is a multimodal extension of C4 that interleaves millions of images with text. ⭐949 · Updated 9 months ago
- Code and model checkpoints for the AIMv1 and AIMv2 research projects. ⭐1,394 · Updated 5 months ago
- LLaVA-Interactive-Demo ⭐380 · Updated last year
- 4M: Massively Multimodal Masked Modeling ⭐1,780 · Updated 7 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ⭐684 · Updated 2 years ago
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ⭐1,650 · Updated last year
- Training LLMs with QLoRA + FSDP ⭐1,537 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ⭐1,637 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ⭐1,138 · Updated last year
- Run Mixtral-8x7B models in Colab or on consumer desktops ⭐2,328 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ⭐1,230 · Updated last year
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration ⭐1,594 · Updated last year
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ⭐850 · Updated 5 months ago
- From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :) (see the routing sketch below) ⭐786 · Updated last year
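Several entries above (the MoE vision-language models, the open MoE LLM family, and the from-scratch sparse MoE implementation) share one core mechanism: a learned router sends each token to a small subset of expert networks, so only a fraction of the parameters are active per token. Below is a minimal sketch of top-k routing in PyTorch; all names and sizes (`SparseMoE`, `n_experts`, `top_k`) are illustrative assumptions, not code from any repository listed here.

```python
# Minimal sparse mixture-of-experts layer: a router scores experts per token,
# the top-k experts run on that token, and their outputs are combined with
# renormalized gate weights. Illustrative sketch only, not any repo's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=128, d_hidden=512, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # token -> expert logits
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                     # x: (batch, seq, d_model)
        logits = self.router(x)               # (batch, seq, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                 # (batch, seq, top_k) bool
            if mask.any():
                tok = mask.any(dim=-1)        # tokens routed to expert e
                gate = (weights * mask).sum(dim=-1)[tok].unsqueeze(-1)
                out[tok] += gate * expert(x[tok])
        return out

moe = SparseMoE()
y = moe(torch.randn(2, 16, 128))
print(y.shape)  # torch.Size([2, 16, 128])
```

The per-expert loop is written for readability; production MoE implementations typically dispatch tokens with batched scatter/gather and add a load-balancing auxiliary loss so tokens spread evenly across experts.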