2U1 / SmolVLM-Finetune
An open-source implementation for fine-tuning SmolVLM.
☆42 · Updated 3 months ago
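The repository itself targets SmolVLM fine-tuning. As a rough orientation (not the 2U1/SmolVLM-Finetune code), a minimal sketch using the standard Hugging Face `transformers` API might look like the following; the checkpoint name, dataset schema, and label handling are illustrative assumptions.

```python
# Minimal sketch of SmolVLM fine-tuning with Hugging Face transformers.
# NOT the 2U1/SmolVLM-Finetune implementation: checkpoint name, dataset
# fields, and label handling below are illustrative assumptions.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed base checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def collate(examples):
    """Turn (image, question, answer) examples into padded model inputs."""
    texts, images = [], []
    for ex in examples:  # `ex` follows a placeholder dataset schema
        messages = [
            {"role": "user", "content": [
                {"type": "image"},
                {"type": "text", "text": ex["question"]},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": ex["answer"]},
            ]},
        ]
        texts.append(processor.apply_chat_template(messages, add_generation_prompt=False))
        images.append([ex["image"]])
    batch = processor(text=texts, images=images, return_tensors="pt", padding=True)
    # Simple causal-LM labels; a real setup would typically mask padding and
    # image tokens with -100 so they don't contribute to the loss.
    batch["labels"] = batch["input_ids"].clone()
    return batch

# Training would then use transformers.Trainer (or a plain PyTorch loop)
# with this collate function and a vision-language instruction dataset.
```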
Alternatives and similar repositories for SmolVLM-Finetune
Users interested in SmolVLM-Finetune are comparing it to the libraries listed below.
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆165 · Updated 10 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆198 · Updated 6 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆54 · Updated 2 weeks ago
- ☆119 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆125 · Updated 9 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆115 · Updated this week
- Official repo of the Griffon series, including v1 (ECCV 2024), v2, and G ☆228 · Updated 2 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆156 · Updated 10 months ago
- Pixel-Level Reasoning Model trained with RL ☆187 · Updated last month
- [CVPR 2025] DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception ☆70 · Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆209 · Updated 7 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆160 · Updated 7 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆73 · Updated last month
- LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆145 · Updated last week
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆84 · Updated 2 months ago
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆207 · Updated last year
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆102 · Updated last month
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆78 · Updated last week
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im… ☆115 · Updated last year
- A family of highly capable yet efficient large multimodal models ☆186 · Updated 11 months ago
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆238 · Updated 11 months ago
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ☆83 · Updated last year
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆49 · Updated 3 weeks ago
- [CVPR'24 Highlight] PyTorch Implementation of Object Recognition as Next Token Prediction ☆180 · Updated 3 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 9 months ago
- [ICCV2025] Referring to any person or object given a natural language description. Code base for RexSeek and the HumanRef Benchmark ☆149 · Updated 3 months ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 5 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆83 · Updated last month
- Project for "LaSagnA: Language-based Segmentation Assistant for Complex Queries". ☆59 · Updated last year