2U1 / SmolVLM-Finetune
An open-source implementation for fine-tuning SmolVLM.
☆52 Updated last month
Alternatives and similar repositories for SmolVLM-Finetune
Users interested in SmolVLM-Finetune are comparing it to the libraries listed below
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆126 Updated 4 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆167 Updated last year
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ☆106 Updated last year
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆205 Updated 2 weeks ago
- ☆119 Updated last year
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 Updated last year
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, as well as the RL tool Vision-R1. ☆240 Updated 2 months ago
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation, arXiv 2024 ☆64 Updated 2 weeks ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 Updated 11 months ago
- [ICCV2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance ☆114 Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 Updated 3 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆95 Updated 4 months ago
- [ICCV2025] Referring to any person or object given a natural language description. Code base for RexSeek and the HumanRef Benchmark ☆169 Updated 2 weeks ago
- [EMNLP-2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆59 Updated 2 months ago
- [ECCV 2024] SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding ☆62 Updated last year
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆88 Updated 4 months ago
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆153 Updated last week
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆210 Updated last year
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆54 Updated 7 months ago
- Reproduction of LLaVA-v1.5 based on the Llama-3-8b LLM backbone. ☆65 Updated last year
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆80 Updated 5 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770). ☆158 Updated last year
- An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis https://arxiv.org/abs/2503.08… ☆32 Updated 4 months ago
- [AAAI2025] ChatterBox: Multi-round Multimodal Referring and Grounding, Multimodal, Multi-round dialogues ☆57 Updated 5 months ago
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆103 Updated last year
- [ICCV 2025] Dynamic-VLM ☆25 Updated 10 months ago
- Project for "LaSagnA: Language-based Segmentation Assistant for Complex Queries". ☆61 Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆59 Updated 11 months ago
- [ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆153 Updated 2 months ago
- ☆53 Updated 9 months ago