retkowsky / florence-2
Florence-2
☆68 · Updated 5 months ago
Alternatives and similar repositories for florence-2
Users interested in florence-2 are comparing it to the libraries listed below.
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆194 · Updated 5 months ago
- [ICCV2025] Referring any person or objects given a natural language description. Code base for RexSeek and HumanRef Benchmark ☆138 · Updated 3 months ago
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ☆78 · Updated last year
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆88 · Updated 2 weeks ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆71 · Updated last month
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆159 · Updated 6 months ago
- Quick exploration into fine-tuning Florence-2 ☆323 · Updated 9 months ago
- A family of highly capable yet efficient large multimodal models ☆185 · Updated 10 months ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 4 months ago
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆68 · Updated last month
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆156 · Updated 9 months ago
- LLM2CLIP makes a SOTA pretrained CLIP model even more SOTA ☆531 · Updated 2 weeks ago
- The official implementation of "VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning" ☆222 · Updated last month
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆125 · Updated 8 months ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated 3 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆163 · Updated 9 months ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆429 · Updated 3 months ago
- A CPU Realtime VLM in 500M. Surpassed Moondream2 and SmolVLM. Training from scratch with ease. ☆220 · Updated 2 months ago
- Codebase for the Recognize Anything Model (RAM) ☆81 · Updated last year
- [ICCV2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance ☆99 · Updated last year
- Official repo of the Griffon series, including v1 (ECCV 2024), v2, and G ☆223 · Updated last month
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 9 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆384 · Updated 2 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆120 · Updated 3 weeks ago
- ☆179 · Updated 9 months ago
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning ☆228 · Updated 3 weeks ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support ☆97 · Updated 5 months ago
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆279 · Updated 3 months ago
- The official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension" ☆208 · Updated this week
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆278 · Updated last year