retkowsky / florence-2
Florence-2
☆60 · Updated last month
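For orientation, the snippet below is a minimal inference sketch following the usage pattern published on the Hugging Face model card for microsoft/Florence-2-base; the image URL is a placeholder, and the `<OD>` task token is one of several (e.g. `<CAPTION>`, `<OCR>`) documented on the model card.

```python
# Minimal Florence-2 inference sketch (pattern from the Hugging Face model card).
# Assumes: pip install transformers pillow requests; the model ships its own
# processing code, hence trust_remote_code=True.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# <OD> is the object-detection task token; the model card lists others
# such as <CAPTION> and <OCR>.
prompt = "<OD>"
image = Image.open(
    requests.get("https://example.com/image.jpg", stream=True).raw  # placeholder URL
)

inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
# The remote-code processor parses the raw token stream into boxes and labels.
result = processor.post_process_generation(
    raw, task="<OD>", image_size=(image.width, image.height)
)
print(result)  # e.g. {'<OD>': {'bboxes': [...], 'labels': [...]}}
```

Swapping the task token is the whole prompting interface: Florence-2 routes detection, captioning, grounding, and OCR through these special tokens rather than free-form prompts.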
Alternatives and similar repositories for florence-2:
Users interested in florence-2 are comparing it to the libraries listed below.
- Codebase for the Recognize Anything Model (RAM) ☆75 · Updated last year
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆168 · Updated 2 months ago
- Quick exploration into fine-tuning Florence-2 ☆304 · Updated 6 months ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆375 · Updated last week
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated last month
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆234 · Updated 7 months ago
- [ICCV 2023] Segment Every Reference Object in Spatial and Temporal Spaces ☆236 · Updated last month
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆57 · Updated last month
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆275 · Updated 11 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models (a conceptual sketch of this pipeline follows after the list). ☆116 · Updated 7 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆154 · Updated 2 months ago
- [ECCV 2024] This is an official implementation for "PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model" ☆234 · Updated 2 months ago
- An open-source implementation for fine-tuning Molmo-7B-D and Molmo-7B-O by allenai. ☆54 · Updated 2 months ago
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770). ☆154 · Updated 5 months ago
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆88 · Updated last year
- Refer to any person or object given a natural language description. Codebase for RexSeek and the HumanRef benchmark. ☆84 · Updated this week
- Official repo of the Griffon series, including v1 (ECCV 2024), v2, and G ☆132 · Updated this week
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆315 · Updated 8 months ago
- ☆167 · Updated 5 months ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding. ☆215 · Updated last month
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆122 · Updated 7 months ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆193 · Updated 2 months ago
- [CVPR 2024] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloading… ☆217 · Updated 5 months ago
- A real-time CPU VLM with 500M parameters. Surpasses Moondream2 and SmolVLM. Trains from scratch with ease. ☆165 · Updated 3 weeks ago
- A family of highly capable yet efficient large multimodal models ☆178 · Updated 7 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆149 · Updated 6 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆369 · Updated this week
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆199 · Updated 2 months ago
- Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion ☆299 · Updated last week
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" ☆173 · Updated 3 months ago
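As noted in the auto-labeling entry above, the common recipe is to have Florence-2 ground a text phrase to boxes and then let Segment Anything 2 upgrade those boxes to masks. The sketch below is a conceptual illustration of that pipeline, not the listed repository's actual code: it assumes the `sam2` package with its `SAM2ImagePredictor` class and the facebook/sam2-hiera-large checkpoint, and it reuses `model`, `processor`, and `image` from the Florence-2 sketch above.

```python
# Conceptual Florence-2 -> SAM2 auto-labeling sketch (not the listed repo's code).
# Assumes the `sam2` package is installed, and that `model`, `processor`, and
# `image` are the Florence-2 objects from the previous sketch.
import numpy as np
from sam2.sam2_image_predictor import SAM2ImagePredictor

# 1) Florence-2 grounds a text phrase to bounding boxes.
task = "<CAPTION_TO_PHRASE_GROUNDING>"
inputs = processor(text=task + "a dog", images=image, return_tensors="pt")
ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
)
raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
parsed = processor.post_process_generation(
    raw, task=task, image_size=(image.width, image.height)
)
boxes = np.array(parsed[task]["bboxes"])  # (N, 4) boxes in XYXY pixel coords

# 2) SAM2 turns each box prompt into a segmentation mask.
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
predictor.set_image(np.array(image.convert("RGB")))
masks, scores, _ = predictor.predict(box=boxes, multimask_output=False)
# `masks` can now be written out as auto-generated training labels.
```

The division of labor is the point of the design: the vision-language model supplies open-vocabulary grounding, while SAM2 supplies class-agnostic mask quality, so neither model needs fine-tuning to produce labeled segmentation data.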