LHBuilder / SA-Segment-Anything
Vision-oriented multimodal AI
☆49 · Updated 8 months ago
Alternatives and similar repositories for SA-Segment-Anything:
Users interested in SA-Segment-Anything are comparing it to the libraries listed below.
- Official PyTorch Implementation of Self-emerging Token Labeling ☆32 · Updated 10 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆33 · Updated 7 months ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆48 · Updated 2 weeks ago
- Codebase for the Recognize Anything Model (RAM) ☆71 · Updated last year
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 4 months ago
- arXiv 23 "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs" ☆14 · Updated 2 months ago
- ☆19 · Updated last year
- ☆34 · Updated last year
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆50 · Updated 2 months ago
- Project for "LaSagnA: Language-based Segmentation Assistant for Complex Queries" ☆51 · Updated 9 months ago
- Open-vocabulary Semantic Segmentation ☆34 · Updated last year
- ☆47 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆35 · Updated last year
- Official repo for ByteVideoLLM/Dynamic-VLM ☆19 · Updated 2 months ago
- EfficientViT is a new family of vision models for efficient high-resolution vision ☆24 · Updated last year
- Detectron2 Toolbox and Benchmark for V3Det ☆16 · Updated 8 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆65 · Updated 2 weeks ago
- ☆68 · Updated 7 months ago
- ☆73 · Updated 11 months ago
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆42 · Updated 8 months ago
- A subset of YFCC100M: tools, checking scripts, and web-drive links for downloading the datasets (uncompressed) ☆19 · Updated 3 months ago
- ☆41 · Updated 3 weeks ago
- LAVIS: A One-stop Library for Language-Vision Intelligence ☆47 · Updated 6 months ago
- [ICML 2024] Official implementation of the paper "Rejuvenating image-GPT as Strong Visual Representation Lea… ☆97 · Updated 9 months ago
- [NeurIPS 2022] Official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… ☆83 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆40 · Updated last month
- A large dataset for Document Visual Question Answering ☆15 · Updated 6 months ago
- Code for "Learning Dynamic Query Combinations for Transformer-based Object Detection and Segmentation" (ICML 2023) ☆37 · Updated last year
- Code for experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆100 · Updated 5 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆22 · Updated last month