LHBuilder / SA-Segment-Anything
Vision-oriented multimodal AI
☆49 · Updated last year
Alternatives and similar repositories for SA-Segment-Anything
Users interested in SA-Segment-Anything are comparing it to the libraries listed below.
- Official Pytorch Implementation of Self-emerging Token Labeling ☆35 · Updated last year
- ☆35 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆37 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- [NeurIPS2022] This is the official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… ☆85 · Updated 2 years ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- Baby-DALL3: Annotate anything in visual tasks and generate anything, all in one pipeline with GPT-4 (a small baby of DALL·E 3). ☆85 · Updated 2 years ago
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- EfficientSAM + YOLO World base model for use with Autodistill. ☆10 · Updated last year
- Codes for ICML 2023 Learning Dynamic Query Combinations for Transformer-based Object Detection and Segmentation ☆37 · Updated 2 years ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Codebase for the Recognize Anything Model (RAM) ☆87 · Updated last year
- Detectron2 is a platform for object detection, segmentation and other visual recognition tasks. ☆20 · Updated 3 years ago
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆25 · Updated last month
- Auto Segmentation label generation with SAM (Segment Anything) + Grounding DINO (a minimal usage sketch follows this list) ☆22 · Updated 9 months ago
- PyTorch Implementation of Object Recognition as Next Token Prediction [CVPR'24 Highlight] ☆181 · Updated 7 months ago
- [AAAI2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆58 · Updated 7 months ago
- Detectron2 Toolbox and Benchmark for V3Det ☆18 · Updated last year
- ☆75 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Updated last year
- The open source implementation of "AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model" ☆22 · Updated 10 months ago
- Pytorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆30 · Updated 2 years ago
- "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs" 2023 ☆16 · Updated last year
- ☆19 · Updated 2 years ago
- EdgeSAM model for use with Autodistill. ☆29 · Updated last year
- ☆87 · Updated last year
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated last year
- ☆30 · Updated 2 years ago
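
The "SAM (Segment Anything) + Grounding DINO" auto-labeling entry above follows a common two-stage pattern: a text-prompted detector proposes boxes, and SAM turns each box into a mask. Below is a minimal sketch of that pattern using Hugging Face `transformers` checkpoints; the model IDs (`IDEA-Research/grounding-dino-tiny`, `facebook/sam-vit-base`), the image path, and the prompt are illustrative assumptions, not the listed repository's actual code.

```python
# Minimal sketch: text prompt -> Grounding DINO boxes -> SAM masks.
# Model IDs, image path, and prompt are assumptions for illustration only.
import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"
image = Image.open("example.jpg").convert("RGB")  # hypothetical input image

# Stage 1: open-vocabulary detection with Grounding DINO.
dino_id = "IDEA-Research/grounding-dino-tiny"
dino_processor = AutoProcessor.from_pretrained(dino_id)
dino = AutoModelForZeroShotObjectDetection.from_pretrained(dino_id).to(device)

prompt = "a cat. a dog."  # lowercase phrases separated by periods
dino_inputs = dino_processor(images=image, text=prompt, return_tensors="pt").to(device)
with torch.no_grad():
    dino_outputs = dino(**dino_inputs)
detections = dino_processor.post_process_grounded_object_detection(
    dino_outputs,
    dino_inputs.input_ids,
    target_sizes=[image.size[::-1]],  # (height, width)
)[0]
boxes = detections["boxes"]  # (num_boxes, 4) in xyxy pixel coordinates
if len(boxes) == 0:
    raise SystemExit("No detections for this prompt; nothing to label.")

# Stage 2: box-prompted segmentation with SAM.
sam_id = "facebook/sam-vit-base"
sam_processor = SamProcessor.from_pretrained(sam_id)
sam = SamModel.from_pretrained(sam_id).to(device)

sam_inputs = sam_processor(
    image, input_boxes=[boxes.tolist()], return_tensors="pt"
).to(device)
with torch.no_grad():
    sam_outputs = sam(**sam_inputs)

# Resize the low-resolution mask logits back to the original image size.
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks.cpu(),
    sam_inputs["original_sizes"].cpu(),
    sam_inputs["reshaped_input_sizes"].cpu(),
)[0]  # boolean masks, one set of proposals per detected box
```

Each resulting (box, mask) pair, together with the phrase it was matched to, can then be exported as a pseudo-label in whatever annotation format the downstream trainer expects.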