roboflow / cvevals
Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, models hosted on Roboflow)
☆37 · Updated 2 years ago
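cvevals scores how well a model's predictions (for example, boxes produced by a Grounding DINO prompt) match ground-truth annotations. As a rough illustration of that kind of metric, here is a minimal IoU-based precision/recall sketch; the function names are illustrative only and do not reflect cvevals' actual API.

```python
# Hypothetical sketch of IoU-matched detection scoring; not cvevals' API.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(preds, truths, iou_thresh=0.5):
    """Greedily match each prediction to an unused ground-truth box."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thresh
        for i, t in enumerate(truths):
            if i in matched:
                continue
            score = iou(p, t)
            if score >= best_iou:
                best, best_iou = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```

For example, with two predictions of which one overlaps a ground-truth box perfectly and the other misses entirely, both precision and recall come out to 0.5.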
Alternatives and similar repositories for cvevals
Users interested in cvevals are comparing it to the libraries listed below.
- EdgeSAM model for use with Autodistill. ☆29 · Updated last year
- Use Florence-2 to auto-label data for use in training fine-tuned object detection models. ☆67 · Updated last year
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆88 · Updated 2 years ago
- EfficientSAM + YOLO-World base model for use with Autodistill. ☆10 · Updated last year
- ☆59 · Updated last year
- Vision-oriented multimodal AI ☆49 · Updated last year
- Official PyTorch implementation of Self-emerging Token Labeling ☆35 · Updated last year
- GroundedSAM base model plugin for Autodistill ☆54 · Updated last year
- ☆69 · Updated last year
- YOLOExplorer: Iterate on your YOLO / CV datasets using SQL, vector semantic search, and more within seconds ☆137 · Updated this week
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated last year
- [NeurIPS 2023] HASSOD: Hierarchical Adaptive Self-Supervised Object Detection ☆58 · Updated last year
- Pixel Parsing: a reproduction of OCR-free, end-to-end document understanding models with open data ☆23 · Updated last year
- LoRA fine-tuned Stable Diffusion deployment ☆31 · Updated 2 years ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆134 · Updated last year
- Simplify your visual data ops: find and visualize issues with your computer vision datasets, such as duplicates, anomalies, data leakage, … ☆69 · Updated 7 months ago
- ☆15 · Updated 2 years ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆33 · Updated last year
- timm model explorer ☆42 · Updated last year
- My personal implementation of the model from "Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities"; they haven't rel… ☆12 · Updated last year
- SAM-CLIP module for use with Autodistill. ☆16 · Updated 2 years ago
- Fine-tuning the OpenAI CLIP model for image search on medical images ☆77 · Updated 3 years ago
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ☆65 · Updated 2 years ago
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆69 · Updated last year
- Repository for the ICLR 2024 paper "TiC-CLIP: Continual Training of CLIP Models" ☆109 · Updated last year
- A simple demo for using Grounding DINO and Segment Anything 2 models together ☆20 · Updated last year
- Official code for Tracking Any Object Amodally ☆120 · Updated last year
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space, then be able to scroll… ☆27 · Updated last year
- A component that allows you to annotate an image with points and boxes. ☆21 · Updated last year
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year