picselliahq / atlas
Solving Computer Vision with AI agents
☆33 · Updated last month
Alternatives and similar repositories for atlas
Users interested in atlas are comparing it to the libraries listed below.
- Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. ☆84 · Updated last week
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detectio… ☆82 · Updated last year
- Inference and fine-tuning examples for vision models from 🤗 Transformers ☆158 · Updated 3 months ago
- A high-performance library for detecting objects in images and videos, leveraging Rust's speed and safety. Optionally supports a gRPC API… ☆32 · Updated 3 months ago
- Fine-tune Gemma 3 on an object detection task ☆74 · Updated 3 weeks ago
- A tool for converting computer vision label formats. ☆67 · Updated 3 months ago
- Notebooks using the Neural Magic libraries 📓 ☆40 · Updated last year
- Evaluate custom and HuggingFace text-to-image/zero-shot-image-classification models like CLIP, SigLIP, DFN5B, and EVA-CLIP. Metrics inclu… ☆54 · Updated 6 months ago
- Easily get basic insights about your ML dataset ☆39 · Updated last year
- This repository demonstrates various examples using YOLO ☆13 · Updated last year
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated last month
- ☆34 · Updated 9 months ago
- YOLOExplorer: Iterate on your YOLO / CV datasets using SQL, vector semantic search, and more within seconds ☆133 · Updated this week
- ☆76 · Updated last month
- AnyModal is a flexible multimodal language model framework for PyTorch ☆101 · Updated 7 months ago
- Lightweight, open-source, high-performance YOLO implementation ☆37 · Updated 2 months ago
- Notebooks for fine-tuning PaliGemma ☆112 · Updated 3 months ago
- Compare Savant and PyTorch performance ☆13 · Updated last year
- An integration of Segment Anything Model, Molmo, and Whisper to segment objects using voice and natural language. ☆28 · Updated 5 months ago
- Chat with Phi 3.5/3 Vision LLMs. Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which includ… ☆34 · Updated 7 months ago
- VLM-driven tool that processes surveillance videos, extracts frames, and generates insightful annotations using a fine-tuned Florence-2 V… ☆119 · Updated 2 months ago
- An SDK for Transformers + YOLO and other SSD family models ☆63 · Updated 6 months ago
- ☆125 · Updated 3 weeks ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆31 · Updated last year
- Vision Transformers for image classification, image segmentation, and object detection. ☆56 · Updated 9 months ago
- ☆13 · Updated 2 years ago
- auto_labeler - An all-in-one library to automatically label vision data ☆16 · Updated 6 months ago
- Build agentic workflows with function calling using open LLMs ☆28 · Updated this week
- Eye exploration ☆27 · Updated 5 months ago
- ☆59 · Updated last year