SkalskiP / SoM
Unofficial implementation and experiments related to Set-of-Mark (SoM)
★88 · Updated 2 years ago
Alternatives and similar repositories for SoM
Users interested in SoM are comparing it to the libraries listed below.
- The Next Generation Multi-Modality Superintelligence · ★70 · Updated last year
- Cerule - A Tiny Mighty Vision Model · ★68 · Updated 2 months ago
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detection… · ★85 · Updated last year
- Using multiple LLMs for ensemble Forecasting · ★16 · Updated 2 years ago
- Enhancement in Multimodal Representation Learning. · ★41 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… · ★37 · Updated 2 years ago
- ★63 · Updated last year
- Finetune any model on HF in less than 30 seconds · ★56 · Updated last week
- EdgeSAM model for use with Autodistill. · ★29 · Updated last year
- Maybe the new state of the art vision model? we'll see 🤷‍♂️ · ★171 · Updated 2 years ago
- ★69 · Updated last year
- Extract information, summarize, ask questions, and search videos using OpenAI's Vision API · ★62 · Updated 2 years ago
- Summarize any arXiv paper with ease · ★66 · Updated 2 years ago
- A framework to enable multimodal models to play games on a computer. · ★96 · Updated last year
- An automated tool for discovering insights from research paper corpora · ★137 · Updated last year
- Visual RAG using less than 300 lines of code. · ★29 · Updated last year
- Use Florence 2 to auto-label data for use in training fine-tuned object detection models. · ★69 · Updated last year
- Streamlit app presented to the Streamlit LLMs Hackathon September 23 · ★16 · Updated last year
- A real-time video caption to conversation bot that captures frames, generates captions, and creates conversational responses using a Large … · ★120 · Updated 2 years ago
- ★87 · Updated last year
- Run PaliGemma in real time · ★133 · Updated last year
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. · ★66 · Updated 2 years ago
- Not financial advice. · ★28 · Updated 2 years ago
- Implementation of VisionLLaMA from the paper: "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta · ★16 · Updated last year
- Command-line script for running inference with models such as WizardCoder · ★25 · Updated 2 years ago
- Multi-Modal Multi-Embodied Hivemind-like Iteration of RTX-2 · ★15 · Updated 7 months ago
- Simple CogVLM client script · ★14 · Updated 2 years ago
- BH hackathon · ★14 · Updated last year
- ★52 · Updated 2 years ago
- ★54 · Updated 2 years ago