ictlab-unict / not-with-my-name
This is an official implementation for "Not with my name! Inferring artists' names of input strings employed by Diffusion Models".
☆15 · Updated 2 years ago
Alternatives and similar repositories for not-with-my-name
Users interested in not-with-my-name are comparing it to the repositories listed below.
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution", ICLR 2024 ☆1,619 · Updated last year
- Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory" ☆7,031 · Updated 9 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆3,172 · Updated 2 months ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,068 · Updated 3 weeks ago
- [ICCV 2023] Tracking Anything with Decoupled Video Segmentation ☆1,473 · Updated 8 months ago
- The system detects players and the ball with YOLO, assigns teams via zero-shot jersey classification, tracks ball possession, maps court … ☆34 · Updated 5 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,428 · Updated last week
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation ☆7,944 · Updated last year
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary alg… ☆3,094 · Updated last year
- One-step image-to-image with Stable Diffusion turbo: sketch2image, day2night, and more ☆2,350 · Updated 5 months ago
- Efficient vision foundation models for high-resolution generation and perception. ☆3,193 · Updated 4 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,163 · Updated 3 weeks ago
- [ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding ☆1,058 · Updated last year
- [ICCV 2025] YOLOE: Real-Time Seeing Anything ☆1,975 · Updated 6 months ago
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] ☆743 · Updated 7 months ago
- SAM with text prompt ☆2,518 · Updated 4 months ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,361 · Updated 8 months ago
- [CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone ☆1,977 · Updated 5 months ago
- High-resolution models for human tasks. ☆5,257 · Updated last year
- [NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation ☆7,370 · Updated 11 months ago
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection ☆6,140 · Updated 10 months ago
- Tracking Any Point (TAP) ☆1,767 · Updated 2 months ago
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… ☆18,215 · Updated last year
- Reference PyTorch implementation and models for DINOv3 ☆9,238 · Updated last month
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆9,543 · Updated last year
- Open-source and strong foundation image recognition models. ☆3,553 · Updated 10 months ago
- Video datasets ☆1,587 · Updated 2 years ago
- HunyuanVideo: A Systematic Framework For Large Video Generation Model ☆11,579 · Updated last month