SkalskiP / top-cvpr-2024-papers

This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo]

⭐ 664 · Updated 4 months ago

Related projects

Alternatives and complementary repositories for top-cvpr-2024-papers:
- 👁️ + 💬 + 🧠 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ⭐ 577 · Updated 8 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ⭐ 808 · Updated 2 weeks ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ⭐ 1,004 · Updated 2 weeks ago
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ⭐ 312 · Updated 2 months ago
- This repo is the home base of a community-driven course on Computer Vision with Neural Networks. Feel free to join us on the Hugging Face … ⭐ 488 · Updated last week
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ⭐ 1,378 · Updated 4 months ago
- This repository is a curated collection of the most exciting and influential CVPR 2023 papers. 🔥 [Paper + Code] ⭐ 639 · Updated 4 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2, and SAM 2 ⭐ 1,141 · Updated 2 weeks ago
- Streamline the fine-tuning process for multimodal models: PaliGemma, Florence-2, and Qwen2-VL ⭐ 1,390 · Updated this week
- [CVPRW'24] SoccerNet Game State Reconstruction: End-to-End Athlete Tracking and Identification on a Minimap (CVPR 2024 CVSports workshop) ⭐ 238 · Updated last week
- [ICCV 2023] Tracking Anything with Decoupled Video Segmentation ⭐ 1,269 · Updated 3 months ago
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. ⭐ 890 · Updated 2 months ago
- API for Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ⭐ 778 · Updated 3 months ago
- 🤩 An AWESOME Curated List of Papers, Workshops, Datasets, and Challenges from CVPR 2024 ⭐ 135 · Updated 5 months ago
- 4M: Massively Multimodal Masked Modeling ⭐ 1,607 · Updated last month
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ⭐ 896 · Updated 8 months ago
- A curated list of foundation models for vision and language tasks ⭐ 844 · Updated this week
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ⭐ 420 · Updated last month
- SAM with text prompt ⭐ 1,736 · Updated this week
- Combining Segment Anything (SAM) with Grounded DINO for zero-shot object detection and CLIPSeg for zero-shot segmentation ⭐ 385 · Updated 6 months ago
- A curated list of papers that released datasets along with their work ⭐ 124 · Updated 3 weeks ago
- Code for the CVPR 2024 paper "DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction" ⭐ 389 · Updated 5 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ⭐ 2,339 · Updated 2 months ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ⭐ 312 · Updated this week
- ⭐ 462 · Updated last week
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ⭐ 782 · Updated 5 months ago
- Tracking Any Point (TAP) ⭐ 1,313 · Updated 3 weeks ago
- This repository contains the official implementation of the research paper "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinf… ⭐ 621 · Updated last month
- Images to inference with no labeling (use foundation models to train supervised models). ⭐ 1,989 · Updated 2 weeks ago
- A distilled Segment Anything (SAM) model capable of running real-time with NVIDIA TensorRT ⭐ 664 · Updated last year