SkalskiP / top-cvpr-2024-papers
This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo]
⭐708 · Updated 9 months ago
Alternatives and similar repositories for top-cvpr-2024-papers:
Users interested in top-cvpr-2024-papers are comparing it to the libraries listed below.
- 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ⭐606 · Updated last year
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ⭐340 · Updated 6 months ago
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" ICLR 2024 ⭐1,473 · Updated 8 months ago
- Official Implementation of CVPR 2024 highlight paper: Matching Anything by Segmenting Anything ⭐1,235 · Updated 4 months ago
- ⭐502 · Updated 4 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ⭐947 · Updated last week
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ⭐958 · Updated last year
- 🤩 An AWESOME Curated List of Papers, Workshops, Datasets, and Challenges from CVPR 2024 ⭐143 · Updated 9 months ago
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ⭐453 · Updated 5 months ago
- This repo is the home base of a community-driven course on Computer Vision with Neural Networks. Feel free to join us on the Hugging Face … ⭐598 · Updated this week
- [CVPRW'24] SoccerNet Game State Reconstruction: End-to-End Athlete Tracking and Identification on a Minimap (CVPR24 - CVSports workshop) ⭐281 · Updated last month
- A curated list of foundation models for vision and language tasks ⭐962 · Updated last month
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ⭐914 · Updated 2 months ago
- Code for CVPR 2024 paper: DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction ⭐405 · Updated 9 months ago
- 4M: Massively Multimodal Masked Modeling ⭐1,701 · Updated 2 weeks ago
- Efficient Track Anything ⭐500 · Updated 2 months ago
- A collection of papers on the topic of ``Computer Vision in the Wild (CVinW)'' ⭐1,264 · Updated last year
- This repository is a curated collection of the most exciting and influential CVPR 2023 papers. 🔥 [Paper + Code] ⭐646 · Updated 8 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ⭐1,371 · Updated last week
- From scratch implementation of a vision language model in pure PyTorch ⭐205 · Updated 10 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ⭐937 · Updated 2 months ago
- [ACCV 2024 (Oral)] Official Implementation of "Moving Object Segmentation: All You Need Is SAM (and Flow)" Junyu Xie, Charig Yang, Weidi … ⭐299 · Updated 3 months ago
- Tracking Any Point (TAP) ⭐1,423 · Updated last week
- Official PyTorch Implementation for "DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video" (ECCV 2024) ⭐470 · Updated 4 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ⭐1,877 · Updated 3 months ago
- This series will take you on a journey from the fundamentals of NLP and Computer Vision to the cutting edge of Vision-Language Models. ⭐1,049 · Updated 2 months ago
- A distilled Segment Anything (SAM) model capable of running real-time with NVIDIA TensorRT ⭐729 · Updated last year
- [ICCV 2023] Tracking Anything with Decoupled Video Segmentation ⭐1,347 · Updated 7 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ⭐855 · Updated 4 months ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ⭐824 · Updated this week