SkalskiP / top-cvpr-2024-papers
This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo]
⭐ 732 · Updated last month
Alternatives and similar repositories for top-cvpr-2024-papers
Users interested in top-cvpr-2024-papers are comparing it to the libraries listed below.
- This repository is a curated collection of the most exciting and influential CVPR 2025 papers. 🔥 [Paper + Code + Demo] ⭐ 675 · Updated 3 weeks ago
- 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ⭐ 622 · Updated last year
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ⭐ 1,543 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ⭐ 1,230 · Updated last week
- This repository is a curated collection of the most exciting and influential CVPR 2023 papers. 🔥 [Paper + Code] ⭐ 653 · Updated last month
- Official Implementation of CVPR24 highlight paper: Matching Anything by Segmenting Anything ⭐ 1,317 · Updated 2 months ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ⭐ 1,402 · Updated last month
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ⭐ 356 · Updated 10 months ago
- This repo is the homebase of a community-driven course on Computer Vision with Neural Networks. Feel free to join us on the Hugging Face … ⭐ 663 · Updated this week
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ⭐ 999 · Updated last year
- 🤩 An AWESOME Curated List of Papers, Workshops, Datasets, and Challenges from CVPR 2024 ⭐ 143 · Updated last year
- 4M: Massively Multimodal Masked Modeling ⭐ 1,742 · Updated last month
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ⭐ 979 · Updated 5 months ago
- This series will take you on a journey from the fundamentals of NLP and Computer Vision to the cutting edge of Vision-Language Models. ⭐ 1,098 · Updated 5 months ago
- Efficient Track Anything ⭐ 580 · Updated 6 months ago
- [CVPR25] Official repository for the paper: "SAMWISE: Infusing Wisdom in SAM2 for Text-Driven Video Segmentation" ⭐ 284 · Updated 2 weeks ago
- LightlyTrain is the first PyTorch framework to pretrain computer vision models on unlabeled data for industrial applications ⭐ 723 · Updated this week
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ⭐ 475 · Updated 2 weeks ago
- ⭐ 522 · Updated 8 months ago
- Tracking Any Point (TAP) ⭐ 1,574 · Updated last month
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. ⭐ 1,323 · Updated 2 months ago
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. ⭐ 1,507 · Updated last week
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ⭐ 893 · Updated last month
- ⭐ 57 · Updated last year
- [ICCV 2023] Tracking Anything with Decoupled Video Segmentation ⭐ 1,409 · Updated 2 months ago
- A weekly curated list of important published computer vision papers ⭐ 171 · Updated 5 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ⭐ 1,117 · Updated 3 weeks ago
- A curated list of foundation models for vision and language tasks ⭐ 1,046 · Updated 2 weeks ago
- ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in co… ⭐ 953 · Updated 10 months ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ⭐ 429 · Updated 3 months ago