showlab / afformer
Affordance Grounding from Demonstration Video to Target Image (CVPR 2023)
☆44 · Updated last year
Alternatives and similar repositories for afformer
Users interested in afformer are comparing it to the repositories listed below.
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆64 · Updated 9 months ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆28 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆33 · Updated 11 months ago
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆34 · Updated 2 years ago
- [ECCV 2022] AssistQ: Affordance-centric Question-driven Task Completion for Egocentric Assistant ☆21 · Updated last month
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆68 · Updated last year
- Python scripts to download Assembly101 from Google Drive ☆48 · Updated 10 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆40 · Updated 2 years ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆42 · Updated last year
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align… ☆18 · Updated last year
- This is the official repository of OCL (ICCV 2023) ☆24 · Updated last year
- [WIP] Code for LangToMo ☆16 · Updated last month
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆37 · Updated 5 months ago
- Bidirectional Mapping between Action Physical-Semantic Space ☆31 · Updated 11 months ago
- [CVPR 2022] Sequential Voting with Relational Box Fields for Active Object Detection ☆10 · Updated 3 years ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆127 · Updated 3 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆120 · Updated 3 months ago
- ☆53 · Updated 7 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆34 · Updated 7 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated 10 months ago
- 🔍 Explore Egocentric Vision: research, data, challenges, real-world apps. Stay updated & contribute to our dynamic repository! Work-in-p… ☆116 · Updated 8 months ago
- An Examination of the Compositionality of Large Generative Vision-Language Models ☆19 · Updated last year
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆81 · Updated 2 months ago
- ☆25 · Updated 3 years ago
- Code for the paper: F. Ragusa, G. M. Farinella, A. Furnari. StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipat… ☆11 · Updated 2 years ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆65 · Updated 11 months ago
- [NeurIPS 2022] Egocentric Video-Language Pretraining ☆243 · Updated last year
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆18 · Updated last week
- Code implementation of the paper "FIction: 4D Future Interaction Prediction from Video" ☆14 · Updated 4 months ago
- An unofficial PyTorch dataloader for the Open X-Embodiment datasets (https://github.com/google-deepmind/open_x_embodiment) ☆18 · Updated 7 months ago