silicx / GoldFromOres-BiLP
Preview code of ECCV'24 paper "Distill Gold from Massive Ores" (BiLP)
☆25 · Updated last year
Alternatives and similar repositories for GoldFromOres-BiLP
Users interested in GoldFromOres-BiLP are comparing it to the repositories listed below.
- Official implementation of Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement. ☆30 · Updated last year
- Official implementation of ECCV 2024 paper: Take A Step Back: Rethinking the Two Stages in Visual Reasoning ☆15 · Updated 5 months ago
- Code for our ICML'24 paper on multimodal dataset distillation ☆41 · Updated last year
- [ICLR 2025] This repo is the official implementation of "The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs". ☆13 · Updated 9 months ago
- This is the official repository of OCL (ICCV 2023). ☆25 · Updated last year
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆20 · Updated last year
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆39 · Updated 8 months ago
- Official PyTorch implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆68 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated last year
- ☆16 · Updated last year
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆85 · Updated 5 months ago
- ☆32 · Updated 2 years ago
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- Official implementation of the CVPR'24 paper "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number" ☆59 · Updated 9 months ago
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆232 · Updated 5 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆74 · Updated 11 months ago
- An unofficial PyTorch dataloader for Open X-Embodiment Datasets https://github.com/google-deepmind/open_x_embodiment ☆18 · Updated 10 months ago
- ☆30 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆66 · Updated 9 months ago
- ☆36 · Updated 2 months ago
- An Examination of the Compositionality of Large Generative Vision-Language Models ☆19 · Updated last year
- [NeurIPS 2024] The official implementation of "Instruction-Guided Visual Masking" ☆39 · Updated 11 months ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- Official implementation of the CVPR'23 paper "M3Video: Masked Motion Modeling for Self-Supervised Video Representation Learning" ☆52 · Updated last year
- Uni-OVSeg is a weakly supervised open-vocabulary segmentation framework that leverages unpaired mask-text pairs. ☆52 · Updated last year
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆86 · Updated last year
- ☆11 · Updated 3 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆43 · Updated last month
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆70 · Updated 2 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆145 · Updated last month