lihongcs / LLM_Inception
[ICLR 2025] This repo is the official implementation of "The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs".
☆13 · Updated 10 months ago
Alternatives and similar repositories for LLM_Inception
Users interested in LLM_Inception are comparing it to the repositories listed below.
- This is the official repository of OCL (ICCV 2023). ☆25 · Updated last year
- [CVPR 2024] Narrative Action Evaluation with Prompt-Guided Multimodal Interaction ☆39 · Updated last year
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models ☆33 · Updated last year
- (ECCV 2024) Official repository of the paper "EgoExo-Fitness: Towards Egocentric and Exocentric Full-Body Action Understanding" ☆31 · Updated 7 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆88 · Updated 5 months ago
- Official PyTorch implementation of Learning Affordance Grounding from Exocentric Images (CVPR 2022) ☆69 · Updated last year
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- Official code for MotionBench (CVPR 2025) ☆59 · Updated 8 months ago
- Preview code of the ECCV'24 paper "Distill Gold from Massive Ores" (BiLP) ☆25 · Updated last year
- ☆22 · Updated 2 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆148 · Updated last month
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆39 · Updated 9 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆42 · Updated 2 months ago
- Official implementation of EgoHOD at ICLR 2025; 14 EgoVis Challenge Winners in CVPR 2024 ☆26 · Updated this week
- ☆100 · Updated 3 weeks ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆58 · Updated 6 months ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- ☆60 · Updated 11 months ago
- CVPR 2025 ☆35 · Updated 7 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆180 · Updated 2 months ago
- ☆26 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆71 · Updated last week
- ☆39 · Updated 2 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- Data pre-processing and training code on Open-X-Embodiment with PyTorch ☆11 · Updated 10 months ago
- ☆21 · Updated last year
- [NeurIPS 2025] OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆67 · Updated 2 months ago
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆57 · Updated 2 weeks ago
- ☆39 · Updated last year
- An unofficial PyTorch dataloader for the Open X-Embodiment datasets (https://github.com/google-deepmind/open_x_embodiment) ☆18 · Updated 10 months ago