facebookresearch / EgocentricUserAdaptation
In this codebase we establish a benchmark for egocentric user adaptation based on Ego4D. First, we start from a population model that learns user-agnostic representations from the data of many users. As each user gains more experience over their lifetime, we aim to tailor the general model into a user-specific expert model.
☆15 · Updated 8 months ago
Alternatives and similar repositories for EgocentricUserAdaptation
Users interested in EgocentricUserAdaptation are comparing it to the libraries listed below.
- Official Code Repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆37 · Updated last year
- Implementation of the model "MC-ViT" from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆22 · Updated last week
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago
- [WACV 2025] Official implementation of "Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation" by Xiwen Wei, Guihong L… ☆51 · Updated last month
- A Data Source for Reasoning Embodied Agents ☆19 · Updated 2 years ago
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" ☆20 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- Code for the Ask4Help project ☆22 · Updated 2 years ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Updated 8 months ago
- Code for the paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆19 · Updated 3 years ago
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated last week
- Code for the paper "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurI… ☆90 · Updated last year
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆30 · Updated 2 years ago
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models" ☆37 · Updated last year
- Language Repository for Long Video Understanding ☆32 · Updated last year
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" (ICLR 2024) ☆104 · Updated last year
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor vision-language models ☆19 · Updated 9 months ago
- ☆60 · Updated 2 years ago
- [NeurIPS 2022] Code for "K-LITE: Learning Transferable Visual Models with External Knowledge" https://arxiv.org/abs/2204.09222 ☆51 · Updated 2 years ago
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆78 · Updated 4 months ago
- Official code for "Disentangling Visual Embeddings for Attributes and Objects", published at CVPR 2022 ☆35 · Updated 2 years ago
- Repository for the paper "Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning" ☆36 · Updated 2 years ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- ☆26 · Updated last year
- ☆29 · Updated 2 years ago
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated 2 years ago
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆33 · Updated 2 years ago
- Official code repository for "Video-Mined Task Graphs for Keystep Recognition in Instructional Videos", arXiv 2023 ☆13 · Updated last year
- PyTorch implementation of "Preserving Linear Separability in Continual Learning by Backward Feature Projection" (CVPR 2023) ☆18 · Updated 2 years ago
- ☆33 · Updated 2 years ago