human-centeredAI / awesomeHAI
A reading list for human-centered AI (☆42, updated 2 years ago)

Related projects
Alternatives and complementary repositories for awesomeHAI:
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks (☆59, updated last month)
- [ICLR 2022] RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning (☆64, updated 2 years ago)
- [ICCV 2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding (☆75, updated last year)
- A collection of research papers on World Models (☆36, updated last year)
- Official repository for the General Robust Image Task (GRIT) Benchmark (☆50, updated last year)
- ElasticTok: Adaptive Tokenization for Image and Video (☆33, updated 2 weeks ago)
- Code release for the NeurIPS 2023 paper "SlotDiffusion: Object-centric Learning with Diffusion Models" (☆78, updated 10 months ago)
- Official release of the NeurIPS 2023 spotlight paper "Object-Centric Slot Diffusion" (☆58, updated 8 months ago)
- [ECCV 2022, Oral] Semantic-Aware Fine-Grained Correspondence (☆14, updated 2 years ago)
- [NeurIPS 2022] Code for "Visual Concepts Tokenization" (☆21, updated 2 years ago)
- 💭 [CVPR 2021] Intentonomy: Towards Human Intent Understanding (☆33, updated 3 years ago)
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" (☆116, updated 3 months ago)
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" (☆78, updated last year)
- Generative World Explorer (☆32, updated this week)
- Code for "Look for the Change", published at CVPR 2022 (☆35, updated 2 years ago)
- A list of papers and other resources on language-guided image editing (☆37, updated 3 years ago)
- [CVPR 2023 Highlight] Code release for "Egocentric Video Task Translation" (☆31, updated last year)
- Code for the recipe of the winning entry to the Ego4D VQ2D challenge at CVPR 2022 (☆41, updated last year)
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" (☆57, updated last year)
- Which fellows cited my article? (☆22, updated 2 years ago)
- [ECCV 2024] Official implementation of "Stitched ViTs are Flexible Vision Backbones" (☆23, updated 10 months ago)
- [ICML 2024] Source code for "A Dense Reward View on Aligning Text-to-Image Diffusion with Preference" (☆31, updated 6 months ago)
- 🌷 Paper list for in-context learning (☆20, updated last year)