CMU-MultiComp-Lab / mmml-course
☆97 · Updated last year
Alternatives and similar repositories for mmml-course
Users interested in mmml-course are comparing it to the repositories listed below.
- ☆40 · Updated last year
- ☆30 · Updated 2 years ago
- ☆101 · Updated 3 years ago
- https://slds-lmu.github.io/seminar_multimodal_dl/ ☆171 · Updated 2 years ago
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆72 · Updated 2 years ago
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆98 · Updated last year
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆201 · Updated last year
- [TMLR 2022] High-Modality Multimodal Transformer ☆117 · Updated last year
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆82 · Updated 6 months ago
- ☆65 · Updated 3 years ago
- Video descriptions of research papers relating to foundation models and scaling ☆30 · Updated 2 years ago
- Code for the DDP tutorial ☆32 · Updated 3 years ago
- On this page, I will provide a list of survey papers on topics related to deep learning and its applications in various fields. ☆127 · Updated last year
- A curated list of awesome human-centered AI resources. ☆48 · Updated 3 years ago
- Open source code for the AAAI 2023 paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" ☆167 · Updated 2 years ago
- Reading list for Multimodal Large Language Models ☆69 · Updated 2 years ago
- ICLR 2023 paper submission analysis from https://openreview.net/group?id=ICLR.cc/2023/Conference ☆107 · Updated 3 years ago
- In-the-wild Question Answering ☆15 · Updated 2 years ago
- Website ☆57 · Updated 2 years ago
- ☆72 · Updated 4 years ago
- ☆81 · Updated last year
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆81 · Updated 2 years ago
- ☆34 · Updated last year
- Basic guidance on how to contribute to Papers with Code ☆24 · Updated 3 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆83 · Updated 3 years ago
- A curated list of vision-and-language pre-training (VLP). :-) ☆59 · Updated 3 years ago
- ☆49 · Updated 2 years ago
- A guide to improve your research proposals. ☆203 · Updated 5 years ago
- ☆35 · Updated 4 years ago
- A survey on multimodal learning research. ☆334 · Updated 2 years ago