drmuskangarg / Multimodal-datasets
This repository is built in association with our position paper on "Multimodality for NLP-Centered Applications: Resources, Advances and Frontiers". As part of this release we share information about recent multimodal datasets that are available for research purposes. We found that although 100+ multimodal language resources are availab…
☆323 · Updated 4 years ago
Alternatives and similar repositories for Multimodal-datasets
Users interested in Multimodal-datasets are comparing it to the libraries listed below
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆83 · Updated 7 months ago
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Work… ☆56 · Updated 9 months ago
- EMNLP 2023 Papers: Explore cutting-edge research from EMNLP 2023, the premier conference for advancing empirical methods in natural langu… ☆112 · Updated last year
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆98 · Updated last year
- [NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning ☆608 · Updated last year
- [Paperlist] Awesome paper list of multimodal dialog, including methods, datasets and metrics ☆37 · Updated 11 months ago
- MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022) ☆126 · Updated 8 months ago
- A Survey on multimodal learning research. ☆334 · Updated 2 years ago
- This repository presents the UR-FUNNY dataset: the first dataset for multimodal humor detection ☆151 · Updated 5 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆297 · Updated 2 years ago
- ☆212 · Updated 4 years ago
- Evaluation tools for image captioning, including BLEU, ROUGE-L, CIDEr, METEOR, and SPICE scores. ☆34 · Updated 2 years ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆148 · Updated 2 years ago
- ☆41 · Updated 2 years ago
- ☆96 · Updated 3 years ago
- A curated list of awesome vision and language resources (still under construction... stay tuned!) ☆559 · Updated last year
- Multimodal datasets. ☆33 · Updated last year
- Extensive acceptance rates and information of main AI conferences ☆166 · Updated last year
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆84 · Updated last year
- Multimodal Sarcasm Detection Dataset ☆364 · Updated last year
- [T-PAMI] A curated list of self-supervised multimodal learning resources. ☆273 · Updated last year
- Humor Knowledge Enriched Transformer ☆31 · Updated 4 years ago
- [NAACL 2024] Data and code for our paper "Sentiment Analysis in the Era of Large Language Models: A Reality Check" ☆109 · Updated 2 months ago
- ☆64 · Updated 4 years ago
- This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision… ☆32 · Updated last year
- ☆101 · Updated 3 years ago
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆291 · Updated 6 months ago
- ☆14 · Updated last year
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated last year
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆201 · Updated last year