JingfengYang / Multi-modal-Deep-Learning
☆73 · Updated 3 years ago
Alternatives and similar repositories for Multi-modal-Deep-Learning
Users interested in Multi-modal-Deep-Learning are comparing it to the repositories listed below.
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆61 · Updated 3 years ago
- [ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners ☆132 · Updated 2 years ago
- This repository contains the code for extracting the test samples we used in our paper "A Multitask, Multilingual, Multimodal Evaluatio…" ☆79 · Updated last year
- 🎁 [ChatGPT4NLU] A Comparative Study on ChatGPT and Fine-tuned BERT ☆195 · Updated 2 years ago
- Policies of scientific publishers and conferences towards large language models (LLMs), such as ChatGPT ☆75 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- ☆79 · Updated 3 years ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) ☆63 · Updated 3 years ago
- Implementation of the ICLR 2022 paper "Enhancing Cross-lingual Transfer by Manifold Mixup" ☆21 · Updated 3 years ago
- This repo contains code and instructions for baselines in the VLUE benchmark ☆41 · Updated 3 years ago
- Code, models, and datasets for the OpenViDial dataset ☆131 · Updated 3 years ago
- [EMNLP 2023] C-STS: Conditional Semantic Textual Similarity ☆73 · Updated last year
- ☆134 · Updated 2 years ago
- ☆67 · Updated last year
- ☆57 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- Code for the ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning" ☆86 · Updated 3 years ago
- Official repository for CLRCMD (ACL 2022) ☆42 · Updated 2 years ago
- Code for promptCSE (EMNLP 2022) ☆11 · Updated 2 years ago
- Must-read papers on improving efficiency for pre-trained language models ☆105 · Updated 2 years ago
- [COLING 2022] An End-to-End Library for Evaluating Natural Language Generation ☆92 · Updated last year
- Implementation of the paper "Parameter-Efficient Transfer Learning for NLP" (Houlsby et al., ICML 2019) ☆34 · Updated 2 years ago
- Released code for our ICLR 2023 paper ☆65 · Updated 2 years ago
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆131 · Updated 2 years ago
- [NAACL 2022] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆31 · Updated 2 years ago
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 3 years ago
- A paper list of pre-trained language models (PLMs) ☆81 · Updated 3 years ago
- This repository implements a prompt tuning model for hierarchical text classification, accepted as the long paper "HPT…" ☆67 · Updated last year