NYCU-MLLab / Variational-Attention-and-Disentanglement-with-Regularization-for-Conversational-Question-Answering
☆19 · Updated 3 years ago
Alternatives and similar repositories for Variational-Attention-and-Disentanglement-with-Regularization-for-Conversational-Question-Answering
Users interested in Variational-Attention-and-Disentanglement-with-Regularization-for-Conversational-Question-Answering are comparing it to the libraries listed below
- ☆19 · Updated 3 years ago
- ☆19 · Updated 4 years ago
- Code for SRMRL ☆19 · Updated 4 years ago
- ☆20 · Updated 4 years ago
- ☆18 · Updated 4 years ago
- Code for the submission titled "Guidance Learning for Multi-Domain Dialogue Management" ☆23 · Updated 3 years ago
- ☆24 · Updated 4 years ago
- ☆12 · Updated last week
- Master Thesis ☆21 · Updated last year
- ☆26 · Updated 3 years ago
- [ACMMM2025] Official code release for ALLM4ADD ☆28 · Updated last week
- The official code of "Parameter-Efficient Learning for Text-to-Speech Accent Adaptation" ☆13 · Updated 2 years ago
- PyTorch implementation of additive margin softmax loss ☆12 · Updated 4 years ago
- A curated list of awesome adversarial reprogramming and input prompting methods for neural networks since 2022 ☆37 · Updated last year
- A short tutorial for using the CMU-MultimodalSDK ☆85 · Updated 6 years ago
- Textless (ASR-transcript-free) Spoken Question Answering. The official release of the NMSQA dataset and the implementation of "DUAL: Textless… ☆35 · Updated 2 years ago
- Deep Learning for Computer Vision (深度學習於電腦視覺) by Frank Wang (王鈺強) ☆25 · Updated last year
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self-Supervised Models to Improve Multimodal Speech Emotion R… ☆119 · Updated 4 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆31 · Updated 4 years ago
- ☆28 · Updated 3 years ago
- The official repository of Dynamic-SUPERB ☆192 · Updated 4 months ago
- ☆12 · Updated 2 years ago
- ☆110 · Updated 3 years ago
- Code for T5lephone: Bridging Speech and Text Self-supervised Models for Spoken Language Understanding via Phoneme-level T5 ☆19 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Code for the INTERSPEECH 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition" ☆75 · Updated last year
- ☆41 · Updated 5 years ago
- Chang Gung University Computer Science / Artificial Intelligence learning material ☆28 · Updated last year
- Code for the AICUP2023 competition "Meet the Truth: Fact Extraction and Verification for Disinformation" ☆22 · Updated 2 years ago
- Source code for the ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations" ☆92 · Updated 2 years ago