KingH12138 / Pytorch-AudioClassification-master
A PyTorch-based Python codebase for audio classification
☆48 · Updated 3 years ago
Alternatives and similar repositories for Pytorch-AudioClassification-master
Users interested in Pytorch-AudioClassification-master are comparing it to the repositories listed below
- The PyTorch implementation of sound classification supports EcapaTdnn, PANNS, TDNN, Res2Net, ResNetSE and other models, as well as a vari… ☆577 · Updated last month
- Speech emotion recognition implemented in PyTorch ☆255 · Updated last month
- Python implementations of speaker recognition (voiceprint recognition) algorithms, including GMM (done), GMM-UBM, i-vector, and deep-learning-based speaker recognition (self-attention done) ☆106 · Updated 2 years ago
- The PyTorch code for "Unraveling Complex Data Diversity in Underwater Acoustic Target Recognition through Convolution-based Mixture of Ex… ☆31 · Updated last year
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information ☆164 · Updated 2 years ago
- Acoustic feature extraction using the Librosa library and the openSMILE toolkit ☆215 · Updated 5 years ago
- An unofficial train-test split for ShipsEar: An underwater vessel noise database ☆23 · Updated last year
- This project uses a variety of advanced voiceprint recognition models such as EcapaTdnn, ResNetSE, ERes2Net, CAM++, etc. It is not exclud… ☆1,222 · Updated last month
- Unofficial reimplementation of ECAPA-TDNN for speaker recognition (EER = 0.86 on Vox1_O when trained only on Vox2) ☆777 · Updated last year
- Speech Emotion Recognition ☆28 · Updated 5 years ago
- Official implementation of the paper "An Investigation of Preprocessing Filters and Deep Learning Methods for Vessel Type Classification … ☆29 · Updated last year
- Exercises in speech emotion recognition and speaker identification built on the CASIA database ☆73 · Updated 3 years ago
- Deformable Speech Transformer (DST) ☆35 · Updated last year
- alaaNfissi / SigWavNet-Learning-Multiresolution-Signal-Wavelet-Network-for-Speech-Emotion-Recognition: This paper has been accepted for publication in IEEE Transactions on Affective Computing ☆19 · Updated 10 months ago
- This project splits the RAVDESS dataset into 1 s clips and trains an openSMILE + CNN pipeline to classify each clip into one of four emotions: happy, sad, angry, and neutral. Final accuracy is about 76%. ☆64 · Updated 4 years ago
- A spectro-temporal fusion feature, STgram, with MobileFaceNet for more stable anomalous sound detection ☆99 · Updated 2 years ago
- Method for splitting the DeepShip dataset ☆55 · Updated last month
- Signal classification and recognition based on mel spectrograms ☆23 · Updated 2 years ago
- Official GitHub page of the Oceanship dataset ☆43 · Updated last year
- ☆25 · Updated last year
- The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection" ☆463 · Updated 3 months ago
- ☆70 · Updated 5 years ago
- Speech emotion recognition ☆44 · Updated last month
- ICASSP 2023-2024 Papers: A complete collection of influential and exciting research papers from the ICASSP 2023-24 conferences. Explore t… ☆516 · Updated 8 months ago
- Speech signal processing lab tutorial with Python code ☆342 · Updated 3 years ago
- ☆16 · Updated 6 years ago
- ☆22 · Updated 5 years ago
- [ICASSP 2023] Official TensorFlow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech E… ☆187 · Updated last year
- Speech-related labs, companies, resources, internships, etc.; recommendations and self-nominations welcome ☆592 · Updated last year
- The baseline model for the CMDC corpus ☆51 · Updated 3 years ago
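Several of the repositories above (the mel-spectrogram classifier, the Librosa/openSMILE feature extractor, the STgram work) build on log-mel spectrogram features. As a rough orientation only, here is a minimal NumPy-only sketch of log-mel feature extraction; it is not taken from any of the listed repositories, and the function name and parameter values (`n_fft=512`, `hop=256`, `n_mels=40`) are illustrative defaults, not those projects' settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(y, sr, n_fft=512, hop=256, n_mels=40):
    """Frame the waveform, take the magnitude STFT, apply a triangular
    mel filterbank, and return log energies of shape (frames, n_mels)."""
    # Frame the signal with a Hann window
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack(
        [y[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (n_frames, n_fft//2+1)

    # Build a triangular mel filterbank spanning 0 .. sr/2
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):          # rising edge of the triangle
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling edge of the triangle
            fb[m - 1, k] = (right - k) / max(right - center, 1)

    # Project power spectra onto the filterbank and take the log
    return np.log(power @ fb.T + 1e-10)

# 1 s of a 440 Hz tone at 16 kHz as a stand-in for real audio
sr = 16000
t = np.arange(sr) / sr
feats = log_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr)
print(feats.shape)  # (61, 40): 61 frames x 40 mel bands
```

In practice, most of the listed projects use librosa or torchaudio for this step rather than hand-rolled filterbanks; the sketch only shows the shape of the computation those features share.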