drammock / spectrogram-tutorial
A walkthrough of how to make spectrograms in Python that are customized for human speech research.
☆39 · Updated last year
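For context, here is a minimal spectrogram sketch with scipy and matplotlib (not taken from the tutorial itself; the file name "speech.wav" and the 25 ms / 20 ms window settings are placeholder choices):

```python
# A minimal sketch of computing and plotting a speech spectrogram.
# "speech.wav" and the window settings below are illustrative placeholders.
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("speech.wav")          # hypothetical mono recording
freqs, times, sxx = spectrogram(
    samples, fs=rate, window="hann",
    nperseg=int(rate * 0.025),                      # 25 ms analysis window
    noverlap=int(rate * 0.020),                     # 20 ms overlap between windows
)
plt.pcolormesh(times, freqs, 10 * np.log10(sxx + 1e-12), shading="auto")  # power in dB
plt.ylim(0, 8000)                                   # most speech energy sits below 8 kHz
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```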
Alternatives and similar repositories for spectrogram-tutorial
Users interested in spectrogram-tutorial are comparing it to the libraries listed below.
- ESC: Dataset for Environmental Sound Classification - paper replication data ☆80 · Updated 7 years ago
- Spectrograms, MFCCs, and Inversion Demo in a jupyter notebook ☆169 · Updated 6 years ago
- Toolkit to assess speech impairments in patients with neurological disorders ☆55 · Updated 7 years ago
- Tensorflow - Very Deep Convolutional Neural Networks For Raw Waveforms - https://arxiv.org/pdf/1610.00087.pdf ☆75 · Updated 4 years ago
- Audio Denoising with Deep Network Priors ☆163 · Updated 5 years ago
- A simple audio feature extraction library ☆80 · Updated 6 years ago
- Keras version of Syncnet, by Joon Son Chung and Andrew Zisserman ☆51 · Updated 6 years ago
- Machine Learning Sound Classifier ☆137 · Updated 6 years ago
- Train a Deep Learning model to classify audio embeddings on IBM's Deep Learning as a Service (DLaaS) platform - Watson Machine Learning ☆102 · Updated last month
- ☆156 · Updated 4 years ago
- Deep Learning experiments for audio classification ☆148 · Updated 8 years ago
- SoundNet, built in Keras with pre-trained 8-layer model ☆29 · Updated 6 years ago
- Environmental Sound Classification with Convolutional Neural Networks - paper replication data ☆75 · Updated 8 years ago
- Vocode spectrograms to audio with generative adversarial networks ☆63 · Updated 6 years ago
- Walk through insanely commented code for an advanced recurrent model in TensorFlow ☆48 · Updated 7 years ago
- TiFGAN: Time Frequency Generative Adversarial Networks ☆120 · Updated 3 years ago
- Collaborative audio module for fast.ai ☆99 · Updated 6 years ago
- Pytorch Implementation of FFTNet ☆86 · Updated 7 years ago
- An end-to-end MATLAB toolkit for completely unsupervised Speaker Diarization using state-of-the-art algorithms ☆15 · Updated 9 years ago
- WIP: Open Source Implementation of "MelNet: A Generative Model for Audio in the Frequency Domain" ☆255 · Updated 6 years ago
- ☆59 · Updated 7 years ago
- Implementation of the Griffin and Lim algorithm to recover an audio signal from a magnitude-only spectrogram (see the sketch after this list) ☆176 · Updated 7 years ago
- Penn Phonetics Lab Forced Aligner Toolkit (P2FA) for Python3 ☆107 · Updated last year
- Utils and data sets for audio and PyTorch ☆86 · Updated 3 years ago
- Freesound Audio Tagging 2019 ☆95 · Updated 6 years ago
- A TensorFlow implementation of the Griffin-Lim algorithm ☆79 · Updated 7 years ago
- [DEPRECATED] Audio Module for fastai v2 ☆65 · Updated 2 years ago
- ☆138 · Updated last year
- A Pytorch implementation of WaveVAE ("Parallel Neural Text-to-Speech") ☆126 · Updated last year
- A test bed for updates and new features | pytorch/audio ☆170 · Updated 5 years ago
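A minimal sketch of the Griffin-Lim idea referenced above, assuming librosa and soundfile are installed (the example clip, STFT size, and iteration count are illustrative choices, not taken from either repository):

```python
# A minimal Griffin-Lim sketch: discard the phase of an STFT and
# iteratively re-estimate it to recover a waveform. Parameters are illustrative.
import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load(librosa.ex("trumpet"))                    # bundled example clip
magnitude = np.abs(librosa.stft(y, n_fft=1024))                # keep magnitude, drop phase
y_rec = librosa.griffinlim(magnitude, n_iter=60, n_fft=1024)   # iterative phase estimation
sf.write("reconstructed.wav", y_rec, sr)                       # write the recovered waveform
```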