cfoster0 / CLAP
Contrastive Language-Audio Pretraining
☆87 · Updated 3 years ago
Alternatives and similar repositories for CLAP:
Users interested in CLAP are comparing it to the repositories listed below.
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch — ☆89 · Updated 3 years ago
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch — ☆59 · Updated 4 years ago
- Refactoring dalle-pytorch and taming-transformers for TPU VM — ☆60 · Updated 3 years ago
- Implementation of NWT, audio-to-video generation, in Pytorch — ☆90 · Updated 3 years ago
- ☆64 · Updated 3 years ago
- An open source implementation of CLIP. — ☆32 · Updated 2 years ago
- Implementation of Feedback Transformer in Pytorch — ☆105 · Updated 4 years ago
- ☆27 · Updated 4 years ago
- Contrastive Language-Audio Pretraining — ☆15 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me — ☆37 · Updated 3 years ago
- Contrastive Language-Image Pretraining — ☆142 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch — ☆73 · Updated 2 years ago
- PyTorch implementation of the paper "NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity." (NeurIPS 2020) — ☆65 · Updated 4 years ago
- GPT, but made only out of MLPs — ☆88 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch — ☆118 · Updated 3 years ago
- Script and models for clustering LAION-400m CLIP embeddings. — ☆26 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax — ☆57 · Updated 3 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper — ☆80 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… — ☆50 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- Implementation of Multistream Transformers in Pytorch — ☆53 · Updated 3 years ago
- A home for audio ML in JAX. Has common features, learnable frontends, pretrained supervised and self-supervised models. — ☆68 · Updated 2 years ago
- Aggregating embeddings over time — ☆31 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention — ☆48 · Updated 4 years ago
- A generative modelling toolkit for PyTorch. — ☆70 · Updated 3 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing — ☆48 · Updated 3 years ago
- ☆9 · Updated 4 years ago
- Code for ICLR 2021 Paper, "Anytime Sampling for Autoregressive Models via Ordered Autoencoding" — ☆26 · Updated last year
- ☆21 · Updated 3 years ago