cfoster0 / CLAP
Contrastive Language-Audio Pretraining
☆87 · Updated 3 years ago
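
For context, CLAP-style models train paired audio and text encoders with a CLIP-style symmetric contrastive objective. Below is a minimal sketch of that objective in PyTorch; it assumes batched, aligned audio/text embedding matrices, and the function and argument names are illustrative rather than taken from this repository's code.

```python
import torch
import torch.nn.functional as F

def clap_style_contrastive_loss(audio_emb: torch.Tensor,
                                text_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    # L2-normalize each batch of embeddings, shape (batch, dim)
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Cosine-similarity logits between every audio/text pair
    logits = audio_emb @ text_emb.t() / temperature
    # The i-th audio clip matches the i-th caption, so targets are the diagonal
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: audio-to-text and text-to-audio directions
    loss_audio = F.cross_entropy(logits, targets)
    loss_text = F.cross_entropy(logits.t(), targets)
    return (loss_audio + loss_text) / 2

# Example with random embeddings: a batch of 8 pairs, 512-d each
loss = clap_style_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```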
Alternatives and similar repositories for CLAP
Users interested in CLAP are comparing it to the libraries listed below.
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆59 · Updated 4 years ago
- Implementation of NWT, audio-to-video generation, in Pytorch ☆91 · Updated 3 years ago
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆89 · Updated 3 years ago
- Contrastive Language-Audio Pretraining ☆15 · Updated 4 years ago
- ☆27 · Updated 4 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 3 years ago
- PyTorch implementation of the paper "NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity." (NeurIPS 2020) ☆66 · Updated 4 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆75 · Updated 2 years ago
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago
- Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch ☆41 · Updated 2 years ago
- Refactoring dalle-pytorch and taming-transformers for TPU VM ☆60 · Updated 3 years ago
- ☆28 · Updated 3 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Script and models for clustering LAION-400m CLIP embeddings. ☆26 · Updated 3 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 3 years ago
- High performance pytorch modules ☆18 · Updated 2 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- Aggregating embeddings over time ☆32 · Updated 2 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 3 years ago
- Implementation of Perceiver AR, Deepmind's new long-context attention network based on Perceiver architecture, in Pytorch ☆88 · Updated 2 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆64 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 3 years ago
- AdaCat ☆49 · Updated 2 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- ☆64 · Updated 3 years ago