The entmax mapping and its loss, a family of sparse softmax alternatives.
☆464, last updated Jun 22, 2024
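To make the "sparse softmax alternatives" idea concrete, here is a minimal NumPy sketch of the sparsemax mapping (the α = 2 member of the entmax family, from Martins & Astudillo, 2016). This is an illustrative reimplementation, not the entmax library's own code; the library provides autograd-compatible PyTorch versions.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of the score vector z onto the
    probability simplex. Unlike softmax, it can assign exactly zero
    probability to low-scoring entries, yielding sparse distributions.
    Illustrative sketch only, not the entmax library's implementation."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]               # scores in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum       # which sorted entries survive
    k_z = k[support][-1]                      # size of the support set
    tau = (cumsum[support][-1] - 1) / k_z     # threshold subtracted from z
    return np.maximum(z - tau, 0.0)

print(sparsemax([2.0, 0.0]))   # large margin -> one-hot [1., 0.]
print(sparsemax([0.5, 0.5]))   # tie -> uniform [0.5, 0.5]
```

Softmax would give both entries nonzero mass in the first case; sparsemax drives the losing entry exactly to zero, which is the property the entmax family generalizes with a tunable α.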
Alternatives and similar repositories for entmax
Users interested in entmax compare it to the libraries listed below.
- Fast, general, and tested differentiable structured prediction in PyTorch (☆1,124, last updated Apr 20, 2022)
- Transformer training code for sequential tasks (☆609, last updated Sep 14, 2021)
- Implementation of the sparsemax activation in PyTorch (☆165, last updated May 27, 2020)
- Cascaded Text Generation with Markov Transformers (☆130, last updated Mar 20, 2023)
- Official PyTorch (Lightning) implementation of the NeurIPS 2020 paper "Efficient Marginalization of Discrete and Structured Latent Variab…" (☆27, last updated May 3, 2021)
- ☆14, last updated May 14, 2019
- Neural Text Generation with Unlikelihood Training (☆310, last updated Aug 31, 2021)
- Code for bidirectional sequence generation (BiSon) for generating from BERT pre-trained models (☆51, last updated Mar 17, 2020)
- ☆53, last updated Apr 29, 2020
- SparseMAP: differentiable sparse structure inference (☆112, last updated Feb 10, 2019)
- ☆221, last updated Jun 8, 2020
- A tool for holistic analysis of language generation systems (☆471, last updated Sep 22, 2025)
- higher is a PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual tr… (☆1,628, last updated Mar 25, 2022)
- Generative Flow based Sequence-to-Sequence Toolkit written in Python (☆247, last updated Jan 28, 2020)
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (☆1,611, last updated Aug 12, 2020)
- Cooperative Learning of Disjoint Syntax and Semantics (☆50, last updated May 23, 2019)
- Open-source code for sparse continuous distributions and corresponding Fenchel-Young losses (☆15, last updated May 10, 2023)
- Understanding the Difficulty of Training Transformers (☆332, last updated May 31, 2022)
- Implementation for "Rational Recurrences", Peng et al., EMNLP 2018 (☆28, last updated Jun 21, 2022)
- ☆19, last updated Oct 26, 2022
- Sparse and structured neural attention mechanisms (☆225, last updated Aug 31, 2020)
- [NeurIPS'19] Deep Equilibrium Models (☆794, last updated Jul 4, 2022)
- ☆178, last updated Jul 31, 2020
- PyTorch library for fast transformer implementations (☆1,763, last updated Mar 23, 2023)
- PyTorch implementation of the paper "Implicit Deep Latent Variable Models for Text Generation" (EMNLP 20… (☆55, last updated Mar 11, 2022)
- Python implementation of projection losses (☆27, last updated Nov 18, 2019)
- ☆397, last updated Nov 1, 2018
- Code for Explicit Sparse Transformer (☆61, last updated Jul 21, 2023)
- PyTorch implementation of "Unsupervised Learning of Syntactic Structure with Invertible Neural Projections" (EMNLP 2018) (☆68, last updated Feb 19, 2020)
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… (☆359, last updated Feb 22, 2022)
- Code accompanying the paper "Normalized Attention Without Probability Cage" (☆17, last updated Nov 9, 2021)
- Latent Alignment and Variational Attention (☆329, last updated Nov 5, 2018)
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… (☆746, last updated Apr 14, 2022)
- Materials from the ACL 2018 tutorial on neural semantic parsing (☆404, last updated Jul 17, 2018)
- PyTorch original implementation of Cross-lingual Language Model Pretraining (☆2,927, last updated Feb 14, 2023)
- PyTorch implementation for the NAACL 2019 paper "Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling" http… (☆63, last updated Apr 1, 2020)
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) (☆2,111, last updated Jan 4, 2022)
- Code for the Eager Translation Model from the paper "You May Not Need Attention" (☆294, last updated Dec 17, 2018)
- ☆48, last updated Jun 8, 2020