eggie5 / NCE-loss
TensorFlow NCE loss in Keras
☆34 · Updated 7 years ago
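The repository wraps TensorFlow's NCE loss for use from Keras. As a rough illustration of the underlying primitive, here is a minimal sketch calling `tf.nn.nce_loss` directly with hypothetical sizes; this is not the repository's own code:

```python
import tensorflow as tf

num_classes = 1000   # hypothetical vocabulary size
embed_dim = 64       # hypothetical embedding width
num_sampled = 16     # negative samples drawn per batch
batch_size = 32

# NCE keeps one output-weight row and one bias per class.
nce_weights = tf.Variable(tf.random.normal([num_classes, embed_dim], stddev=0.1))
nce_biases = tf.Variable(tf.zeros([num_classes]))

# A batch of input embeddings and their true class labels.
inputs = tf.random.normal([batch_size, embed_dim])
labels = tf.random.uniform([batch_size, 1], maxval=num_classes, dtype=tf.int64)

# tf.nn.nce_loss returns a per-example loss; reduce to a scalar for training.
loss = tf.reduce_mean(
    tf.nn.nce_loss(
        weights=nce_weights,
        biases=nce_biases,
        labels=labels,
        inputs=inputs,
        num_sampled=num_sampled,
        num_classes=num_classes,
    )
)
```

In a Keras model this call would typically live inside a custom layer's or model's training step, with `nce_weights`/`nce_biases` as trainable variables, and the full softmax used only at inference time.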
Alternatives and similar repositories for NCE-loss
Users interested in NCE-loss are comparing it to the libraries listed below.
- SNAIL Attention Block for Keras. ☆16 · Updated 5 years ago
- PyTorch implementation of Dauphin et al. (2016), "Language Modeling with Gated Convolutional Networks". ☆29 · Updated 2 years ago
- Attention-based sequence-to-sequence neural machine translation model built in Keras. ☆30 · Updated 7 years ago
- TensorFlow port of the Single Headed Attention RNN. ☆16 · Updated 5 years ago
- Efficient Transformers for research, in PyTorch and TensorFlow, using Locality-Sensitive Hashing. ☆95 · Updated 5 years ago
- Introduction notebook to the Extreme Multi-Label Classification (XML) problem. ☆22 · Updated 7 years ago
- Keras implementation of the "Gated Linear Unit". ☆23 · Updated last year
- Hash Embedding code for the paper "Hash Embeddings for Efficient Word Representations". ☆42 · Updated 7 years ago
- ☆40 · Updated 7 years ago
- Discover relevant information about categorical data with entity embeddings using neural networks (powered by Keras). ☆70 · Updated 2 years ago
- Adaptive embedding and softmax. ☆17 · Updated 3 years ago
- Mixture-of-experts layers for Keras. ☆94 · Updated 7 years ago
- Tutorial for Multi-Stakeholder Recommender Systems. ☆22 · Updated 4 years ago
- Storage for the Kaggle Quora competition. ☆16 · Updated 8 years ago
- Implementation of symmetric SNE and t-SNE in NumPy and Python. ☆75 · Updated 4 years ago
- Minimalistic TensorFlow 2+ deep metric/similarity learning library with loss functions, miners, and utilities such as an embedding projector. ☆38 · Updated 2 years ago
- Fast differentiable forest library with the advantages of both decision trees and neural networks. ☆78 · Updated 4 years ago
- Kaggle competition: https://www.kaggle.com/c/web-traffic-time-series-forecasting ☆16 · Updated 8 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau, soft attention, additive attention, hierarchical attention… ☆127 · Updated 4 years ago
- Implementation of the LAMB optimizer for Keras, from the paper "Reducing BERT Pre-Training Time from 3 Days to 76 Minutes". ☆75 · Updated 6 years ago
- A TensorFlow implementation of the collaborative RNN (Ko et al., 2016). ☆59 · Updated 7 years ago
- Quasi-RNN for language modeling. ☆57 · Updated 8 years ago
- AdaBound optimizer in Keras. ☆56 · Updated 5 years ago
- Density Order Embeddings. ☆33 · Updated 6 years ago
- Collection of TensorFlow examples. ☆37 · Updated 7 years ago
- This repository contains notebooks showing how to perform mixed-precision training in tf.keras 2.0. ☆12 · Updated 5 years ago
- Machine-generated summaries and highlights of every accepted paper at the Thirty-second Conference on Neural Information Processing Syste… ☆71 · Updated 6 years ago
- ☆16 · Updated 8 years ago
- Language Model Fine-tuning for Moby Dick. ☆42 · Updated 6 years ago
- Large Scale BERT Distillation. ☆33 · Updated 2 years ago