inoryy / tensorflow-optimized-wheels
TensorFlow wheels built for the latest CUDA/cuDNN with performance flags enabled: SSE, AVX, FMA; XLA
☆119 · Updated 5 years ago
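As a quick sanity check after installing one of these wheels, the build configuration can be inspected from Python. The following is a minimal sketch, not part of this repository, assuming TensorFlow 2.x where `tf.test.is_built_with_cuda()`, `tf.test.is_built_with_xla()`, and `tf.sysconfig.get_build_info()` are available.

```python
# Minimal sketch (not from the repo): inspect what an installed TensorFlow
# wheel was built with. Assumes TensorFlow 2.x, where these helpers exist.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Built with XLA:", tf.test.is_built_with_xla())

# Build info reports the CUDA/cuDNN versions the binary was compiled
# against, among other build metadata.
for key, value in tf.sysconfig.get_build_info().items():
    print(f"{key}: {value}")
```

On import, TensorFlow also logs a warning if the binary was not compiled with CPU instructions the host supports (e.g. AVX2, FMA), which is another quick way to tell an optimized wheel from a stock one.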
Alternatives and similar repositories for tensorflow-optimized-wheels
Users interested in tensorflow-optimized-wheels are comparing it to the libraries listed below:
- Optimize the layer structure of Keras models to reduce computation time ☆157 · Updated 5 years ago
- Corrupted labels and label smoothing ☆129 · Updated 8 years ago
- Simple Tensorflow implementation of "On The Variance Of The Adaptive Learning Rate And Beyond" ☆97 · Updated 5 years ago
- Convert trained PyTorch models to Keras, and the other way around ☆223 · Updated 6 years ago
- AdamW optimizer for Keras ☆115 · Updated 6 years ago
- Simple Tensorflow implementation of "Adaptive Gradient Methods with Dynamic Bound of Learning Rate" (ICLR 2019) ☆150 · Updated 6 years ago
- Serving PyTorch 1.0 Models as a Web Server in C++ ☆226 · Updated 5 years ago
- Keras implementation of AdamW from "Fixing Weight Decay Regularization in Adam" (https://arxiv.org/abs/1711.05101) ☆71 · Updated 7 years ago
- Keras implementation of AdaBound ☆130 · Updated 5 years ago
- Mish Deep Learning Activation Function for PyTorch / FastAI ☆161 · Updated 5 years ago
- Efficient Data Loading Pipeline in Pure Python ☆212 · Updated 5 years ago
- Implementation of the LAMB optimizer for Keras from the paper "Reducing BERT Pre-Training Time from 3 Days to 76 Minutes" ☆75 · Updated 6 years ago
- tf.keras + tf.data with Eager Execution ☆74 · Updated 6 years ago
- An implementation of a DropConnect layer in Keras ☆36 · Updated 5 years ago
- Extending Keras to support TFRecord datasets ☆61 · Updated 8 years ago
- Python script which monitors GPU access ☆107 · Updated 7 years ago
- Python way to Read/Write TFRecords (see the sketch after this list) ☆65 · Updated 7 years ago
- Repo that holds code for improving on dropout using the Stochastic Delta Rule ☆142 · Updated 6 years ago
- Implementation of Rectified Adam in Keras ☆70 · Updated 6 years ago
- AdaBound optimizer in Keras ☆56 · Updated 5 years ago
- Use TensorFlow efficiently ☆96 · Updated 4 years ago
- An example of how to train an MNIST network in Python and run it in C++ with PyTorch 1.0 ☆96 · Updated 7 years ago
- Using the CLR algorithm for training (https://arxiv.org/abs/1506.01186) ☆108 · Updated 7 years ago
- PyTorch 1.0 inference in C++ on Windows 10 platforms ☆89 · Updated 6 years ago
- PyTorch implementation of MaxPoolingLoss. ☆176 · Updated 7 years ago
- Experiments with Adam/AdamW/amsgrad ☆201 · Updated 7 years ago
- A simpler version of the self-attention layer from SAGAN, and some image classification results. ☆214 · Updated 6 years ago
- Keras implementation of NASNet-A ☆88 · Updated 7 years ago
- Keras implementation of CoordConv for all Convolution layers ☆148 · Updated 3 years ago
- Lookahead mechanism for optimizers in Keras. ☆50 · Updated 4 years ago
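Several entries above deal with TFRecords (reading/writing them from Python and extending Keras to consume them). For reference, here is a minimal sketch of the underlying technique using TensorFlow's standard APIs (`tf.io.TFRecordWriter`, `tf.train.Example`, `tf.data.TFRecordDataset`); it assumes TensorFlow 2.x, uses a hypothetical file path, and is not taken from any of the listed repositories.

```python
# Minimal sketch (not from any listed repo): write and read a TFRecord file
# using TensorFlow's standard APIs. Assumes TensorFlow 2.x (eager execution).
import tensorflow as tf

path = "example.tfrecord"  # hypothetical output path

# Write a few records, each holding an int label and a small float vector.
with tf.io.TFRecordWriter(path) as writer:
    for label in range(3):
        example = tf.train.Example(features=tf.train.Features(feature={
            "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            "values": tf.train.Feature(float_list=tf.train.FloatList(value=[0.1, 0.2, 0.3])),
        }))
        writer.write(example.SerializeToString())

# Read the records back and parse them into tensors.
feature_spec = {
    "label": tf.io.FixedLenFeature([], tf.int64),
    "values": tf.io.FixedLenFeature([3], tf.float32),
}
dataset = tf.data.TFRecordDataset(path).map(
    lambda record: tf.io.parse_single_example(record, feature_spec))

for parsed in dataset:
    print(parsed["label"].numpy(), parsed["values"].numpy())
```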