cmikeh2 / grnn
☆13 · Updated 6 years ago
Alternatives and similar repositories for grnn
Users interested in grnn are comparing it to the libraries listed below.
- ☆22 · Updated 6 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning · ☆36 · Updated 5 years ago
- ParaDnn: A systematic performance analysis methodology for deep learning · ☆39 · Updated 5 years ago
- ☆22 · Updated 6 years ago
- Thinking is hard - automate it · ☆18 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS · ☆50 · Updated 7 years ago
- An analytical performance modeling tool for deep neural networks · ☆91 · Updated 5 years ago
- ☆23 · Updated last month
- Training neural networks in TensorFlow 2.0 with 5x less memory · ☆136 · Updated 3 years ago
- Cavs: An Efficient Runtime System for Dynamic Neural Networks · ☆15 · Updated 5 years ago
- ☆42 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Mille Crepe Bench: layer-wise performance analysis for deep learning frameworks · ☆17 · Updated 5 years ago
- Machine Learning System · ☆14 · Updated 5 years ago
- FTPipe and related pipeline model parallelism research · ☆43 · Updated 2 years ago
- Implementation of a Parameter Server using the PyTorch communication library · ☆42 · Updated 6 years ago
- Benchmark for matrix multiplications between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSparse · ☆23 · Updated 5 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM · ☆71 · Updated 8 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation · ☆27 · Updated 5 years ago
- Research and development for optimizing transformers · ☆131 · Updated 4 years ago
- ☆14 · Updated 3 years ago
- ☆56 · Updated 4 years ago
- ☆12 · Updated 5 years ago
- Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB) · ☆44 · Updated 2 years ago
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU", accepted at Euro-Par 2018 · ☆73 · Updated 5 years ago
- Simple Distributed Deep Learning on TensorFlow · ☆134 · Updated 3 months ago
- Ok-Topk is a scheme for distributed training with sparse gradients; it integrates a novel sparse allreduce algorithm (less than 6k c…) · ☆27 · Updated 2 years ago
- Crossbow: A Multi-GPU Deep Learning System for Training with Small Batch Sizes · ☆56 · Updated 3 years ago
- ☆27 · Updated 5 years ago