sebgao / cTensor
A super light-weight deep learning library based on NumPy in PyTorch fashion.
☆94 · Updated 3 years ago
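As a rough illustration of what "based on NumPy in PyTorch fashion" typically means in practice, here is a minimal, self-contained sketch of a tensor type that wraps an ndarray and supports a PyTorch-style `backward()` call via reverse-mode automatic differentiation. The class and method names are illustrative assumptions for this sketch and are not taken from cTensor's actual API.

```python
# Illustrative sketch only -- names are assumptions, not cTensor's real API.
import numpy as np

class Tensor:
    """A NumPy-backed tensor with a PyTorch-style backward() pass."""
    def __init__(self, data, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=np.float64)
        self.grad = np.zeros_like(self.data)
        self._parents = parents          # tensors this one was computed from
        self._backward_fn = backward_fn  # propagates out.grad to the parents

    def __add__(self, other):
        out = Tensor(self.data + other.data, parents=(self, other))
        def backward_fn():
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = backward_fn
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, parents=(self, other))
        def backward_fn():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = backward_fn
        return out

    def sum(self):
        out = Tensor(self.data.sum(), parents=(self,))
        def backward_fn():
            self.grad += np.ones_like(self.data) * out.grad
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t._parents:
                    visit(p)
                order.append(t)
        visit(self)
        self.grad = np.ones_like(self.data)
        for t in reversed(order):
            if t._backward_fn is not None:
                t._backward_fn()

# Usage: gradients of (x * y).sum() with respect to x and y.
x = Tensor([1.0, 2.0, 3.0])
y = Tensor([4.0, 5.0, 6.0])
loss = (x * y).sum()
loss.backward()
print(x.grad)  # [4. 5. 6.]
print(y.grad)  # [1. 2. 3.]
```

Libraries in this style generally add broadcasting, more operators, and module/layer abstractions on top of the same reverse-mode core.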
Alternatives and similar repositories for cTensor
Users interested in cTensor are comparing it to the libraries listed below.
- Reading the PyTorch source code, version 0.2.0 ☆90 · Updated 5 years ago
- cnn ☆135 · Updated 5 years ago
- A small deep-learning framework with C++/Python/CUDA ☆54 · Updated 7 years ago
- A lightweight deep learning library ☆386 · Updated last month
- InsNet Runs Instance-dependent Neural Networks with Padding-free Dynamic Batching. ☆66 · Updated 3 years ago
- Simple CuDNN wrapper ☆30 · Updated 9 years ago
- A simple deep learning framework in pure Python for the purpose of learning DL ☆442 · Updated 5 months ago
- Easy to use PyTorch ☆70 · Updated 3 months ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- A simple deep learning framework that supports automatic differentiation and GPU acceleration. ☆58 · Updated 2 years ago
- tinynn with automatic differentiation ☆40 · Updated last year
- Optimize an example model with Python, CPP, and CUDA extensions and Ring-Allreduce (see the sketch after this list). ☆109 · Updated 6 years ago
- Implement a CNN framework with NumPy; easy to learn, hard to use ☆304 · Updated 7 years ago
- ☆45 · Updated 5 years ago
- Sublinear memory optimization for deep learning. https://arxiv.org/abs/1604.06174 ☆598 · Updated 5 years ago
- ☆97 · Updated 3 years ago
- A hands-on tutorial on the core principles of TVM ☆62 · Updated 4 years ago
- PyTorch Dataset Rank Dataset ☆43 · Updated 4 years ago
- A hyperparameter manager for deep learning experiments. ☆96 · Updated 2 years ago
- ☆99 · Updated 3 years ago
- Papers for deep neural network compression and acceleration ☆399 · Updated 4 years ago
- ☆169 · Updated 4 years ago
- Place for meetup slides ☆140 · Updated 4 years ago
- EasyQuant (EQ) is an efficient and simple post-training quantization method via effectively optimizing the scales of weights and activations. ☆401 · Updated 2 years ago
- This is an implementation of sgemm_kernel on L1d cache. ☆229 · Updated last year
- Deep Learning in pure C++ ☆28 · Updated 5 years ago
- A brief of TorchScript by MNIST ☆112 · Updated 3 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆200 · Updated 2 years ago
- OneFlow models for benchmarking. ☆104 · Updated 11 months ago
- The pure and clear PyTorch Distributed Training Framework. ☆276 · Updated last year
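For the Ring-Allreduce item flagged above, the following is a minimal single-process sketch of the ring all-reduce pattern, a reduce-scatter phase followed by an all-gather phase, simulated over NumPy arrays rather than a real communication backend. The function name `ring_allreduce`, the chunking scheme, and the worker count are illustrative assumptions, not code from the listed repository.

```python
# Illustrative simulation of ring all-reduce (reduce-scatter + all-gather).
# Runs in a single process over NumPy arrays; no real communication backend.
import numpy as np

def ring_allreduce(worker_grads):
    """Sum equal-length per-worker gradient vectors with the ring pattern."""
    n = len(worker_grads)
    # Each worker splits its gradient into n chunks (same split on every worker).
    chunks = [list(np.array_split(g.astype(np.float64), n)) for g in worker_grads]

    # Phase 1: reduce-scatter. At step s, worker r sends chunk (r - s) % n to
    # worker (r + 1) % n, which adds it to its own copy. After n - 1 steps,
    # worker r holds the fully summed chunk (r + 1) % n.
    for s in range(n - 1):
        sends = [(r, (r - s) % n, chunks[r][(r - s) % n].copy()) for r in range(n)]
        for r, idx, payload in sends:
            chunks[(r + 1) % n][idx] += payload

    # Phase 2: all-gather. At step s, worker r forwards the fully summed chunk
    # (r + 1 - s) % n to worker (r + 1) % n, which overwrites its own copy.
    for s in range(n - 1):
        sends = [(r, (r + 1 - s) % n, chunks[r][(r + 1 - s) % n].copy()) for r in range(n)]
        for r, idx, payload in sends:
            chunks[(r + 1) % n][idx] = payload

    return [np.concatenate(c) for c in chunks]

# Usage: four simulated workers, each holding its own gradient vector.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(10) for _ in range(4)]
reduced = ring_allreduce(grads)
expected = np.sum(grads, axis=0)
assert all(np.allclose(r, expected) for r in reduced)
```

The point of the ring layout is that each worker only ever talks to its neighbors and transfers roughly 2(n-1)/n of the gradient size in total, independent of the number of workers.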