chncwang / InsNet
InsNet Runs Instance-dependent Neural Networks with Padding-free Dynamic Batching.
☆66 · Updated 3 years ago
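The key idea behind padding-free dynamic batching is to concatenate variable-length instances into one flat buffer and track per-instance offsets, instead of padding every instance up to the longest one in the batch. The sketch below is not InsNet's C++ API; it is a minimal, self-contained Python illustration of that padding-free layout using PyTorch's `EmbeddingBag`, which consumes a flat token tensor plus offsets directly.

```python
# Minimal sketch of a padding-free batch layout (illustrative only; not InsNet's API).
# Variable-length token sequences are concatenated into one flat tensor and
# described by per-instance offsets, so no padding tokens are ever created.
import torch
import torch.nn as nn

instances = [
    [3, 7, 7],        # instance 0: length 3
    [1, 42],          # instance 1: length 2
    [5, 5, 9, 2, 8],  # instance 2: length 5
]

# Flatten all instances into a single 1-D tensor and record where each starts.
flat = torch.tensor([tok for seq in instances for tok in seq])               # shape: (10,)
offsets = torch.tensor([0] + [len(s) for s in instances]).cumsum(0)[:-1]     # [0, 3, 5]

# EmbeddingBag takes the flat tensor + offsets and pools each instance ("sum" here),
# producing one vector per instance without ever materializing padding.
embed = nn.EmbeddingBag(num_embeddings=100, embedding_dim=8, mode="sum")
pooled = embed(flat, offsets)   # shape: (3, 8), one row per instance
print(pooled.shape)
```

Compared with padding every instance to the batch maximum, this layout spends no computation or memory on filler tokens, which is the property the description above calls padding-free.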
Alternatives and similar repositories for InsNet
Users who are interested in InsNet are comparing it to the libraries listed below.
- OneFlow models for benchmarking. ☆104 · Updated last year
- A Fast Multi-processing BERT-Inference System ☆101 · Updated 2 years ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- Place for meetup slides ☆141 · Updated 4 years ago
- Notes on reading the PyTorch source code (version 0.2.0) ☆91 · Updated 5 years ago
- DeepLearning Framework Performance Profiling Toolkit ☆287 · Updated 3 years ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated 2 years ago
- Simple CuDNN wrapper ☆30 · Updated 9 years ago
- Models and examples built with OneFlow ☆98 · Updated 10 months ago
- oneflow documentation ☆69 · Updated last year
- A small deep-learning framework with C++/Python/CUDA ☆54 · Updated 7 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated 2 years ago
- A simple deep learning framework that supports automatic differentiation and GPU acceleration. ☆59 · Updated 2 years ago
- Tutorial code on how to build your own Deep Learning System in 2k Lines ☆125 · Updated 8 years ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 4 years ago
- ☆127 · Updated 4 years ago
- My learning notes about AI, including Machine Learning and Deep Learning. ☆18 · Updated 6 years ago
- Running BERT without Padding ☆475 · Updated 3 years ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆40 · Updated 5 months ago
- AutodiffEngine ☆13 · Updated 6 years ago
- Compiler Infrastructure for Neural Networks ☆147 · Updated 2 years ago
- An implementation of sgemm_kernel on the L1d cache. ☆229 · Updated last year
- ☆79 · Updated last year
- ☆98 · Updated 4 years ago
- Transformer-related optimizations, including BERT and GPT ☆59 · Updated last year
- ☆23 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆475 · Updated last year
- Notes on reading the TensorFlow source code ☆193 · Updated 6 years ago
- Efficient Top-K implementation on the GPU ☆183 · Updated 6 years ago
- A hands-on tutorial on the core principles of TVM ☆62 · Updated 4 years ago