Oneflow-Inc / oneflow-documentation
oneflow documentation
☆ 69 · Updated last year
Alternatives and similar repositories for oneflow-documentation
Users interested in oneflow-documentation are comparing it to the libraries listed below.
- Deep Learning Framework Performance Profiling Toolkit ☆ 294 · Updated 3 years ago
- OneFlow models for benchmarking ☆ 104 · Updated last year
- ☆ 23 · Updated 2 years ago
- Models and examples built with OneFlow ☆ 100 · Updated last year
- Compiler Infrastructure for Neural Networks ☆ 147 · Updated 2 years ago
- OneFlow->ONNX ☆ 43 · Updated 2 years ago
- ☆ 219 · Updated 2 years ago
- ☆ 141 · Updated last year
- Simple Dynamic Batching Inference ☆ 145 · Updated 3 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training ☆ 271 · Updated 2 years ago
- ☆ 130 · Updated 11 months ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆ 84 · Updated 2 years ago
- ☆ 152 · Updated 11 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆ 406 · Updated 4 months ago
- Place for meetup slides ☆ 140 · Updated 5 years ago
- InsNet runs instance-dependent neural networks with padding-free dynamic batching ☆ 67 · Updated 4 years ago
- heterogeneity-aware-lowering-and-optimization ☆ 257 · Updated last year
- A toolkit for developers to simplify the transformation of nn.Module instances, corresponding to PyTorch's torch.fx ☆ 13 · Updated 2 years ago
- ☆ 60 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models ☆ 98 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPU (https://arxiv.org/abs/2210.03052) ☆ 476 · Updated last year
- ☆ 192 · Updated 2 years ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆ 43 · Updated 9 months ago
- A benchmark suite designed especially for deep learning operators ☆ 42 · Updated 2 years ago
- A fast multi-processing BERT inference system ☆ 101 · Updated 3 years ago
- ☆ 103 · Updated last year
- ☆ 98 · Updated 4 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆ 122 · Updated 3 years ago
- ☆ 38 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆ 96 · Updated 3 months ago