Oneflow-Inc / models
Models and examples built with OneFlow
☆96 · Updated 5 months ago
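The repositories in Oneflow-Inc/models target OneFlow's PyTorch-style Python API. As a rough illustration only (a generic sketch, not code taken from the repo; the `TinyMLP` module and its dimensions are made up here), defining and running a model looks like this:

```python
# Minimal sketch of OneFlow's PyTorch-style API (illustrative, not from Oneflow-Inc/models).
import oneflow as flow
import oneflow.nn as nn


class TinyMLP(nn.Module):
    """A toy MLP used only to show the Module / forward-pass pattern."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)


model = TinyMLP()
x = flow.randn(8, 784)   # batch of 8 random inputs
logits = model(x)        # forward pass -> shape (8, 10)
print(logits.shape)
```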
Alternatives and similar repositories for models:
Users interested in models are comparing it to the libraries listed below.
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆397 · Updated 2 months ago
- ☆78 · Updated last year
- ☆214 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- OneFlow models for benchmarking. ☆105 · Updated 7 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- OneFlow documentation ☆68 · Updated 8 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated last year
- ☆127 · Updated 2 months ago
- OneFlow->ONNX ☆42 · Updated last year
- ☆139 · Updated 10 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆129 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- Datasets, Transforms and Models specific to Computer Vision ☆84 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- Deep Learning Framework Performance Profiling Toolkit ☆285 · Updated 2 years ago
- Compiler Infrastructure for Neural Networks ☆145 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆250 · Updated 9 months ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆471 · Updated last year
- Running BERT without Padding ☆472 · Updated 3 years ago
- ☆44 · Updated this week
- ☆23 · Updated last year
- A Tight-fisted Optimizer ☆47 · Updated 2 years ago
- Export LLaMA to ONNX ☆115 · Updated 2 months ago
- OneFlow Serving ☆20 · Updated 2 months ago
- ☆145 · Updated 2 months ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆61 · Updated last year
- Automatic precision-diff toolkit for Paddle ☆49 · Updated 11 months ago
- ☆52 · Updated last year
- InsNet runs instance-dependent neural networks with padding-free dynamic batching. ☆66 · Updated 3 years ago