PaddlePaddle / continuous_integration
☆16 · Updated last year
Alternatives and similar repositories for continuous_integration
Users interested in continuous_integration are comparing it to the libraries listed below.
- Macro Continuous Evaluation Platform for Paddle. ☆19 · Updated 5 years ago
- hapi is a high-level API that supports both static and dynamic execution modes. ☆75 · Updated 3 years ago
- Paddle Continuous Evaluation, kept up to date. ☆26 · Updated 4 years ago
- Contributions to PaddlePaddle from third-party developers. ☆20 · Updated 3 years ago
- Documentation for PaddlePaddle. ☆273 · Updated this week
- ☆80 · Updated last month
- ☆18 · Updated 5 years ago
- ☆20 · Updated 2 years ago
- PaddlePaddle large-model development kit, providing end-to-end development toolchains for large language models, cross-modal large models, biocomputing large models, and more. ☆475 · Updated last year
- Easy & effective application framework for PaddlePaddle. ☆34 · Updated 5 years ago
- PaddlePaddle TestSuite. ☆47 · Updated this week
- Upgrade paddle-1.x code to paddle-2.0. ☆12 · Updated 4 years ago
- PaddleSlim is an open-source library for deep model compression and architecture search. ☆1,611 · Updated last month
- Deep learning model converter for PaddlePaddle. ☆765 · Updated last month
- Compiler Infrastructure for Neural Networks. ☆147 · Updated 2 years ago
- PaddlePaddle Developer Community. ☆128 · Updated this week
- A flexible, high-performance serving framework for machine learning models (PaddlePaddle serving deployment framework). ☆919 · Updated last week
- Take neural networks as APIs for human-like AI. ☆20 · Updated 6 years ago
- SOTA benchmark. ☆18 · Updated 2 years ago
- ONNX Model Exporter for PaddlePaddle. ☆876 · Updated 5 months ago
- PaddlePaddle custom device implementation (PaddlePaddle custom hardware integration). ☆101 · Updated this week
- An open-source AI benchmark for industrial applications. ☆11 · Updated 2 years ago
- ☆268 · Updated 3 weeks ago
- Adlik: Toolkit for Accelerating Deep Learning Inference. ☆810 · Updated last year
- This repo collects frequently asked questions. ☆29 · Updated 4 years ago
- ☆17 · Updated 10 months ago
- A library for high-performance deep learning inference on NVIDIA GPUs. ☆556 · Updated 3 years ago
- A primitives library for neural networks. ☆1,370 · Updated last year
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆535 · Updated 3 years ago
- To make it easy to benchmark AI accelerators. ☆193 · Updated 2 years ago