An open, optimized software library project for the ARM® Architecture
☆1,534 · Updated Dec 9, 2022
Alternatives and similar repositories for Ne10
Users interested in Ne10 are comparing it to the libraries listed below.
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies. ☆3,139 · Updated Apr 23, 2026
- Just my local copy of math-neon with a build script. ☆95 · Updated Aug 10, 2018
- A platform-independent header that allows C/C++ code containing ARM NEON intrinsic functions to compile for x86 target systems using SSE. ☆490 · Updated this week
- Low-precision matrix multiplication. ☆1,841 · Updated Jan 29, 2024
- Makes ARM NEON documentation accessible (with examples). ☆409 · Updated Apr 13, 2024
- C++ image processing and machine learning library using SIMD: SSE, AVX, AVX-512, and AMX for x86/x64, NEON for ARM, HVX for Hexagon. ☆2,245 · Updated this week
- Quantized Neural Network PACKage: a mobile-optimized implementation of quantized neural network operators. ☆1,548 · Updated Aug 28, 2019
- Arm NN ML Software. ☆1,301 · Updated Jan 23, 2026
- Optimized implementations of various library functions for ARM architecture processors. ☆690 · Updated Apr 8, 2026
- Documentation on ARM NEON and the meaning of its instructions (in Chinese). ☆248 · Updated May 21, 2019
- Heterogeneous Run Time version of Caffe: adds heterogeneous capabilities to Caffe using a heterogeneous computing infrastructure framework. ☆269 · Updated Oct 16, 2018
- ncnn is a high-performance neural network inference framework optimized for the mobile platform. ☆23,151 · Updated Apr 22, 2026
- Automatically exported from code.google.com/p/math-neon. ☆40 · Updated Apr 20, 2015
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,037 · Updated Jun 17, 2024
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,227 · Updated Sep 24, 2019
- Fast, modern C++ DSP framework: FFT, sample rate conversion, FIR/IIR/biquad filters (SSE, AVX, AVX-512, ARM NEON, RISC-V RVV). ☆1,859 · Updated Apr 8, 2026
- Acceleration package for neural networks on multi-core CPUs. ☆1,704 · Updated Jun 11, 2024
- Arm NEON optimization practice. ☆393 · Updated Dec 22, 2020
- A language for fast, portable data-parallel computation. ☆6,521 · Updated this week
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. ☆4,521 · Updated Mar 6, 2025
- ☆2,010 · Updated Jul 29, 2023
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆627 · Updated Feb 9, 2026
- The Fastest Fourier Transform in the South. ☆558 · Updated Jul 14, 2024
- Automatically exported from code.google.com/p/sse2neon. ☆289 · Updated Jul 21, 2020
- Demo code from my blog. ☆56 · Updated Dec 20, 2023
- OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version. ☆7,393 · Updated this week
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆958 · Updated Apr 11, 2025
- TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. ☆4,631 · Updated May 9, 2025
- CMSIS Version 5 Development Repository. ☆1,586 · Updated Sep 3, 2024
- MNN: a blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆15,009 · Updated this week
- Vector math library with NEON/SSE support. ☆355 · Updated Jan 21, 2024
- Open-source library for GPU-accelerated execution of trained deep convolutional neural networks on Android. ☆543 · Updated Apr 12, 2017
- CPU INFOrmation library (x86/x86-64/ARM/ARM64; Linux/Windows/Android/macOS/iOS). ☆1,169 · Updated Apr 15, 2026
- A Fast Fourier Transform (FFT) library that tries to Keep It Simple, Stupid. ☆1,896 · Updated Apr 22, 2026
- High-performance cross-platform inference engine; Anakin runs on x86 CPUs, Arm, NVIDIA GPUs, AMD GPUs, Bitmain, and Cambricon devices. ☆537 · Updated Sep 23, 2022
- PaddlePaddle high-performance deep learning inference engine for mobile and edge. ☆7,248 · Updated this week
- A software library containing BLAS functions written in OpenCL. ☆864 · Updated Aug 2, 2024
- DO NOT CHECK OUT THESE FILES FROM GITHUB UNLESS YOU KNOW WHAT YOU ARE DOING. ☆3,065 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web. ☆2,326 · Updated this week
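The Winograd generator entry above refers to minimal filtering algorithms. As a brief illustration of the idea (standard textbook material, not code from that repository), the smallest instance F(2,3) computes two outputs of a 3-tap convolution with four multiplications instead of the naive six:

```latex
% Winograd F(2,3): inputs d_0..d_3, filter g_0..g_2, outputs y_0, y_1
m_1 = (d_0 - d_2)\,g_0, \qquad
m_2 = (d_1 + d_2)\,\tfrac{g_0 + g_1 + g_2}{2}, \\
m_3 = (d_2 - d_1)\,\tfrac{g_0 - g_1 + g_2}{2}, \qquad
m_4 = (d_1 - d_3)\,g_2, \\
y_0 = m_1 + m_2 + m_3, \qquad
y_1 = m_2 - m_3 - m_4.
```

Expanding the terms confirms $y_0 = d_0 g_0 + d_1 g_1 + d_2 g_2$ and $y_1 = d_1 g_0 + d_2 g_1 + d_3 g_2$, the two sliding-window dot products; the filter-side transforms can be precomputed once per kernel, which is why the technique pays off for convolutional networks.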