lx200916 / ChatBotApp
☆38 · Updated 5 months ago
Alternatives and similar repositories for ChatBotApp
Users interested in ChatBotApp are comparing it to the libraries listed below.
- The original reference implementation of a dedicated llama.cpp backend for the Qualcomm Hexagon NPU on Android phones, https://github.com/ggml… ☆30 · Updated last month
- Code for ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices" ☆53 · Updated 7 months ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆74 · Updated this week
- High-speed and easy-to-use LLM serving framework for local deployment ☆118 · Updated 3 weeks ago
- Fast Multimodal LLM on Mobile Devices ☆1,024 · Updated this week
- mperf is an operator performance tuning toolbox for mobile and embedded platforms ☆188 · Updated 2 years ago
- LLM inference in C/C++ ☆45 · Updated last week
- Inference of RWKV v5, v6 and v7 with the Qualcomm AI Engine Direct SDK ☆79 · Updated last week
- QAI AppBuilder is designed to help developers easily execute models on WoS and Linux platforms. It encapsulates the Qualcomm® AI Runtime … ☆68 · Updated this week
- A layered, decoupled deep learning inference engine ☆75 · Updated 6 months ago
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆273 · Updated last week
- Machine Learning Compilation course by Tianqi Chen ☆42 · Updated 2 years ago
- Awesome Mobile LLMs ☆241 · Updated last month
- This is a list of awesome edge-AI inference related papers. ☆97 · Updated last year
- LLM deployment project based on ONNX. ☆43 · Updated 10 months ago
- Model acceleration / model compression (all labs completed) ☆11 · Updated last year
- ☆78 · Updated last week
- Hands-on model tuning with TVM, profiled on a Mac M1, an x86 CPU, and a GTX-1080 GPU. ☆49 · Updated 2 years ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'2022] ☆19 · Updated 3 years ago
- A llama model inference framework implemented in CUDA C++ ☆61 · Updated 9 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆68 · Updated this week
- LLM theoretical performance analysis tool supporting parameter-count, FLOPs, memory, and latency analysis. ☆104 · Updated last month
- ☆164 · Updated this week
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆48 · Updated 4 months ago
- A demo of writing a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- Efficient inference of large language models. ☆151 · Updated this week
- ☆253 · Updated last week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆109 · Updated 3 months ago
- Demonstration of running a native LLM on an Android device. ☆174 · Updated 2 weeks ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆845 · Updated 3 months ago
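The "LLM theoretical performance analysis" entry above refers to the kind of back-of-envelope model such tools implement: estimate parameter count and decode FLOPs from the architecture shape, then bound batch-1 latency by memory bandwidth. A minimal sketch of that reasoning follows; the `LLMConfig` names and the LLaMA-2-7B-like shape are illustrative assumptions, not code from any listed repository.

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    n_layers: int
    d_model: int
    d_ff: int
    vocab_size: int

def param_count(cfg: LLMConfig) -> int:
    """Weights of a decoder-only transformer: per layer, four d_model x d_model
    attention projections (Q, K, V, O) plus a three-matrix gated MLP
    (gate/up/down, as in LLaMA), plus untied input/output embeddings.
    Norm weights are negligible and omitted."""
    per_layer = 4 * cfg.d_model ** 2 + 3 * cfg.d_model * cfg.d_ff
    return cfg.n_layers * per_layer + 2 * cfg.vocab_size * cfg.d_model

def decode_flops_per_token(cfg: LLMConfig) -> int:
    """Matmul-dominated decoding costs roughly 2 FLOPs per weight per token."""
    return 2 * param_count(cfg)

def decode_latency_s(cfg: LLMConfig, mem_bw_gbs: float,
                     bytes_per_param: float = 2.0) -> float:
    """Bandwidth-bound estimate for batch-1 decoding: every weight is streamed
    from memory once per token, so latency ~= model bytes / memory bandwidth."""
    return param_count(cfg) * bytes_per_param / (mem_bw_gbs * 1e9)

# A LLaMA-2-7B-like shape on a device with ~100 GB/s of memory bandwidth.
cfg = LLMConfig(n_layers=32, d_model=4096, d_ff=11008, vocab_size=32000)
print(f"params: {param_count(cfg) / 1e9:.2f}B")             # ≈ 6.74B
print(f"fp16 tok/s: {1 / decode_latency_s(cfg, 100):.1f}")  # ≈ 7.4 tok/s
```

The bandwidth-bound view explains why the low-bit and lookup-table projects in the list matter on phones: halving `bytes_per_param` roughly doubles the decode speed.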