JackZeng0208 / llama.cpp-android-tutorial
llama.cpp tutorial on Android phone
☆138 · Updated 7 months ago
Alternatives and similar repositories for llama.cpp-android-tutorial
Users interested in llama.cpp-android-tutorial are comparing it to the repositories listed below.
- ☆273 · Updated last month
- A mobile implementation of llama.cpp · ☆323 · Updated last year
- This is a simple shell script to install the alpaca llama 7B model on Termux for Android phones. All credit goes to the original develop… · ☆64 · Updated 2 years ago
- A set of bash scripts to automate deployment of GGML/GGUF models [default: RWKV] with KoboldCpp on Android (Termux) · ☆43 · Updated last year
- Run Stable Diffusion inference on an Android phone's CPU · ☆158 · Updated 2 years ago
- Instructions for installing Open Interpreter on your Android device. · ☆237 · Updated last year
- ☆65 · Updated last year
- Run any GGUF SLM/LLM locally, on-device, on Android · ☆610 · Updated 2 weeks ago
- Automatically quantize GGUF models · ☆218 · Updated last month
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) a… · ☆350 · Updated this week
- React Native bindings for llama.cpp · ☆45 · Updated last week
- A mobile implementation of llama.cpp · ☆26 · Updated 2 years ago
- A minimal Android demo app for Kokoro-TTS · ☆38 · Updated 10 months ago
- Tool to download models from the Hugging Face Hub and convert them to GGML/GGUF for llama.cpp · ☆166 · Updated 7 months ago
- Demonstration of running a native LLM on an Android device. · ☆205 · Updated last week
- ☆127 · Updated last year
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering, and hacking around… · ☆53 · Updated last year
- Evaluating and unaligning Chinese LLM censorship · ☆70 · Updated 7 months ago
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on a GeForce GPU on Windows inste… · ☆126 · Updated last year
- An Open WebUI function for a better R1 experience · ☆78 · Updated 9 months ago
- Port of Facebook's LLaMA model in C/C++ · ☆105 · Updated 3 weeks ago
- Run SD ONNX models on Termux · ☆23 · Updated 2 years ago
- Docker Compose setup to run vLLM on Windows · ☆111 · Updated last year
- Run SD1.x/2.x/3.x, SDXL, and FLUX.1 on your phone · ☆69 · Updated 5 months ago
- MiniCPM on the Android platform · ☆634 · Updated 9 months ago
- Train your own small BitNet model · ☆75 · Updated last year
- PyPlexitas is an open-source Python CLI alternative to Perplexity AI, designed to perform web searches, scrape content, generate embeddin… · ☆36 · Updated last year
- A locally running LLM with internet access · ☆97 · Updated 5 months ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… · ☆42 · Updated last year
- This repository represents my final assignment for "Module 3 - Android App Development" at Syntax Institut. · ☆27 · Updated last year