a-ghorbani / pocketpal-ai
An app that brings language models directly to your phone.
☆5,156 · Updated 2 weeks ago
Alternatives and similar repositories for pocketpal-ai
Users interested in pocketpal-ai are comparing it to the libraries listed below.
- Simple frontend for LLMs built in React Native. ☆1,936 · Updated last week
- A modern and easy-to-use client for Ollama. ☆1,561 · Updated last month
- Run any GGUF SLMs/LLMs locally, on-device on Android. ☆578 · Updated last week
- Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely. ☆2,192 · Updated 4 months ago
- No need for Termux: start the Ollama service with one click on an Android device. ☆241 · Updated 6 months ago
- Run LLaMA and other large language models offline on iOS and macOS using the GGML library. ☆1,915 · Updated 2 months ago
- LM Studio TypeScript SDK. ☆1,409 · Updated last week
- A gallery that showcases on-device ML/GenAI use cases and lets people try and use models locally. ☆14,395 · Updated this week
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,867 · Updated last week
- Kernels and AI inference engine for mobile devices. ☆3,759 · Updated this week
- Go manage your Ollama models. ☆1,579 · Updated 2 weeks ago
- Reliable model swapping for any local OpenAI-compatible server (llama.cpp, vLLM, etc.). ☆1,933 · Updated this week
- VS Code extension for LLM-assisted code/text completion. ☆1,062 · Updated last week
- A text-to-speech (TTS), speech-to-text (STT), and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speec… ☆2,946 · Updated last week
- Run Stable Diffusion on Android devices with Snapdragon NPU acceleration; also supports CPU/GPU inference. ☆1,240 · Updated 3 weeks ago
- LM Studio CLI. ☆3,906 · Updated this week
- The terminal client for Ollama. ☆2,269 · Updated last month
- Stable Diffusion AI client app for Android. ☆1,067 · Updated last week
- Distributed LLM inference: connect home devices into a powerful cluster to accelerate LLM inference; more devices means faster inference. ☆2,750 · Updated 3 weeks ago
- An awesome repository of local AI tools. ☆1,742 · Updated last year
- Effortlessly run LLM backends, APIs, frontends, and services with one command. ☆2,154 · Updated last week
- Making the community's best AI chat models available to everyone. ☆1,985 · Updated 9 months ago
- A minimal LLM chat app that runs entirely in your browser. ☆1,036 · Updated last month
- Use your locally running AI models to assist you in your web browsing. ☆7,299 · Updated this week
- Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚. ☆32,584 · Updated 3 weeks ago
- A collection of 🤗 Transformers.js demos and example applications. ☆1,851 · Updated this week
- Convert any PDF into a podcast episode! ☆2,508 · Updated 11 months ago
- The open-source AI-native IDE. ☆2,181 · Updated 9 months ago
- Big and small LLMs working together. ☆1,209 · Updated this week
- LM Studio Python SDK. ☆705 · Updated last month