mlc-ai / binary-mlc-llm-libs
☆256 · Updated last month
Alternatives and similar repositories for binary-mlc-llm-libs
Users interested in binary-mlc-llm-libs are comparing it to the libraries listed below.
- llama.cpp tutorial on an Android phone ☆132 · Updated 5 months ago
- A mobile implementation of llama.cpp ☆319 · Updated last year
- IRIS is an Android app for interfacing with GGUF / llama.cpp models locally. ☆239 · Updated 8 months ago
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago
- A mobile implementation of llama.cpp ☆26 · Updated last year
- MiniCPM on the Android platform ☆635 · Updated 6 months ago
- Run Stable Diffusion inference on an Android phone's CPU ☆159 · Updated last year
- Automatically quantize GGUF models ☆204 · Updated last week
- C++ implementation for 💫StarCoder ☆457 · Updated 2 years ago
- [ICLR-2025-SLLM Spotlight 🔥] MobiLlama: a small language model tailored for edge devices ☆663 · Updated 4 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- llama.cpp fork used by GPT4All ☆56 · Updated 7 months ago
- High-speed, easy-to-use LLM serving framework for local deployment ☆122 · Updated 2 months ago
- WebAssembly (Wasm) build and bindings for llama.cpp ☆281 · Updated last year
- Train your own small BitNet model ☆75 · Updated 11 months ago
- Tool to download models from the Hugging Face Hub and convert them to GGML/GGUF for llama.cpp ☆159 · Updated 5 months ago
- LLM inference in C/C++ ☆102 · Updated last month
- Making offline AI models accessible to all types of edge devices ☆141 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- Visual Studio Code extension for WizardCoder ☆148 · Updated 2 years ago
- Locally run an instruction-tuned, chat-style LLM (Android/Linux/Windows/Mac) ☆265 · Updated 2 years ago
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- ☆63 · Updated 10 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆716 · Updated this week
- Extension for using an alternative to GitHub Copilot (StarCoder API) in VS Code ☆100 · Updated last year
- ☆162 · Updated last month
- React Native binding of llama.cpp ☆40 · Updated 2 weeks ago
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆160 · Updated 2 years ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated last year
- Inference on CPU code for LLaMA models ☆137 · Updated 2 years ago