janhq / cortex.llamacpp
cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime.
☆41 · Updated 6 months ago
Alternatives and similar repositories for cortex.llamacpp
Users interested in cortex.llamacpp compare it to the libraries listed below.
- TTS support with GGML · ☆209 · Updated 3 months ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… · ☆42 · Updated last year
- Lightweight C inference for Qwen3 GGUF. Multi-turn prefix caching & batch processing. · ☆21 · Updated 4 months ago
- ☆26 · Updated 11 months ago
- Running Microsoft's BitNet via Electron, React & Astro · ☆49 · Updated 3 months ago
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI agent applications (RAG,… · ☆55 · Updated last year
- A chat UI for llama.cpp · ☆15 · Updated last month
- Port of Suno AI's Bark in C/C++ for fast inference · ☆54 · Updated last year
- A ggml (C++) re-implementation of tortoise-tts · ☆194 · Updated last year
- Thin wrapper around GGML to make life easier · ☆41 · Updated 2 months ago
- Yet Another (LLM) Web UI, made with Gemini · ☆12 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization · ☆58 · Updated last year
- Train your own small BitNet model · ☆76 · Updated last year
- Inference of large multimodal models in C/C++. LLaVA and others. · ☆48 · Updated 2 years ago
- llama.cpp fork used by GPT4All · ☆55 · Updated 10 months ago
- Use safetensors with ONNX 🤗 · ☆80 · Updated 3 months ago
- Locally running LLM with internet access · ☆97 · Updated 6 months ago
- ggml implementation of embedding models, including SentenceTransformer and BGE · ☆63 · Updated 2 years ago
- Experiments with BitNet inference on CPU · ☆55 · Updated last year
- General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … · ☆52 · Updated 10 months ago
- ☆108 · Updated 4 months ago
- Simple, fast, parallel Hugging Face GGML model downloader written in Python · ☆24 · Updated 2 years ago
- On-device streaming text-to-speech engine powered by deep learning · ☆127 · Updated last week
- AirLLM 70B inference with a single 4 GB GPU · ☆14 · Updated 6 months ago
- Run Ollama & GGUF easily with a single command · ☆52 · Updated last year
- Course project for COMP4471 on RWKV · ☆17 · Updated last year
- Something similar to Apple Intelligence? · ☆59 · Updated last year
- An endpoint server for efficiently serving quantized open-source LLMs for code · ☆58 · Updated 2 years ago
- Port of Microsoft's BioGPT in C/C++ using ggml · ☆85 · Updated last year
- Tcurtsni: Reverse-Instruction Chat; ever wonder what your LLM wants to ask you? · ☆23 · Updated last year