menloresearch / cortex.llamacpp
cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime.
☆42 · Updated 3 months ago
Alternatives and similar repositories for cortex.llamacpp
Users interested in cortex.llamacpp are comparing it to the libraries listed below.
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated last year
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI Agent applications (RAG,… ☆52 · Updated last year
- TTS support with GGML ☆180 · Updated this week
- Yet Another (LLM) Web UI, made with Gemini ☆12 · Updated 9 months ago
- Running Microsoft's BitNet via Electron, React & Astro ☆44 · Updated 2 weeks ago
- Lightweight C inference for Qwen3 GGUF, with multi-turn prefix caching & batch processing ☆18 · Updated last month
- llama.cpp fork used by GPT4All ☆57 · Updated 7 months ago
- ☆102 · Updated last month
- Locally running LLM with internet access ☆97 · Updated 3 months ago
- A ggml (C++) re-implementation of tortoise-tts ☆189 · Updated last year
- ☆24 · Updated 8 months ago
- A chat UI for llama.cpp ☆15 · Updated last month
- Train your own small BitNet model ☆75 · Updated 11 months ago
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- AirLLM 70B inference with a single 4GB GPU ☆14 · Updated 3 months ago
- Inference of Large Multimodal Models in C/C++: LLaVA and others ☆48 · Updated 2 years ago
- LLM Ripper is a framework for component extraction (embeddings, attention heads, FFNs), activation capture, functional analysis, and adap… ☆45 · Updated this week
- Thin wrapper around GGML to make life easier ☆39 · Updated 3 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding without retraining ☆42 · Updated last month
- Generate a llama-quantize command to copy the quantization parameters of any GGUF ☆24 · Updated 2 months ago
- Run multiple resource-heavy Large Models (LMs) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆80 · Updated last week
- Simple, fast, parallel Hugging Face GGML model downloader written in Python ☆24 · Updated 2 years ago
- SPLAA is an AI assistant framework that utilizes voice recognition, text-to-speech, and tool-calling capabilities to provide a conversati… ☆29 · Updated 5 months ago
- Lightweight continuous batching with OpenAI compatibility, using Hugging Face Transformers, including T5 and Whisper ☆28 · Updated 6 months ago
- Something similar to Apple Intelligence? ☆61 · Updated last year
- Resources regarding evML (edge-verified machine learning) ☆20 · Updated 9 months ago
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆10 · Updated 3 weeks ago
- Spotlight-like client for Ollama on Windows ☆28 · Updated last year