trzy / llava-cpp-server
LLaVA server (llama.cpp).
☆178 · Updated last year
Alternatives and similar repositories for llava-cpp-server:
Users interested in llava-cpp-server are comparing it to the libraries listed below.
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- Python bindings for ggml ☆140 · Updated 6 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- GRDN.AI app for garden optimization ☆70 · Updated last year
- Demo Python script for interacting with a llama.cpp server using the Whisper API, a microphone, and webcam devices ☆46 · Updated last year
- Inference of large multimodal models in C/C++: LLaVA and others ☆46 · Updated last year
- Video and code lecture on building nanoGPT from scratch ☆66 · Updated 9 months ago
- Local ML voice chat using high-end models ☆161 · Updated this week
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated last year
- CLIP inference in plain C/C++ with no extra dependencies ☆486 · Updated 7 months ago
- Extends the original llama.cpp repo to support the RedPajama model ☆117 · Updated 6 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆53 · Updated 11 months ago
- Run PaliGemma in real time ☆131 · Updated 10 months ago
- Our own implementation of "Layer-Selective Rank Reduction" ☆233 · Updated 9 months ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆87 · Updated last year
- A fast batching API for serving LLM models ☆182 · Updated 10 months ago
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆162 · Updated last year
- ☆154 · Updated last year
- An implementation of Self-Extend, which expands the context window via grouped attention ☆118 · Updated last year
- Mistral 7B playing DOOM ☆130 · Updated 8 months ago
- llama.cpp with the BakLLaVA model, describing what it sees ☆384 · Updated last year
- A simple web UI / frontend for MLX mlx-lm using Streamlit ☆245 · Updated last month
- Run inference on the Replit-3B code-instruct model using a CPU ☆154 · Updated last year
- Inference code for Mixtral-8x7B-32kseqlen ☆99 · Updated last year
- WebGPU LLM inference tuned by hand ☆149 · Updated last year
- Port of Suno's Bark TTS transformer to Apple's MLX framework ☆78 · Updated last year
- ☆152 · Updated 8 months ago
- Testing LLM reasoning abilities with family-relationship quizzes ☆62 · Updated last month
- Generate synthetic data using OpenAI, Mistral AI, or Anthropic AI ☆224 · Updated 10 months ago