willbnu / Qwen-3.5-16G-Vram-Local
Configs, launchers, benchmarks, and tooling for running Qwen3.5 GGUF models locally with llama.cpp on a 16GB NVIDIA GPU
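As a rough illustration of what such a launcher might look like, here is a minimal sketch of serving a Qwen3.5 GGUF quant with llama.cpp's `llama-server` on a 16GB GPU. The model filename, quant level, and context size are assumptions for illustration, not taken from this repository.

```shell
# Hypothetical launch command; the model path and quant are illustrative.
llama-server \
  -m ./models/qwen3.5-14b-q4_k_m.gguf \
  -ngl 99 \
  -c 8192 \
  --port 8080
# -m    : path to the GGUF model file
# -ngl  : number of layers to offload to the GPU (99 = offload everything)
# -c    : context window size; reduce it if you run out of VRAM
# --port: HTTP port for the OpenAI-compatible server
```

A Q4_K_M quant of a ~14B model plus an 8K context typically fits in 16GB of VRAM; larger quants or contexts may require lowering `-ngl` or `-c`.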
35 stars · Mar 29, 2026 · Updated this week

Alternatives and similar repositories for Qwen-3.5-16G-Vram-Local

Users interested in Qwen-3.5-16G-Vram-Local are comparing it to the libraries listed below.
