animehacker / llama-turboquant
TurboQuant for GGML: 4.57x KV Cache Compression with 72K+ Context for Llama-3.3-70B on Consumer GPUs.
35 ★ · Mar 28, 2026 · Updated last week
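To put the headline numbers in context, here is a minimal back-of-the-envelope sketch of what a 4.57x KV cache compression ratio implies at 72K context. It assumes Llama-3.3-70B's published GQA configuration (80 layers, 8 KV heads, head dimension 128) and an FP16 baseline; the 4.57x figure is taken from the repo description, not re-measured here.

```python
# Rough KV cache sizing for Llama-3.3-70B (assumed config: 80 layers,
# 8 KV heads, head dim 128, FP16 baseline). All figures are estimates.
layers, kv_heads, head_dim = 80, 8, 128
bytes_fp16 = 2
context = 72_000          # tokens, per the repo's 72K+ claim
compression = 4.57        # ratio quoted in the repo description

per_token = 2 * layers * kv_heads * head_dim * bytes_fp16   # K and V
fp16_cache = per_token * context
compressed = fp16_cache / compression

gib = 1024 ** 3
print(f"FP16 KV cache:      {fp16_cache / gib:.1f} GiB")   # ~22.0 GiB
print(f"After 4.57x:        {compressed / gib:.1f} GiB")   # ~4.8 GiB
print(f"Implied bits/value: {16 / compression:.2f}")       # ~3.5 bits
```

Under these assumptions, the uncompressed cache alone (~22 GiB) would not fit alongside weights on a consumer GPU, while ~4.8 GiB plausibly does, which is what the description's "consumer GPUs" claim rests on.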

Alternatives and similar repositories for llama-turboquant

Users interested in llama-turboquant are comparing it to the libraries listed below.
