SqueezeAILab / KVQuant

[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
305 stars · Updated 3 months ago
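To give a rough feel for the idea in the title, the sketch below simulates low-bit KV cache quantization with simple uniform ("fake") quantization, following the paper's headline observation that keys are best quantized per channel and values per token. Everything here (the `fake_quantize` helper, tensor shapes, the 3-bit setting) is an illustrative assumption, not the repo's actual API; KVQuant itself uses more sophisticated machinery such as non-uniform and pre-RoPE quantization.

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int, dim: int) -> torch.Tensor:
    """Simulated (fake) uniform quantization along `dim`.

    Computes a per-slice min/max range, rounds onto the integer grid,
    then dequantizes back to float so the rounding error is visible.
    Hypothetical helper for illustration only.
    """
    qmax = 2 ** num_bits - 1
    x_min = x.amin(dim=dim, keepdim=True)
    x_max = x.amax(dim=dim, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    q = torch.round((x - x_min) / scale).clamp(0, qmax)
    return q * scale + x_min

# Toy KV cache: (seq_len, num_heads, head_dim)
seq_len, num_heads, head_dim = 128, 8, 64
keys = torch.randn(seq_len, num_heads, head_dim)
values = torch.randn(seq_len, num_heads, head_dim)

# Keys: per-channel quantization (statistics taken across the sequence
# axis), since key activations tend to have outlier channels.
keys_q = fake_quantize(keys, num_bits=3, dim=0)

# Values: per-token quantization (statistics taken across head_dim).
values_q = fake_quantize(values, num_bits=3, dim=-1)

print("mean abs key error:  ", (keys - keys_q).abs().mean().item())
print("mean abs value error:", (values - values_q).abs().mean().item())
```

At 3 bits the cache shrinks roughly 5x versus fp16 while the per-channel/per-token grouping keeps rounding error small; the actual accuracy-vs-memory trade-offs are reported in the paper.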

Related projects

Alternatives and complementary repositories for KVQuant