RayFernando1337 / LLM-Calc

Instantly calculate the maximum size of a quantized language model that fits in your available RAM, helping you choose a model and quantization level your hardware can actually run for inference.
☆ 102 · Updated 3 weeks ago
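The core estimate behind a calculator like this can be sketched as: divide the usable RAM by the bytes each quantized weight occupies. The function below is a minimal illustration of that idea, not LLM-Calc's exact formula; the `overhead_fraction` default reserving memory for the OS, KV cache, and activations is a hypothetical assumption.

```python
def max_model_params(ram_gb: float, bits_per_weight: float,
                     overhead_fraction: float = 0.2) -> float:
    """Estimate the largest parameter count that fits in RAM.

    Assumes a flat per-weight cost and reserves a fraction of RAM
    for the OS, KV cache, and activations (illustrative defaults,
    not necessarily LLM-Calc's formula).
    """
    usable_bytes = ram_gb * 1e9 * (1 - overhead_fraction)
    bytes_per_param = bits_per_weight / 8
    return usable_bytes / bytes_per_param

# Example: 16 GB of RAM with 4-bit quantized weights
print(f"{max_model_params(16, 4) / 1e9:.1f}B parameters")  # → 25.6B parameters
```

Lower bit widths fit proportionally more parameters: at 8 bits the same machine fits half as many, at 2 bits twice as many.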
