IST-DASLab / marlin

FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.
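To illustrate what "FP16xINT4" means here, below is a minimal numpy sketch of the underlying idea: weights are stored as packed 4-bit integers with per-column scales/zero-points, dequantized to FP16, and multiplied against FP16 activations. This is only a reference-style illustration; Marlin's actual CUDA kernel fuses unpacking, dequantization, and the matmul on-chip, and all function names and the quantization layout here are illustrative assumptions.

```python
import numpy as np

def pack_int4(q):
    """Pack int4 values (0..15) two per byte along the last axis, low nibble first.
    Illustrative layout, not Marlin's actual weight format."""
    q = q.astype(np.uint8)
    return q[..., 0::2] | (q[..., 1::2] << 4)

def unpack_int4(packed):
    """Inverse of pack_int4: recover the int4 values in 0..15."""
    out = np.empty(packed.shape[:-1] + (packed.shape[-1] * 2,), dtype=np.uint8)
    out[..., 0::2] = packed & 0x0F
    out[..., 1::2] = packed >> 4
    return out

def fp16_int4_matmul(x_fp16, w_packed, scales, zeros):
    """Dequantize int4 weights to fp16, then matmul with fp16 activations.
    Uses a simple per-output-column affine scheme: w = (q - zero) * scale."""
    q = unpack_int4(w_packed).astype(np.float16)
    w = ((q - zeros) * scales).astype(np.float16)  # broadcast per-column params
    return x_fp16 @ w
```

The speedup in the real kernel comes from the 4x smaller weight reads (memory-bound at small batch sizes), not from the arithmetic, which still happens in FP16.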

Related projects

Alternatives and complementary repositories for marlin