IST-DASLab / marlin
FP16xINT4 LLM inference kernel that achieves near-ideal (~4x) speedups up to medium batch sizes of 16-32 tokens.
1,018 stars · Last updated Sep 4, 2024
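For context, FP16xINT4 means the weights are stored as 4-bit integers with FP16 scales and are dequantized on the fly before being multiplied with FP16 activations. Below is a minimal, hypothetical PyTorch sketch of that idea; it is not marlin's kernel or API, and the group size, shapes, and function names are illustrative assumptions.

```python
# Conceptual sketch of FP16xINT4 matmul semantics -- NOT marlin's kernel.
# Assumes symmetric 4-bit weights with per-group scales (group size 128 is an
# illustrative choice); marlin fuses the dequantization into the GEMM itself.
import torch

GROUP_SIZE = 128  # assumed grouping for the quantization scales

def dequantize_int4(qweight: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    """qweight: (in_features, out_features), integer values in [0, 15].
    scales: (in_features // GROUP_SIZE, out_features), same dtype as activations."""
    w = qweight.to(scales.dtype) - 8.0                              # shift to a symmetric range
    w = w.view(-1, GROUP_SIZE, w.shape[-1]) * scales.unsqueeze(1)   # apply per-group scales
    return w.view(qweight.shape)

def fp16xint4_matmul(x: torch.Tensor, qweight: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    """x: (batch, in_features) activations."""
    return x @ dequantize_int4(qweight, scales)

# Toy usage with a batch of 16 tokens (the regime where the ~4x speedup claim applies).
# float32 is used so this runs on CPU; the real kernel operates in FP16 on GPU.
x = torch.randn(16, 256)
qw = torch.randint(0, 16, (256, 512))
sc = torch.rand(256 // GROUP_SIZE, 512) + 0.5
y = fp16xint4_matmul(x, qw, sc)   # -> shape (16, 512)
```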

Alternatives and similar repositories for marlin

Users interested in marlin are comparing it to the libraries listed below.

