larq/compute-engine

Highly optimized inference engine for Binarized Neural Networks
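As a rough sketch of the typical workflow: a binarized Keras model is built with larq's quantized layers and then converted into a TFLite flatbuffer that the compute engine can execute on device. The layer arguments and the `convert_keras_model` call below follow the project's documented usage as best recalled here; treat the exact names and signatures as assumptions and confirm against the repository's docs.

```python
import tensorflow as tf
import larq as lq
import larq_compute_engine as lce  # assumed pip package name: larq-compute-engine

# Build a small binarized model using larq's quantized layers.
model = tf.keras.Sequential([
    lq.layers.QuantConv2D(
        32, 3,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        input_shape=(28, 28, 1),
    ),
    tf.keras.layers.Flatten(),
    lq.layers.QuantDense(
        10,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
    ),
])

# Convert to a TFLite flatbuffer with binary ops that the
# Larq Compute Engine interpreter can run on device
# (converter API assumed from the project's docs).
flatbuffer = lce.convert_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(flatbuffer)
```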

Related projects: