b4rtaz / distributed-llama
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference.
2,865 stars · Feb 10, 2026 · Updated last month

Alternatives and similar repositories for distributed-llama

Users interested in distributed-llama are comparing it to the libraries listed below.
