b4rtaz / distributed-llama

Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.
☆ 2,028 · Updated this week

Alternatives and similar repositories for distributed-llama:

Users interested in distributed-llama are comparing it to the libraries listed below.