anordin95 / run-llama-locally

Run and explore Llama models locally with minimal dependencies on CPU
★ 181 · Updated last month
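
The repository's own code is not shown here, but as a hedged illustration of the general idea — running a Llama model on CPU with minimal dependencies — here is a minimal sketch using the llama-cpp-python library with a GGUF-quantized model. The model path is hypothetical, and this is a generic pattern, not necessarily the approach run-llama-locally takes:

```python
# Minimal CPU-only Llama inference sketch using llama-cpp-python.
# Assumes a GGUF model file has already been downloaded locally;
# "models/llama-3-8b-q4.gguf" is a placeholder path, not from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-q4.gguf",  # hypothetical local model file
    n_ctx=2048,      # context window size
    n_threads=4,     # CPU threads to use for inference
)

# Run a single completion and print the generated text.
output = llm("Q: Why run an LLM locally? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```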
