anordin95 / run-llama-locally

Run and explore Llama models locally with minimal dependencies on CPU
183 stars · Updated last month
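
To give a sense of what CPU-only local inference with minimal dependencies typically looks like, here is a minimal sketch using llama-cpp-python and a quantized GGUF model. This is an assumption about the general approach, not the repository's own code, and the model path is hypothetical.

```python
# Minimal sketch of CPU-only Llama inference via llama-cpp-python.
# Assumes a GGUF model file has already been downloaded locally;
# the path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads to use for inference
)

output = llm(
    "Explain what a GGUF file is in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```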
