karelnagel / llama-app

Run LLaMA inference on CPU, with Rust 🦀🚀🦙
☆ 18 · Updated last year
