Gygeek / Framework-strix-halo-llm-setup
Complete guide to running large language models locally on AMD Ryzen AI Max+ 395 (Strix Halo) with 128GB unified memory. Covers BIOS config, kernel setup, ROCm installation, and llama.cpp deployment. Run 70B+ parameter models on a single APU.
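As a rough illustration of the final deployment step the description mentions, the sketch below shows how a large quantized model might be served with llama.cpp once ROCm is working. The model filename, context size, and port are illustrative assumptions, not values taken from the guide.

```shell
# Hedged sketch: serving a large GGUF model with a ROCm-enabled llama.cpp build.
# The model path and quantization below are hypothetical examples.
./llama-server \
  -m ~/models/llama-70b-q4_k_m.gguf \
  -ngl 999 \
  -c 8192 \
  --host 127.0.0.1 --port 8080
```

With 128GB of unified memory, `-ngl 999` asks llama.cpp to offload all layers to the APU's GPU; on a discrete GPU with limited VRAM this value would instead be tuned down to fit.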
45 · Dec 14, 2025 · Updated 3 months ago
