Gygeek / Framework-strix-halo-llm-setup
Complete guide to running large language models locally on AMD Ryzen AI Max+ 395 (Strix Halo) with 128GB unified memory. Covers BIOS config, kernel setup, ROCm installation, and llama.cpp deployment. Run 70B+ parameter models on a single APU.
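The description names a concrete pipeline (ROCm install, then llama.cpp deployment on the Strix Halo iGPU). A minimal sketch of what the final step typically looks like; the model filename and directory are placeholders, and the build flags are assumed from common llama.cpp ROCm usage rather than taken from this repo:

```shell
# Hypothetical build-and-launch of llama.cpp with the HIP (ROCm) backend.
# gfx1151 is the GPU target commonly cited for Strix Halo; verify against
# your ROCm version. Model path is a placeholder, not from the repo.

# Build with the HIP backend enabled.
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1151
cmake --build build -j

# Serve a quantized large model, offloading all layers into unified memory.
./build/bin/llama-server \
  -m models/model-70b-q4_k_m.gguf \
  --n-gpu-layers 999 \
  --ctx-size 8192
```

With 128GB of unified memory, a 4-bit quantized 70B model (roughly 40GB of weights plus KV cache) fits comfortably, which is the scenario the repo description highlights.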
36 · Dec 14, 2025 · Updated 2 months ago
