woct0rdho / transformers-qwen3-moe-fused
Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bitsandbytes 4-bit quantization, and Unsloth. Also supports training LoRA over GGUF.
241 stars · Feb 19, 2026 · Updated last month

Alternatives and similar repositories for transformers-qwen3-moe-fused

Users interested in transformers-qwen3-moe-fused are comparing it to the libraries listed below.
