woct0rdho / transformers-qwen3-moe-fused

Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bnb 4-bit quantization, and Unsloth. It is also possible to train LoRA over GGUF.
235 · Feb 5, 2026 · Updated last week
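
The description suggests a workflow of LoRA fine-tuning over a bnb 4-bit quantized Qwen3 MoE model. The sketch below shows that general setup using only standard Transformers, PEFT, and bitsandbytes APIs; it does not show the repo's own fused-layer patching step, and the model id and LoRA target modules are assumptions, not values taken from the repository.

```python
# Minimal sketch: LoRA over a 4-bit quantized Qwen3 MoE model.
# The fused-MoE patching provided by transformers-qwen3-moe-fused is NOT shown here;
# this only illustrates the surrounding Transformers/PEFT/bnb setup it is compatible with.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen3-30B-A3B"  # assumed checkpoint; substitute the MoE model you train

# Load the base model in bnb 4-bit (NF4) with bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters; target modules here are the usual attention projections,
# chosen for illustration only (expert projections depend on the fused layer's module names).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```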

Alternatives and similar repositories for transformers-qwen3-moe-fused

Users interested in transformers-qwen3-moe-fused are comparing it to the libraries listed below.
