woct0rdho / transformers-qwen3-moe-fused

Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bnb 4-bit quantization, and Unsloth. It is also possible to train LoRA over GGUF.
229 stars · Updated last week

Alternatives and similar repositories for transformers-qwen3-moe-fused

Users interested in transformers-qwen3-moe-fused are comparing it to the libraries listed below.
