maidacundo / MoE-LoRA

Adapt an LLM into a Mixture-of-Experts (MoE) model using parameter-efficient fine-tuning (LoRA), injecting LoRA adapters into the feed-forward network (FFN) layers. A minimal sketch of the idea follows below.
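
The repository's exact implementation isn't reproduced here, but the core idea can be sketched in PyTorch: freeze a base FFN linear layer, attach several LoRA adapters as "experts," and use a small learned router to mix their outputs. The class names (`LoRAExpert`, `MoELoRALinear`) and hyperparameters (`r`, `alpha`, `num_experts`, `top_k`) are illustrative assumptions, not this repo's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: x -> (alpha / r) * B(A(x))."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # A is initialized small, B at zero, so the adapter starts as a no-op.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(F.linear(x, self.A), self.B) * self.scaling


class MoELoRALinear(nn.Module):
    """Wraps a frozen linear layer with a routed mixture of LoRA experts.

    Hypothetical sketch: the base weights stay frozen; only the router
    and the LoRA experts are trained.
    """

    def __init__(self, base: nn.Linear, num_experts: int = 4, top_k: int = 2,
                 r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained FFN weights frozen
        self.experts = nn.ModuleList(
            LoRAExpert(base.in_features, base.out_features, r, alpha)
            for _ in range(num_experts)
        )
        self.router = nn.Linear(base.in_features, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        logits = self.router(x)                          # (..., num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # pick top-k experts per token
        weights = weights.softmax(dim=-1)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1).to(x.dtype)
                out = out + mask * weights[..., k:k + 1] * expert(x)
        return out


# Usage sketch: replace an FFN projection with its MoE-LoRA wrapper.
ffn = nn.Linear(768, 3072)
moe_ffn = MoELoRALinear(ffn, num_experts=4, top_k=2)
tokens = torch.randn(2, 16, 768)
print(moe_ffn(tokens).shape)  # torch.Size([2, 16, 3072])
```

In practice one would apply this wrapper to the FFN projections of each transformer block; attention projections are left untouched, matching the description above.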
