maidacundo / MoE-LoRA

Adapt an LLM into a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting LoRA adapters into the FFN layers.
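
Below is a minimal sketch of the idea: a frozen FFN projection wrapped with several LoRA adapters ("experts") and a token-level router that mixes their low-rank updates. This assumes a PyTorch setup; the class name `MoELoRALinear`, the softmax router, and all hyperparameters are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of MoE-LoRA on an FFN projection; not the repo's real code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):
    """Wraps a frozen linear layer with a mixture of LoRA experts."""

    def __init__(self, base: nn.Linear, num_experts: int = 4,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pretrained weights stay frozen
            p.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        # One low-rank (A, B) pair per expert; B starts at zero so the
        # adapted layer initially matches the base layer exactly.
        self.lora_A = nn.Parameter(torch.randn(num_experts, rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, out_f, rank))
        self.router = nn.Linear(in_f, num_experts)  # token-level gating
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., in_features)
        gates = F.softmax(self.router(x), dim=-1)                 # (..., E)
        down = torch.einsum("...i,eri->...er", x, self.lora_A)    # (..., E, r)
        up = torch.einsum("...er,eor->...eo", down, self.lora_B)  # (..., E, out)
        lora_out = torch.einsum("...e,...eo->...o", gates, up) * self.scaling
        return self.base(x) + lora_out

# Usage: wrap an FFN up-projection and run a batch of token embeddings.
ffn_up = nn.Linear(768, 3072)
moe_ffn_up = MoELoRALinear(ffn_up, num_experts=4, rank=8)
y = moe_ffn_up(torch.randn(2, 16, 768))  # -> shape (2, 16, 3072)
```

Because only `lora_A`, `lora_B`, and the router are trainable, the parameter overhead per layer is roughly `num_experts * rank * (in + out)` on top of the frozen base weights.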
