AdrianBZG / LLM-distributed-finetune

Efficiently fine-tune any LLM from Hugging Face using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate training across multiple AWS GPU instances.
58 stars · Updated 2 years ago
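The repository combines DeepSpeed with Ray for multi-node training. As a rough illustration of the kind of DeepSpeed configuration such a setup relies on, here is a minimal sketch of a ZeRO stage 2 config file; the specific values are hypothetical and not taken from this repository:

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 4,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2
  }
}
```

A file like this is typically passed to the DeepSpeed engine at initialization, while an orchestrator such as Ray handles launching the workers across instances.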

Alternatives and similar repositories for LLM-distributed-finetune

Users interested in LLM-distributed-finetune are comparing it to the libraries listed below.
