AdrianBZG / LLM-distributed-finetune

Efficiently fine-tune any LLM from Hugging Face using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate training across multiple AWS GPU instances.
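The general pattern the description refers to looks roughly like this: Ray AIR's `TorchTrainer` launches a training loop on several GPU workers, and each worker runs a Hugging Face `Trainer` configured with DeepSpeed. The sketch below is a minimal illustration of that pattern, not the repository's actual entrypoint; the model name, dataset, and DeepSpeed settings are placeholders.

```python
# Minimal sketch: Ray AIR (Ray 2.x) orchestrating DeepSpeed fine-tuning of a
# Hugging Face causal LM across multiple GPU workers. All names below are
# illustrative placeholders, not this repo's configuration.
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config):
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(config["model_name"])
    tokenizer.pad_token = tokenizer.eos_token  # GPT-style tokenizers lack a pad token
    model = AutoModelForCausalLM.from_pretrained(config["model_name"])

    # Placeholder dataset; the repo would load its own training data here.
    dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    # DeepSpeed ZeRO stage 2 config handed to the HF Trainer; "auto" values
    # are filled in from TrainingArguments at runtime.
    ds_config = {
        "zero_optimization": {"stage": 2},
        "train_micro_batch_size_per_gpu": "auto",
        "gradient_accumulation_steps": "auto",
    }
    args = TrainingArguments(
        output_dir="/tmp/finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        deepspeed=ds_config,
    )
    # mlm=False turns input_ids into labels for causal-LM training.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    Trainer(model=model, args=args, train_dataset=tokenized,
            data_collator=collator).train()


# Ray AIR spawns the workers (e.g. across AWS GPU instances in a Ray cluster)
# and sets up the torch.distributed environment each worker needs.
trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"model_name": "EleutherAI/gpt-neo-125m"},
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)
result = trainer.fit()
```

Run against an existing Ray cluster (e.g. one provisioned on AWS with the Ray cluster launcher), this scales out simply by raising `num_workers`; DeepSpeed's ZeRO partitioning keeps per-GPU memory use manageable as the model grows.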

Related projects

Alternatives and complementary repositories for LLM-distributed-finetune