AdrianBZG / LLM-distributed-finetune
Efficiently fine-tune any LLM from HuggingFace using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate training across multiple AWS GPU instances.
☆55 · Updated last year
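To make the description above concrete, here is a minimal sketch, assuming Ray 2.x's `TorchTrainer` and DeepSpeed, of what orchestrating a HuggingFace fine-tune this way typically looks like. It is not the repo's actual training script; the model name, step count, and DeepSpeed config are placeholder assumptions:

```python
# A minimal sketch, assuming Ray 2.x and DeepSpeed; not LLM-distributed-finetune's
# actual code. Model name, steps, and the DeepSpeed config are placeholders.
import deepspeed
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from transformers import AutoModelForCausalLM, AutoTokenizer

def train_loop_per_worker(config):
    # Each Ray worker loads the model; DeepSpeed ZeRO shards optimizer state
    # across the workers' GPUs.
    model = AutoModelForCausalLM.from_pretrained(config["model_name"])
    engine, _, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config={
            "train_micro_batch_size_per_gpu": 1,
            "zero_optimization": {"stage": 2},
            "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
            "fp16": {"enabled": True},
        },
    )
    tokenizer = AutoTokenizer.from_pretrained(config["model_name"])
    batch = tokenizer("Hello, world!", return_tensors="pt").to(engine.device)
    for _ in range(config["steps"]):
        # Standard causal-LM objective: the inputs double as the labels.
        loss = engine(**batch, labels=batch["input_ids"]).loss
        engine.backward(loss)
        engine.step()

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"model_name": "gpt2", "steps": 10},
    scaling_config=ScalingConfig(num_workers=2, use_gpu=True),  # one GPU per worker
)
# trainer.fit()  # launches the distributed run across the Ray cluster
```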
Alternatives and similar repositories for LLM-distributed-finetune:
Users interested in LLM-distributed-finetune are comparing it to the libraries listed below.
- ☆116 · Updated last year
- Experiments with inference on Llama ☆104 · Updated 9 months ago
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆122 · Updated last month
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆51 · Updated this week
- ☆54 · Updated 6 months ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆107 · Updated 9 months ago
- Ray - A curated list of resources: https://github.com/ray-project/ray ☆52 · Updated last month
- Docker image for NVIDIA GH200 machines, optimized for vLLM serving and HF Trainer finetuning ☆37 · Updated last month
- Ring-attention experiments ☆128 · Updated 5 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆69 · Updated last month
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆110 · Updated 3 months ago
- The driver for LMCache core to run in vLLM ☆35 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆232 · Updated 2 weeks ago
- Batched LoRAs ☆340 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" ☆63 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- Collection of kernels written in the Triton language ☆114 · Updated last month
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆224 · Updated this week
- ☆49 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆237 · Updated this week
- Applied AI experiments and examples for PyTorch ☆249 · Updated this week
- LLM Serving Performance Evaluation Harness ☆70 · Updated last month
- Easy and Efficient Quantization for Transformers ☆193 · Updated last month
- ☆158 · Updated last month
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆122 · Updated 7 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (a minimal sketch of this pattern follows the list) ☆65 · Updated 11 months ago
- ☆62 · Updated 3 weeks ago
- Data preparation code for the Amber 7B LLM ☆86 · Updated 10 months ago
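As referenced in the vLLM + Ray Serve entry above, here is a minimal sketch of that integration pattern, assuming Ray Serve and vLLM's offline `LLM` API. It is not the linked repo's actual code; the model name and sampling settings are placeholders:

```python
# A minimal sketch, assuming Ray Serve and vLLM's offline LLM API; not the
# linked repo's actual code. Model name and sampling settings are placeholders.
from ray import serve
from vllm import LLM, SamplingParams

@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self, model_name: str):
        # vLLM loads the model onto the GPU assigned to this Serve replica.
        self.llm = LLM(model=model_name)

    async def __call__(self, request):
        # Ray Serve passes the HTTP request as a Starlette Request object.
        prompt = (await request.json())["prompt"]
        params = SamplingParams(temperature=0.7, max_tokens=128)
        outputs = self.llm.generate([prompt], params)
        return {"text": outputs[0].outputs[0].text}

app = VLLMDeployment.bind("facebook/opt-125m")
# serve.run(app)  # exposes an HTTP endpoint; POST {"prompt": "..."} to query it
```

Ray Serve handles replication and request routing, while vLLM's continuous batching keeps each GPU replica saturated; scaling out is a matter of raising the deployment's replica count.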