BlackSamorez / tensor_parallel
Automatically split your PyTorch models on multiple GPUs for training & inference
☆626 · Updated 10 months ago
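For context, the project's documented workflow is to load an ordinary 🤗 transformers model and wrap it with `tensor_parallel.tensor_parallel()`, which shards the weights across the listed devices. Below is a minimal sketch, assuming the `tensor_parallel` package is installed and two CUDA GPUs are visible; the model name and device list are illustrative.

```python
# Minimal usage sketch for tensor_parallel (assumes the package is installed and
# at least two CUDA devices are available; model name and devices are illustrative).
import transformers
import tensor_parallel as tp

tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = transformers.AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

# Shard the model's weights across the listed GPUs (tensor parallelism).
model = tp.tensor_parallel(model, ["cuda:0", "cuda:1"])

# Inference works as usual; inputs go to the first shard's device.
inputs = tokenizer("A cat sat on a mat and", return_tensors="pt")["input_ids"].to("cuda:0")
outputs = model.generate(inputs, num_beams=1, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

According to the project description, the wrapped model remains a regular PyTorch module, so the same pattern also applies to training.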
Related projects
Alternatives and complementary repositories for tensor_parallel
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆435 · Updated 6 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,149 · Updated last month
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆558 · Updated 8 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆649 · Updated 3 months ago
- Fast Inference Solutions for BLOOM ☆560 · Updated last month
- Transformers with Arbitrarily Large Context ☆641 · Updated 3 months ago
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) ☆826 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆647 · Updated last month
- Ring attention implementation with flash attention ☆585 · Updated last week
- FlashInfer: Kernel Library for LLM Serving ☆1,452 · Updated this week
- distributed trainer for LLMs ☆545 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training ☆1,260 · Updated this week
- Official PyTorch implementation of QA-LoRA ☆117 · Updated 8 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆874 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆624 · Updated 2 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆1,941 · Updated 7 months ago
- Scalable toolkit for efficient model alignment ☆620 · Updated this week
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆384 · Updated 6 months ago
- Serving multiple LoRA finetuned LLMs as one ☆984 · Updated 6 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,338 · Updated 8 months ago
- Pipeline Parallelism for PyTorch ☆726 · Updated 3 months ago
- Train LLaMA on a single A100 80GB node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism ☆207 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,353 · Updated 7 months ago
- Microsoft Automatic Mixed Precision Library ☆525 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆476 · Updated 3 weeks ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆357 · Updated this week
- Fast inference from large language models via speculative decoding ☆569 · Updated 2 months ago