BlackSamorez / tensor_parallel
Automatically split your PyTorch models on multiple GPUs for training & inference
☆652 · Updated last year
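The library wraps an existing PyTorch model in a single call so that its weight matrices are sharded across the listed devices. A minimal sketch, assuming the `tp.tensor_parallel` entry point documented in the repo's README and a Hugging Face causal-LM checkpoint (`facebook/opt-13b` here is just an illustrative choice):

```python
# Minimal sketch: shard a Hugging Face model across two GPUs with tensor_parallel.
# Assumes the tp.tensor_parallel(model, device_ids) entry point from the README.
import torch
import transformers
import tensor_parallel as tp

tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-13b")
model = transformers.AutoModelForCausalLM.from_pretrained("facebook/opt-13b")

# Split the model's parameters across the given devices for training or inference.
model = tp.tensor_parallel(model, ["cuda:0", "cuda:1"])

inputs = tokenizer("A cat sat on a mat", return_tensors="pt")["input_ids"].to("cuda:0")
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```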
Alternatives and similar repositories for tensor_parallel:
Users interested in tensor_parallel are comparing it to the libraries listed below.
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆598 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,242 · Updated last month
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆686 · Updated 8 months ago
- Large Context Attention ☆704 · Updated 3 months ago
- Ring attention implementation with flash attention ☆743 · Updated 2 weeks ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆452 · Updated last year
- Distributed trainer for LLMs ☆572 · Updated 11 months ago
- Fast Inference Solutions for BLOOM ☆561 · Updated 6 months ago
- ☆543 · Updated 4 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆715 · Updated 6 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆803 · Updated 7 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,386 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆593 · Updated 6 months ago
- Pipeline Parallelism for PyTorch ☆764 · Updated 8 months ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed Pipeline Parallelism ☆218 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆395 · Updated 11 months ago
- Scalable toolkit for efficient model alignment ☆770 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,049 · Updated last month
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" ☆2,093 · Updated last year
- LOMO: LOw-Memory Optimization ☆985 · Updated 9 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,392 · Updated 9 months ago
- Serving multiple LoRA-finetuned LLMs as one ☆1,054 · Updated 11 months ago
- Official PyTorch implementation of QA-LoRA ☆131 · Updated last year
- Official implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,183 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆477 · Updated this week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆696 · Updated this week
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆363 · Updated last year
- Fast inference from large language models via speculative decoding ☆714 · Updated 8 months ago
- Batched LoRAs ☆341 · Updated last year