vllm-project / vllm-nccl
Manages vllm-nccl dependency
☆17 · Updated 8 months ago
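For context, vllm-nccl manages the NCCL shared library that vLLM loads at runtime. Below is a minimal sketch of how a setup step might point vLLM at a pre-downloaded NCCL build; the search directory `~/.config/vllm/nccl` and the use of the `VLLM_NCCL_SO_PATH` override are assumptions for illustration, not taken from this page, and the variable would need to be set before importing vllm.

```python
# Hedged sketch: point vLLM at an NCCL shared library assumed to have been
# downloaded by vllm-nccl. The directory below and the reliance on the
# VLLM_NCCL_SO_PATH override are assumptions for illustration.
import glob
import os

# Directory where the pinned NCCL build is assumed to live (assumption).
nccl_dir = os.path.expanduser("~/.config/vllm/nccl")

# Pick the first libnccl.so.* found under that directory, if any.
candidates = sorted(
    glob.glob(os.path.join(nccl_dir, "**", "libnccl.so.*"), recursive=True)
)
if candidates:
    # Tell vLLM to load this exact NCCL build instead of the system one.
    os.environ["VLLM_NCCL_SO_PATH"] = candidates[0]
    print(f"Using NCCL at {candidates[0]}")
else:
    print("No pinned NCCL found; vLLM will fall back to the system NCCL.")
```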
Alternatives and similar repositories for vllm-nccl:
Users interested in vllm-nccl are comparing it to the libraries listed below.
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆35 · Updated 3 months ago
- Vocabulary Parallelism ☆17 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 8 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆22 · Updated 8 months ago
- Fast LLM training codebase with dynamic strategy choosing [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆36 · Updated last year
- Decoding Attention is specially optimized for multi-head attention (MHA) using CUDA cores for the decoding stage of LLM inference ☆29 · Updated 3 months ago
- Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) ☆28 · Updated last year
- TensorRT-LLM Benchmark Configuration ☆13 · Updated 6 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆84 · Updated 4 months ago
- ☆23 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆38 · Updated 11 months ago
- LLMem: GPU Memory Estimation for Fine-Tuning Pre-Trained LLMs ☆17 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated 10 months ago
- Transformers components but in Triton ☆31 · Updated 3 months ago
- ☆22 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- Distributed IO-aware Attention algorithm ☆18 · Updated 5 months ago
- ☆45 · Updated 3 months ago
- Linear Attention Sequence Parallelism (LASP) ☆77 · Updated 8 months ago
- GPTQ inference TVM kernel ☆38 · Updated 9 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆128 · Updated 8 months ago
- Contextual Position Encoding, but with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated 8 months ago
- ☆18 · Updated last week
- ☆14 · Updated last year
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆13 · Updated last year
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆34 · Updated last month