runpod-workers / worker-sglang
SGLang is a fast serving framework for large language models and vision language models.
☆30 · Updated 3 weeks ago
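SGLang servers expose an OpenAI-compatible HTTP API, so clients typically talk to a worker like this one via standard chat-completions requests. As a minimal sketch, the snippet below only builds and serializes such a request body; the model name is a placeholder and no endpoint URL from the worker-sglang repository is assumed.

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> str:
    """Serialize an OpenAI-style chat-completions request body.

    The field names follow the OpenAI chat-completions schema that
    SGLang's compatible server accepts; "model" here is a placeholder.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

# Build a request payload for a hypothetical deployed model.
payload = build_chat_request("placeholder-model", "Hello!")
print(payload)
```

In practice this JSON would be POSTed to the server's `/v1/chat/completions` route with an HTTP client; only the payload construction is shown here to stay self-contained.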
Alternatives and similar repositories for worker-sglang
Users interested in worker-sglang are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 weeks ago
- Efficient non-uniform quantization with GPTQ for GGUF ☆57 · Updated 3 months ago
- Google TPU optimizations for transformers models ☆125 · Updated 11 months ago
- vLLM adapter for a TGIS-compatible gRPC server ☆45 · Updated this week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆103 · Updated 7 months ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 5 months ago
- ☆51 · Updated last year
- ☆138 · Updated 4 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated 3 weeks ago
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models ☆139 · Updated last year
- Simple high-throughput inference library ☆152 · Updated 7 months ago
- Advanced ultra-low-bitrate compression techniques for the LLaMA family of LLMs ☆110 · Updated last year
- Self-host LLMs with vLLM and BentoML ☆161 · Updated 3 weeks ago
- Training-free post-training efficient sub-quadratic-complexity attention, implemented with OpenAI Triton ☆148 · Updated last month
- Data preparation code for the Amber 7B LLM ☆94 · Updated last year
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆218 · Updated last year
- A collection of all available inference solutions for LLMs ☆93 · Updated 9 months ago
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆272 · Updated this week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- ☆273 · Updated this week
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- ☆136 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆261 · Updated this week
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆245 · Updated last year
- Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training ☆101 · Updated 4 months ago