runpod-workers / worker-sglang
SGLang is a fast serving framework for large language models and vision language models.
☆23 · Updated 5 months ago
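For context on what the alternatives below are compared against: SGLang exposes an OpenAI-compatible HTTP API once a server is launched. The following is a minimal sketch of querying such a server; the model name, port, and prompt are illustrative assumptions, not details taken from this listing.

```python
# Minimal sketch of querying a locally running SGLang server.
# Assumption: a server was started beforehand with something like
#   python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000
# and the model name below matches whatever was actually loaded.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",  # SGLang's OpenAI-compatible route
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # illustrative model name
        "messages": [{"role": "user", "content": "Summarize SGLang in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```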
Alternatives and similar repositories for worker-sglang
Users interested in worker-sglang are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 9 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆76 · Updated last week
- Google TPU optimizations for transformers models ☆114 · Updated 5 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆81 · Updated last month
- ☆128 · Updated 3 months ago
- RWKV-7: Surpassing GPT ☆92 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 11 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 2 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- ☆52 · Updated last year
- Simple high-throughput inference library ☆120 · Updated 2 months ago
- ☆134 · Updated 10 months ago
- Data preparation code for Amber 7B LLM ☆91 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference via token merging. ☆36 · Updated last year
- ☆173 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆156 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server. ☆33 · Updated this week
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆137 · Updated 11 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆153 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 4 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- KV cache compression for high-throughput LLM inference ☆132 · Updated 5 months ago
- ☆214 · Updated 5 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆211 · Updated this week
- 1.58-bit LLaMa model ☆81 · Updated last year
- Train your own SOTA deductive reasoning model ☆96 · Updated 4 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆54 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Inference server benchmarking tool ☆79 · Updated 2 months ago