NVIDIA / trt-llm-as-openai-windows
This reference implementation lets existing OpenAI-integrated apps run TRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆127 · Updated last year
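The point of an OpenAI-compatible local server is that an app's existing request shape does not change, only the host it talks to. Below is a minimal, stdlib-only sketch of the `/v1/chat/completions` request such a server expects; the host, port, and model name are assumptions, not values fixed by this project:

```python
import json
import urllib.request

# Assumed local endpoint -- match whatever host/port your local
# TRT-LLM (or other OpenAI-compatible) server actually exposes.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build (but do not send) a standard /v1/chat/completions request.

    This is the same payload an OpenAI-integrated app would send to the
    cloud; only the URL differs. "local-model" is a placeholder id.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In practice, apps built on the official `openai` client library usually need nothing more than their `base_url` switched to the local endpoint.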
Alternatives and similar repositories for trt-llm-as-openai-windows
Users interested in trt-llm-as-openai-windows are comparing it to the libraries listed below.
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models. ☆139 · Updated last year
- ☆102 · Updated last year
- A collection of all available inference solutions for LLMs. ☆91 · Updated 8 months ago
- ☆51 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Automatically quantize GGUF models. ☆214 · Updated 3 weeks ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub. ☆160 · Updated 2 years ago
- Low-Rank adapter extraction for fine-tuned transformers models. ☆179 · Updated last year
- ☆163 · Updated 3 months ago
- This is our own implementation of 'Layer Selective Rank Reduction'. ☆239 · Updated last year
- A pipeline parallel training script for LLMs. ☆162 · Updated 6 months ago
- ☆138 · Updated 3 months ago
- ☆67 · Updated last year
- Deploys a light, full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆44 · Updated last year
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… ☆42 · Updated last year
- Run Ollama & GGUF models easily with a single command. ☆52 · Updated last year
- Easy-to-use, high-performance knowledge distillation for LLMs. ☆95 · Updated 6 months ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆176 · Updated last year
- GPT-4 level conversational QA trained in a few hours. ☆65 · Updated last year
- ☆119 · Updated last year
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- GPTQLoRA: Efficient finetuning of quantized LLMs with GPTQ. ☆101 · Updated 2 years ago
- An innovative library for efficient LLM inference via low-bit quantization. ☆349 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆157 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- ☆78 · Updated last year
- ☆116 · Updated 11 months ago
- A fast batching API for serving LLMs. ☆188 · Updated last year
- Unsloth Studio. ☆116 · Updated 7 months ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆29 · Updated 8 months ago