NVIDIA / trt-llm-as-openai-windows
This reference can be used with any existing OpenAI-integrated app to run TRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆120 · Updated last year
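Being "OpenAI integrated" means an app keeps using the standard OpenAI chat-completions request shape and only changes the base URL to point at the local server. A minimal sketch of that idea, using only the standard library; the host, port, and model name below are illustrative assumptions, not values taken from this page:

```python
import json
import urllib.request

# Assumed local endpoint exposed by an OpenAI-compatible TRT-LLM server;
# host, port, and model name are illustrative assumptions.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build a standard OpenAI-style /chat/completions request aimed at the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

An existing app built on the official OpenAI client would make the same switch by setting its base URL to the local address; the request body is unchanged.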
Alternatives and similar repositories for trt-llm-as-openai-windows
Users interested in trt-llm-as-openai-windows are comparing it to the libraries listed below.
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated 8 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆238 · Updated 11 months ago
- ☆120 · Updated last month
- A pipeline parallel training script for LLMs. ☆145 · Updated 2 weeks ago
- Deploys a light, full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆42 · Updated 9 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆173 · Updated last year
- Data preparation code for Amber 7B LLM ☆89 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆162 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆263 · Updated 7 months ago
- Automatically quantize GGUF models ☆175 · Updated this week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 7 months ago
- ☆66 · Updated 11 months ago
- Experiments with inference on llama ☆104 · Updated 11 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆86 · Updated this week
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Comparison of Language Model Inference Engines ☆217 · Updated 4 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆136 · Updated 9 months ago
- OpenAI compatible API for TensorRT LLM triton backend ☆205 · Updated 9 months ago
- ☆101 · Updated 8 months ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆151 · Updated 5 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆199 · Updated 9 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆300 · Updated this week
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆43 · Updated 7 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆273 · Updated 10 months ago
- Google TPU optimizations for transformers models ☆109 · Updated 3 months ago
- ☆117 · Updated 8 months ago
- Easy and Efficient Quantization for Transformers ☆197 · Updated 3 months ago
- ☆75 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year