NVIDIA / trt-llm-as-openai-windows
This reference project lets existing OpenAI-integrated apps run TRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆117 · Updated 10 months ago
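Since the project exposes an OpenAI-compatible HTTP endpoint served locally, existing clients only need to be pointed at it. Below is a minimal sketch using only the Python standard library; the localhost URL and model name are assumptions for illustration, not values taken from this repository (check its README for the actual endpoint and model):

```python
# Sketch of calling a local OpenAI-compatible chat endpoint.
# BASE_URL and the model name are hypothetical placeholders.
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # assumed local server address


def build_chat_request(prompt, model="llama-2-13b-chat"):
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(prompt):
    """POST the request to the local endpoint and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Hello from my local GPU!"))
```

Because the request and response shapes follow the OpenAI API, the official `openai` client library would work the same way by setting its `base_url` to the local server.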
Alternatives and similar repositories for trt-llm-as-openai-windows:
Users interested in trt-llm-as-openai-windows are comparing it to the libraries listed below.
- A fast batching API to serve LLM models ☆177 · Updated 8 months ago
- A pipeline parallel training script for LLMs. ☆116 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆183 · Updated last month
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆138 · Updated 5 months ago
- ☆65 · Updated 7 months ago
- ☆151 · Updated 6 months ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆126 · Updated last month
- GPT-4 Level Conversational QA Trained In a Few Hours ☆58 · Updated 4 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆165 · Updated 8 months ago
- ☆150 · Updated this week
- ☆108 · Updated 3 months ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆156 · Updated last year
- Tutorial for building LLM router ☆170 · Updated 5 months ago
- autologic is a Python package that implements the SELF-DISCOVER framework proposed in the paper SELF-DISCOVER: Large Language Models Self… ☆57 · Updated 10 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆88 · Updated this week
- ☆122 · Updated 4 months ago
- experiments with inference on llama ☆104 · Updated 7 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆232 · Updated 7 months ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated 3 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆268 · Updated 6 months ago
- ☆107 · Updated 3 weeks ago
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆44 · Updated 3 months ago
- ☆195 · Updated 7 months ago
- automatically quant GGUF models ☆150 · Updated this week
- A Lightweight Library for AI Observability ☆229 · Updated this week
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆82 · Updated last week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 3 months ago
- An NVIDIA AI Workbench example project for Retrieval Augmented Generation (RAG) ☆289 · Updated last month
- Utils for Unsloth ☆27 · Updated this week
- an implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year