NVIDIA / trt-llm-as-openai-windows
This reference implementation lets any existing OpenAI-integrated app run TensorRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆128 · Updated last year
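Because the project exposes an OpenAI-compatible endpoint, an existing OpenAI-integrated app only needs its base URL redirected to the local server. Below is a minimal sketch using the official `openai` Python package; the port, path, and model name are assumptions and depend on how the local TRT-LLM server is actually launched.

```python
# Minimal sketch: redirect an existing OpenAI client to a local TRT-LLM server.
# The base URL, port, and model name below are assumptions; adjust them to
# match the launch configuration of the local server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed",                 # a local server typically ignores the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize TensorRT-LLM in one sentence."}],
)
print(response.choices[0].message.content)
```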
Alternatives and similar repositories for trt-llm-as-openai-windows
Users interested in trt-llm-as-openai-windows are comparing it to the libraries listed below.
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated last year
- A collection of all available inference solutions for the LLMs ☆91 · Updated 6 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆176 · Updated last year
- ☆161 · Updated last month
- ☆67 · Updated last year
- ☆102 · Updated last year
- ☆199 · Updated last year
- A pipeline parallel training script for LLMs. ☆159 · Updated 4 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆137 · Updated last year
- ☆51 · Updated last year
- OpenAI compatible API for TensorRT LLM triton backend ☆214 · Updated last year
- ☆116 · Updated 9 months ago
- run ollama & gguf easily with a single command ☆52 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆107 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆158 · Updated last year
- automatically quant GGUF models ☆200 · Updated this week
- GPT-4 Level Conversational QA Trained In a Few Hours ☆64 · Updated last year
- ☆135 · Updated 3 weeks ago
- Scripts to create your own moe models using mlx ☆90 · Updated last year
- A fast batching API to serve LLM models ☆187 · Updated last year
- Deployment a light and full OpenAI API for production with vLLM to support /v1/embeddings with all embeddings models. ☆42 · Updated last year
- FineTune LLMs in few lines of code (Text2Text, Text2Speech, Speech2Text) ☆242 · Updated last year
- Easy to use, High Performant Knowledge Distillation for LLMs ☆92 · Updated 4 months ago
- ☆118 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆170 · Updated last year
- ☆77 · Updated last year