NVIDIA / trt-llm-as-openai-windows
This reference implementation lets any existing OpenAI-integrated app run TRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆122 · Updated last year
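As a minimal sketch of what "OpenAI-compatible" means here: an app builds the same `/chat/completions` request body it would send to the OpenAI API, but posts it to a local server instead. The endpoint URL and model name below are illustrative assumptions, not values documented by this repository.

```python
import json

# Assumed local endpoint exposed by an OpenAI-compatible TRT-LLM server;
# the actual host, port, and model name depend on your setup.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt: str, model: str = "llama2-13b") -> dict:
    """Build an OpenAI-style /chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_chat_request("Hello, local LLM!")
# The same payload works against api.openai.com or a local drop-in server;
# only the base URL (and API key handling) changes on the client side.
print(json.dumps(body))
```

Because the request shape is unchanged, existing OpenAI client libraries can typically be repointed at the local server by overriding their base URL.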
Alternatives and similar repositories for trt-llm-as-openai-windows
Users interested in trt-llm-as-openai-windows are comparing it to the repositories listed below:
- ☆66 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- A pipeline parallel training script for LLMs. ☆153 · Updated 2 months ago
- Automatically quantize GGUF models ☆187 · Updated this week
- ☆115 · Updated 6 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆162 · Updated last year
- ☆157 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 10 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆156 · Updated last year
- A collection of all available inference solutions for LLMs ☆91 · Updated 4 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆209 · Updated 11 months ago
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆240 · Updated last year
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models ☆137 · Updated 11 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- ☆52 · Updated last year
- ☆101 · Updated 10 months ago
- Gradio-based tool to run open-source LLM models directly from the Hugging Face Hub ☆93 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models ☆105 · Updated last year
- ☆128 · Updated 3 months ago
- Run Ollama & GGUF models easily with a single command ☆52 · Updated last year
- Easy-to-use, high-performance knowledge distillation for LLMs ☆88 · Updated 2 months ago
- GPT-4-level conversational QA trained in a few hours ☆62 · Updated 10 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆198 · Updated last year
- Lightweight continuous-batching OpenAI compatibility using Hugging Face Transformers, including T5 and Whisper ☆26 · Updated 4 months ago
- Data preparation code for the Amber 7B LLM ☆91 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 9 months ago
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆165 · Updated last year
- ☆134 · Updated 10 months ago