NVIDIA / trt-llm-as-openai-windows
This reference implementation lets existing OpenAI-integrated apps run TensorRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆126 · Updated last year
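Because the server exposes an OpenAI-compatible API, an existing app only needs its base URL repointed at the local machine. Below is a minimal sketch using the official openai Python client; the port, endpoint path, and model name are assumptions for illustration, so check the repository's README for the values the server actually uses.

```python
# Minimal sketch: repoint an OpenAI-integrated app at the local TRT-LLM server.
# The base URL, port, and model name are assumptions; see the repo's README.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed",                 # a local server typically ignores the key
)

resp = client.chat.completions.create(
    model="llama-2-13b-chat",  # placeholder; use whatever model the server loads
    messages=[{"role": "user", "content": "Say hello from my GeForce GPU."}],
)
print(resp.choices[0].message.content)
```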
Alternatives and similar repositories for trt-llm-as-openai-windows
Users interested in trt-llm-as-openai-windows are comparing it to the libraries listed below.
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆140 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆160 · Updated 2 years ago
- A collection of all available inference solutions for LLMs ☆91 · Updated 7 months ago
- Automatically quantize GGUF models ☆214 · Updated last week
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- ☆67 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… ☆42 · Updated last year
- ☆162 · Updated 2 months ago
- ☆102 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Run Ollama & GGUF models easily with a single command ☆52 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆106 · Updated last year
- ☆116 · Updated 10 months ago
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- Our own implementation of 'Layer-Selective Rank Reduction' ☆239 · Updated last year
- 1.58-bit LLaMa model ☆83 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated last year
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆215 · Updated last year
- A pipeline-parallel training script for LLMs. ☆159 · Updated 6 months ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer… ☆158 · Updated last year
- ☆197 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year
- Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B ☆129 · Updated last year
- ☆51 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- ☆136 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year