NVIDIA / trt-llm-as-openai-windows
This reference can be used with any existing OpenAI-integrated apps to run TRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆126 · Updated last year
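Because the server exposes an OpenAI-compatible API, existing clients typically only need their base URL pointed at the local endpoint. The sketch below builds a standard `/chat/completions` request with only the Python standard library; the host, port, and model name (`http://localhost:8000`, `llama2`) are assumptions, since the actual values depend on how the server is launched.

```python
import json
from urllib import request

# Hypothetical local endpoint; the real host/port depend on how the
# TRT-LLM server is started (http://localhost:8000 is an assumption).
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt, model="llama2"):
    """Build an OpenAI-style /chat/completions payload (model name assumed)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

payload = build_chat_request("Hello from a local GPU!")
body = json.dumps(payload).encode("utf-8")

# Sending is sketched only; it requires the local server to be running:
req = request.Request(
    f"{BASE_URL}/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same payload works with any OpenAI-compatible client library by overriding its base URL, which is what makes a local server a drop-in replacement for the cloud API.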
Alternatives and similar repositories for trt-llm-as-openai-windows
Users interested in trt-llm-as-openai-windows are comparing it to the libraries listed below.
- A collection of all available inference solutions for LLMs ☆93 · Updated 9 months ago
- ☆51 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated 2 years ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆139 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆179 · Updated last year
- ☆164 · Updated 4 months ago
- ☆101 · Updated last year
- ☆68 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- A pipeline parallel training script for LLMs ☆164 · Updated 7 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆157 · Updated last year
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- ☆198 · Updated last year
- Comparison of Language Model Inference Engines ☆237 · Updated 11 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last week
- OpenAI compatible API for TensorRT LLM triton backend ☆218 · Updated last year
- GPT-4 Level Conversational QA Trained in a Few Hours ☆66 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- ☆78 · Updated last year
- Merge Transformers language models by use of gradient parameters ☆209 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆170 · Updated last year
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ⚡ ☆68 · Updated 3 weeks ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights ☆64 · Updated 2 years ago
- An endpoint server for efficiently serving quantized open-source LLMs for code ☆58 · Updated 2 years ago
- ☆117 · Updated 11 months ago
- ☆120 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- ☆138 · Updated 3 months ago