NVIDIA / trt-llm-as-openai-windows
This reference implementation can be used with any existing OpenAI-integrated app to run TensorRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.
☆122 · Updated last year
Alternatives and similar repositories for trt-llm-as-openai-windows
Users interested in trt-llm-as-openai-windows are comparing it to the libraries listed below.
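Because the project exposes an OpenAI-style HTTP API locally, an existing app only needs to point its requests at the local server instead of api.openai.com. The sketch below shows that pattern using only the Python standard library; the port, path, and model name are assumptions, not taken from the repository.

```python
import json
import urllib.request

# Hypothetical local endpoint: trt-llm-as-openai-windows serves an
# OpenAI-style API on localhost (the port here is an assumption).
BASE_URL = "http://localhost:8000/v1"


def build_chat_request(prompt: str, model: str = "llama-2-13b-chat") -> dict:
    """Build an OpenAI-style chat-completions payload (model name is a placeholder)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Apps built on the official OpenAI client libraries can achieve the same switch by overriding the client's base URL, so no request-level code changes are needed.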
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆209 · Updated 10 months ago
- ☆124 · Updated 2 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆137 · Updated 11 months ago
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆162 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- ☆53 · Updated last year
- This is our own implementation of "Layer-Selective Rank Reduction" ☆239 · Updated last year
- Deploy a light or full OpenAI API in production with vLLM, supporting /v1/embeddings with all embedding models ☆42 · Updated 11 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆80 · Updated last month
- ☆66 · Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆304 · Updated 3 weeks ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 10 months ago
- Set of scripts to fine-tune LLMs ☆37 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- QLoRA with enhanced multi-GPU support ☆37 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Experiments with inference on LLaMA ☆104 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 9 months ago
- ☆76 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- LLM-Training-API: including embeddings & rerankers, mergekit, LaserRMT ☆27 · Updated last year
- Data-preparation code for the Amber 7B LLM ☆91 · Updated last year
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated 2 years ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answe… ☆155 · Updated last year
- Easy-to-use, high-performance knowledge distillation for LLMs ☆86 · Updated last month
- GPT-4-level conversational QA trained in a few hours ☆62 · Updated 10 months ago
- ☆114 · Updated 6 months ago
- Inference code for LLaMA models ☆42 · Updated 2 years ago
- A collection of all available inference solutions for LLMs ☆90 · Updated 3 months ago