menloresearch / cortex.tensorrt-llm
Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU-accelerated inference on NVIDIA GPUs.
☆43 · Updated 7 months ago
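A recurring pattern in this list: the inference engine is built as a plain dynamic library and the host server loads it at runtime. Below is a minimal C++ sketch of that loading pattern, assuming a hypothetical `libengine.so` exporting a `create_engine` factory; the names and signature are illustrative, not cortex.tensorrt-llm's actual API.

```cpp
// Minimal sketch of a server loading an inference engine at runtime.
// "libengine.so" and "create_engine" are hypothetical names, not the
// actual exports of cortex.tensorrt-llm.
#include <dlfcn.h>
#include <cstdio>

int main() {
    // Open the engine library; RTLD_NOW resolves all symbols up front.
    void* handle = dlopen("./libengine.so", RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Resolve a hypothetical factory function exported by the library.
    using create_fn = void* (*)();
    auto create_engine =
        reinterpret_cast<create_fn>(dlsym(handle, "create_engine"));
    if (!create_engine) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    void* engine = create_engine();  // the server would route requests here
    (void)engine;
    dlclose(handle);
    return 0;
}
```

The upside of this design is that the server process stays engine-agnostic: swapping TensorRT-LLM for llama.cpp is a matter of loading a different library, as the cortex.llamacpp entry below illustrates.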
Alternatives and similar repositories for cortex.tensorrt-llm
Users interested in cortex.tensorrt-llm are comparing it to the libraries listed below.
- An OpenAI API compatible LLM inference server based on ExLlamaV2 (a minimal request sketch appears after this list). ☆25 · Updated last year
- ☆89 · Updated 4 months ago
- Automatically quantize GGUF models. ☆175 · Updated this week
- Testing LLM reasoning abilities with family-relationship quizzes. ☆62 · Updated 3 months ago
- GPT-4-level conversational QA trained in a few hours. ☆61 · Updated 8 months ago
- Lightweight continuous-batching OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆22 · Updated 2 months ago
- Run ollama & GGUF models easily with a single command. ☆50 · Updated last year
- Tcurtsni: reverse-instruction chat; ever wonder what your LLM wants to ask you? ☆22 · Updated 10 months ago
- LLM inference in C/C++. ☆76 · Updated this week
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆54 · Updated 8 months ago
- ☆66 · Updated 11 months ago
- 1.58-bit LLaMa model. ☆81 · Updated last year
- A pipeline-parallel training script for LLMs. ☆145 · Updated 2 weeks ago
- A proxy that hosts multiple single-model runners such as llama.cpp and vLLM. ☆12 · Updated last month
- Easily view and modify JSON datasets for large language models. ☆75 · Updated 2 months ago
- ☆53 · Updated 11 months ago
- Local LLM inference & management server with built-in OpenAI API. ☆31 · Updated last year
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime. ☆40 · Updated this week
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio. ☆36 · Updated last year
- ☆24 · Updated 3 months ago
- A quick and optimized solution to manage llama-based GGUF quantized models, download GGUF files, retrieve message formatting, add more mo… ☆12 · Updated last year
- ☆18 · Updated 5 months ago
- Run multiple resource-heavy large models (LMs) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆61 · Updated this week
- Gradio-based tool to run open-source LLMs directly from Hugging Face. ☆91 · Updated 10 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API. ☆71 · Updated 8 months ago
- This reference can be used with any existing OpenAI-integrated apps to run TRT-LLM inference locally on a GeForce GPU on Windows inste… ☆120 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆115 · Updated 11 months ago
- B-Llama3o: a LLaMA-3 variant with vision and audio understanding, as well as text, audio, and animation data output. ☆26 · Updated 11 months ago
- ☆114 · Updated 4 months ago
- Guaranteed structured output from any language model via hierarchical state machines. ☆128 · Updated 2 weeks ago
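Several entries above advertise OpenAI API compatibility, which in practice means the server exposes the standard `/v1/chat/completions` route so existing OpenAI-integrated apps work unchanged. Here is a minimal request sketch in C++ using libcurl; the base URL `http://localhost:8000` and the model name `local-model` are assumptions for illustration, not defaults of any listed server.

```cpp
// Minimal sketch of a client hitting an OpenAI-compatible endpoint.
// The base URL and model name below are illustrative assumptions.
// Build with: g++ client.cpp -lcurl
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append each received chunk to a std::string.
static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Standard OpenAI chat-completions payload.
    const char* body =
        R"({"model":"local-model","messages":[{"role":"user","content":"Hello"}]})";

    std::string response;
    curl_slist* headers =
        curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://localhost:8000/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    if (curl_easy_perform(curl) == CURLE_OK)
        std::cout << response << "\n";  // raw JSON completion
    else
        std::cerr << "request failed\n";

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Because only the base URL differs between servers, the same client code works against any of the OpenAI-compatible entries in this list.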