menloresearch / cortex.tensorrt-llm
Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It includes NVIDIA's TensorRT-LLM as a submodule for GPU-accelerated inference on NVIDIA GPUs.
☆43 · Updated 6 months ago
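To make "loaded by any server at runtime" concrete, here is a minimal sketch (not the actual cortex.tensorrt-llm API) of how a host server might load a C++ inference engine as a shared library on Linux with `dlopen`; the `create_engine` symbol and the `EngineHandle` type are hypothetical placeholders.

```cpp
// Minimal sketch of runtime-loading a C++ inference engine on Linux.
// The library path, "create_engine" symbol, and EngineHandle type are
// hypothetical; they only illustrate the load-at-runtime pattern.
#include <dlfcn.h>
#include <cstdio>

struct EngineHandle;                          // opaque handle owned by the engine
using CreateEngineFn = EngineHandle* (*)();   // hypothetical factory signature

int main() {
    // Load the engine shared object chosen by the server at runtime.
    void* lib = dlopen("./libengine.so", RTLD_NOW | RTLD_LOCAL);
    if (!lib) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Resolve the hypothetical factory symbol and create an engine instance.
    auto create_engine =
        reinterpret_cast<CreateEngineFn>(dlsym(lib, "create_engine"));
    if (!create_engine) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(lib);
        return 1;
    }
    EngineHandle* engine = create_engine();
    (void)engine;  // ... run inference through the handle, then clean up ...

    dlclose(lib);
    return 0;
}
```

The point of the pattern is that the host process resolves the engine's symbols at runtime instead of linking against it at build time, so the same server binary can swap inference backends.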
Alternatives and similar repositories for cortex.tensorrt-llm:
Users interested in cortex.tensorrt-llm are comparing it to the libraries listed below.
- A proxy that hosts multiple single-model runners such as llama.cpp and vLLM ☆12 · Updated 3 weeks ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆22 · Updated last month
- A fast batching API to serve LLM models ☆182 · Updated 11 months ago
- Run ollama & gguf easily with a single command ☆50 · Updated 11 months ago
- ☆84 · Updated 4 months ago
- ☆24 · Updated 3 months ago
- ☆112 · Updated 4 months ago
- Local character AI chatbot with Chroma vector store memory and some scripts to process documents for Chroma ☆33 · Updated 6 months ago
- ☆46 · Updated 2 months ago
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆54 · Updated 8 months ago
- I'll be your machinery. ☆14 · Updated this week
- ☆53 · Updated 10 months ago
- Tcurtsni: Reverse Instruction Chat, ever wonder what your LLM wants to ask you? ☆22 · Updated 9 months ago
- Demo of an "always-on" AI assistant. ☆24 · Updated last year
- Easily view and modify JSON datasets for large language models ☆74 · Updated last month
- Accepts a Hugging Face model URL, automatically downloads and quantizes it using Bits and Bytes. ☆38 · Updated last year
- A Windows tool to query various LLM AIs. Supports branched conversations, history and summaries among others. ☆30 · Updated last week
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆71 · Updated 7 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆56 · Updated 2 months ago
- BUD-E (Buddy) is an open-source voice assistant framework that facilitates seamless interaction with AI models and APIs, enabling the cre… ☆33 · Updated 9 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 7 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆171 · Updated 11 months ago
- B-Llama3o: a Llama 3 with vision and audio understanding, as well as text, audio, and animation data output. ☆26 · Updated 10 months ago
- A pipeline parallel training script for LLMs. ☆137 · Updated 3 weeks ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆11 · Updated last year
- Kosmos-2.5 is a cutting-edge Multimodal-LLM (MLLM) specializing in image OCR. However, its stringent software requirements & Python-scrip… ☆59 · Updated 9 months ago
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- LLM inference in C/C++ ☆21 · Updated last month
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated last year