janhq / cortex.tensorrt-llm
Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU-accelerated inference on NVIDIA GPUs.
☆42 · Updated last year
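Both this project and cortex.llamacpp below describe themselves as dynamic libraries that a host server loads at runtime. A minimal sketch of that loading pattern, assuming a POSIX dlopen-style interface; the library path and the `engine_create` symbol are illustrative assumptions, not the actual cortex API:

```cpp
// Hypothetical sketch: how a server might load a runtime-pluggable
// inference engine as a shared library. Symbol names are assumptions
// for illustration only.
#include <dlfcn.h>
#include <cstdio>

int main() {
    // Open the shared object; the path is illustrative.
    void* handle = dlopen("./libengine.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Resolve an assumed entry point exported by the engine.
    using create_fn = void* (*)();
    auto create = reinterpret_cast<create_fn>(dlsym(handle, "engine_create"));
    if (!create) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    void* engine = create();  // engine instance owned by the library
    std::printf("engine loaded at %p\n", engine);

    dlclose(handle);
    return 0;
}
```

A server built this way can swap inference backends (TensorRT-LLM, llama.cpp) without being recompiled, which is the point of shipping the engine as a loadable library.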
Alternatives and similar repositories for cortex.tensorrt-llm
Users interested in cortex.tensorrt-llm are comparing it to the libraries listed below.
- ☆108 · Updated 4 months ago
- AirLLM 70B inference with a single 4GB GPU ☆14 · Updated 5 months ago
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on a GeForce GPU on Windows inste… ☆126 · Updated last year (a local OpenAI-compatible request is sketched after this list)
- Automatically quantize GGUF models ☆218 · Updated last month
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- A fast batching API to serve LLMs ☆189 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆47 · Updated last month
- LLM inference in C/C++ ☆103 · Updated this week
- Tcurtsni: Reverse Instruction Chat, ever wonder what your LLM wants to ask you? ☆23 · Updated last year
- Lightweight continuous batching OpenAI compatibility using Hugging Face Transformers, including T5 and Whisper. ☆29 · Updated 9 months ago
- Experimental LLM inference UX to aid in creative writing ☆127 · Updated last year
- Something similar to Apple Intelligence? ☆61 · Updated last year
- Run ollama & GGUF models easily with a single command ☆52 · Updated last year
- Gradio-based tool to run open-source LLMs directly from Hugging Face ☆96 · Updated last year
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆53 · Updated last year
- ☆164 · Updated 4 months ago
- llama.cpp fork used by GPT4All ☆55 · Updated 10 months ago
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆41 · Updated 5 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- 1.58-bit LLaMa model ☆83 · Updated last year
- Easily view and modify JSON datasets for large language models ☆84 · Updated 7 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆85 · Updated this week
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ and EXL2 models ☆76 · Updated last year
- Python package wrapping llama.cpp for on-device LLM inference ☆95 · Updated 2 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆72 · Updated last year
- An extension that lets the AI take the wheel, allowing it to use the mouse and keyboard, recognize UI elements, and prompt itself :3...no… ☆127 · Updated last year
- ☆127 · Updated last year
- A pipeline parallel training script for LLMs. ☆164 · Updated 7 months ago
- Testing LLM reasoning abilities with family relationship quizzes. ☆63 · Updated 10 months ago
- ☆24 · Updated 10 months ago
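Several entries above (the TRT-LLM OpenAI reference for Windows, the continuous-batching Transformers server) expose OpenAI-compatible endpoints so existing OpenAI-integrated apps can run against local inference. A minimal sketch of redirecting such a client, using libcurl; the URL, port, and model name are placeholder assumptions, not values from any of these projects:

```cpp
// Hypothetical sketch: POST a chat completion request to a local
// OpenAI-compatible server instead of api.openai.com.
#include <curl/curl.h>
#include <cstdio>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Request body in the standard OpenAI chat-completions format;
    // the model name is a placeholder.
    const char* body =
        "{\"model\":\"local-model\","
        "\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}";

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    // Point the client at the local server; host and port are assumed.
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://localhost:8000/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    CURLcode rc = curl_easy_perform(curl);  // response prints to stdout
    if (rc != CURLE_OK)
        std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Swapping the base URL is the whole trick: the request format is unchanged, so the same client code works against the hosted API or a local engine.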