wangcx18 / llm-vscode-inference-server
An endpoint server for efficiently serving quantized open-source LLMs for code.
☆58 · Updated 2 years ago
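Servers in the llm-vscode family typically accept Hugging Face Inference API-shaped requests from the editor extension. Below is a minimal client sketch for such a deployment; the host, port, route, and payload fields are assumptions about a typical local setup, not this project's documented API.

```python
import requests

# Hypothetical endpoint: llm-vscode-style servers commonly accept an
# HF Inference API-shaped payload. Host, port, and route are assumptions.
URL = "http://localhost:8000/api/generate/"

payload = {
    "inputs": "def fibonacci(n):",  # code prefix to complete
    "parameters": {
        "max_new_tokens": 64,       # cap the completion length
        "temperature": 0.2,         # low temperature suits code completion
    },
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # HF-style APIs usually return {"generated_text": "..."}
```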
Alternatives and similar repositories for llm-vscode-inference-server
Users interested in llm-vscode-inference-server are comparing it to the libraries listed below.
- starcoder server for huggingface-vscode custom endpoint ☆179 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- ☆198 · Updated last year
- Client-side toolkit for using large language models, including where self-hosted ☆114 · Updated 3 weeks ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Locally running LLM with internet access ☆97 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the client sketch after this list) ☆53 · Updated 2 years ago
- ☆134 · Updated last month
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆118 · Updated last year
- run ollama & gguf easily with a single command ☆52 · Updated last year
- Visual Studio Code extension for WizardCoder ☆148 · Updated 2 years ago
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago
- A python package for developing AI applications with local LLMs. ☆151 · Updated last year
- ☆119 · Updated last year
- Easily create LLM automation/agent workflows ☆60 · Updated last year
- ☆68 · Updated last year
- Gradio-based tool to run open-source LLM models directly from Huggingface ☆96 · Updated last year
- A fast batching API to serve LLM models ☆188 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆38 · Updated 2 years ago
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on GeForce GPU on Windows inste… ☆127 · Updated last year
- 🚀 Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform ☆38 · Updated last year
- Unsloth Studio ☆122 · Updated 9 months ago
- A simple experiment on letting two local LLMs have a conversation about anything! ☆112 · Updated last year
- Embed anything. ☆27 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆72 · Updated last year
- Host the GPTQ model using AutoGPTQ as an API that is compatible with the text generation UI API. ☆90 · Updated 2 years ago
- Pipeline is an open-source Python SDK for building AI/ML workflows ☆138 · Updated last year
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆183 · Updated 11 months ago
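One entry above carries vLLM's tagline ("A high-throughput and memory-efficient inference and serving engine for LLMs"). For engines of that kind, a client sketch against an OpenAI-compatible endpoint might look like the following; the base URL, API key, and model name are placeholders for a local deployment, not values taken from any repository above.

```python
from openai import OpenAI

# vLLM-style servers expose an OpenAI-compatible API, usually started with e.g.:
#   python -m vllm.entrypoints.openai.api_server --model codellama/CodeLlama-7b-hf
# Base URL, api_key, and model name below are placeholder assumptions.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.completions.create(
    model="codellama/CodeLlama-7b-hf",  # must match the model the server loaded
    prompt="# Python function to reverse a string\ndef",
    max_tokens=64,
    temperature=0.2,
)
print(completion.choices[0].text)
```

Because the endpoint speaks the OpenAI wire format, the same snippet works against most of the serving projects listed above that advertise OpenAI compatibility, with only the base URL and model name changed.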