wangcx18 / llm-vscode-inference-server
An endpoint server for efficiently serving quantized open-source LLMs for code.
☆58 · Updated 2 years ago
Alternatives and similar repositories for llm-vscode-inference-server
Users interested in llm-vscode-inference-server are comparing it to the repositories listed below.
- starcoder server for huggingface-vscode custom endpoint · ☆179 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub · ☆161 · Updated 2 years ago
- Visual Studio Code extension for WizardCoder · ☆148 · Updated 2 years ago
- ☆198 · Updated last year
- Host GPTQ models using AutoGPTQ as an API compatible with the text generation web UI API · ☆90 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- An OpenAI Completions API compatible server for NLP transformers models · ☆66 · Updated 2 years ago
- A fast batching API to serve LLM models · ☆189 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio · ☆38 · Updated 2 years ago
- Client-side toolkit for using large language models, including self-hosted ones · ☆115 · Updated this week
- ☆166 · Updated 5 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) · ☆146 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. · ☆64 · Updated 2 years ago
- ☆68 · Updated last year
- TheBloke's Dockerfiles · ☆308 · Updated last year
- An OpenAI-like LLaMA inference API · ☆113 · Updated 2 years ago
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. · ☆130 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆53 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support · ☆249 · Updated 2 years ago
- ☆40 · Updated last year
- Roy: A lightweight, model-agnostic framework for crafting advanced multi-agent systems using large language models. · ☆78 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… · ☆169 · Updated 2 years ago
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 · ☆76 · Updated 2 years ago
- ☆67 · Updated 10 months ago
- ☆119 · Updated last year
- VSCode AI coding assistant powered by self-hosted llama.cpp endpoint. · ☆183 · Updated last year
- Gradio-based tool to run open-source LLMs directly from Huggingface · ☆96 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. · ☆119 · Updated last year
- Run language models on consumer hardware. · ☆27 · Updated 2 years ago
- Extension for using alternative GitHub Copilot (StarCoder API) in VSCode · ☆100 · Updated last year