wangcx18 / llm-vscode-inference-server
An endpoint server for efficiently serving quantized open-source LLMs for code.
☆58 · Updated last year
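For reference, here is a minimal sketch of querying such an endpoint server from Python. It assumes the server is running locally on port 8000 and exposes a Hugging Face-style text-generation route; the URL, route, and payload fields are illustrative assumptions, so check the repository README for the actual interface.

```python
# Minimal sketch: query a locally running code-completion endpoint server.
# The URL, route, and payload schema below are assumptions for illustration only;
# see the llm-vscode-inference-server README for the real interface.
import requests

payload = {
    "inputs": "def fibonacci(n):",  # code prefix to complete
    "parameters": {"max_new_tokens": 64, "temperature": 0.2},
}

response = requests.post(
    "http://localhost:8000/api/generate/",  # assumed route and port
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected to contain the generated completion
```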
Alternatives and similar repositories for llm-vscode-inference-server
Users interested in llm-vscode-inference-server are comparing it to the libraries listed below.
- starcoder server for huggingface-vscode custom endpoint ☆176 · Updated last year
- Visual Studio Code extension for WizardCoder ☆148 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆160 · Updated 2 years ago
- ☆197 · Updated last year
- ☆162 · Updated 2 months ago
- Practical and advanced guide to LLMOps. It provides a solid understanding of large language models’ general concepts, deployment techniqu… ☆75 · Updated last year
- ☆67 · Updated last year
- ☆53 · Updated 2 years ago
- A fast batching API to serve LLMs ☆187 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆37 · Updated 2 years ago
- Gradio-based tool to run open-source LLMs directly from Huggingface ☆95 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated last year
- ☆132 · Updated 5 months ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆117 · Updated last year
- Unsloth Studio ☆110 · Updated 6 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆370 · Updated 2 weeks ago
- ☆116 · Updated 9 months ago
- Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs). ☆102 · Updated 11 months ago
- ☆40 · Updated last year
- Host a GPTQ model using AutoGPTQ as an API compatible with the text generation UI API. ☆89 · Updated 2 years ago
- Distributed inference for mlx LLMs ☆96 · Updated last year
- An OpenAI Completions API-compatible server for NLP transformers models ☆64 · Updated last year
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ, and EXL2 models ☆76 · Updated 9 months ago
- Run ollama & GGUF easily with a single command ☆52 · Updated last year
- ☆102 · Updated last month
- An OpenAI-like LLaMA inference API ☆113 · Updated 2 years ago
- Python client library for improving your LLM app accuracy ☆98 · Updated 7 months ago
- High-level library for batched embedding generation, blazing-fast web-based RAG, and quantized index processing ⚡ ☆67 · Updated 11 months ago