wangcx18 / llm-vscode-inference-server
An endpoint server for efficiently serving quantized open-source LLMs for code.
☆54 · Updated last year
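The server described above exposes an HTTP endpoint for code completion. As an illustrative sketch only (the route and field names below are assumptions modeled on common Hugging Face text-generation APIs, not taken from this repository), a client request body might be assembled like this:

```python
import json

# Hypothetical request body for a code-completion endpoint; the
# "inputs"/"parameters" field names are assumptions, not the
# repository's documented schema.
def build_completion_request(prompt: str, max_new_tokens: int = 64,
                             temperature: float = 0.2) -> dict:
    """Assemble a JSON-serializable body for a completion call."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

body = build_completion_request("def fibonacci(n):")
print(json.dumps(body))
```

A real client (such as the llm-vscode extension pointed at a custom endpoint) would POST a body like this to the server's generation route and read the completion out of the JSON response.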
Alternatives and similar repositories for llm-vscode-inference-server:
Users interested in llm-vscode-inference-server are comparing it to the libraries listed below.
- starcoder server for huggingface-vscode custom endpoint ☆171 · Updated last year
- ☆38 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆158 · Updated last year
- An OpenAI Completions API compatible server for NLP transformers models ☆64 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆51 · Updated last year
- Host GPTQ models using AutoGPTQ as an API compatible with the text generation UI API ☆91 · Updated last year
- ☆39 · Updated last year
- Visual Studio Code extension for WizardCoder ☆147 · Updated last year
- ☆199 · Updated last year
- Let's create synthetic textbooks together :) ☆73 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆36 · Updated last year
- ☆140 · Updated last year
- Unofficial python bindings for the rust llm library. 🐍❤️🦀 ☆75 · Updated last year
- ☆66 · Updated 9 months ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆111 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system ☆33 · Updated last year
- ☆73 · Updated last year
- ☆152 · Updated 8 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆171 · Updated 10 months ago
- run ollama & gguf easily with a single command ☆49 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 10 months ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- 📚 Datasets and models for instruction-tuning ☆235 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- 4 bits quantization of SantaCoder using GPTQ ☆51 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆37 · Updated last year
- A prompt/context management system ☆170 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 11 months ago