wangcx18 / llm-vscode-inference-server
An endpoint server for efficiently serving quantized open-source LLMs for code.
☆52 · Updated 11 months ago
Related projects:
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆139 · Updated 11 months ago
- StarCoder server for the huggingface-vscode custom endpoint ☆166 · Updated 10 months ago
- Some simple scripts that I use day-to-day when working with LLMs and the Huggingface Hub ☆154 · Updated 11 months ago
- Easily view and modify JSON datasets for large language models ☆55 · Updated this week
- A guidance compatibility layer for llama-cpp-python ☆35 · Updated last year
- ☆144 · Updated 2 months ago
- Visual Studio Code extension for WizardCoder ☆143 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model. ☆117 · Updated 2 weeks ago
- ☆201 · Updated 7 months ago
- Host a GPTQ model using AutoGPTQ as an API that is compatible with the text generation UI API. ☆91 · Updated last year
- GPT-2 small trained on phi-like data ☆65 · Updated 7 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆66 · Updated 11 months ago
- ☆136 · Updated 10 months ago
- A fast batching API to serve LLM models ☆172 · Updated 4 months ago
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio ☆34 · Updated last year
- The code we currently use to fine-tune models. ☆107 · Updated 4 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆161 · Updated 8 months ago
- ☆28 · Updated this week
- Low-rank adapter extraction for fine-tuned transformers models ☆154 · Updated 4 months ago
- Let's create synthetic textbooks together :) ☆70 · Updated 7 months ago
- Client-side toolkit for using large language models, including where self-hosted ☆101 · Updated last month
- Gradio-based tool to run open-source LLM models directly from Huggingface ☆84 · Updated 2 months ago
- ☆37 · Updated 9 months ago
- Evaluate and enhance your LLM deployments for real-world inference needs ☆128 · Updated this week
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆72 · Updated last year
- For inference and serving of local LLMs using the MLX framework ☆77 · Updated 5 months ago
- ☆64 · Updated 3 months ago
- Your pair-programming wingman. Supports OpenAI, Anthropic, or any LLM on your local inference server. ☆62 · Updated 2 months ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated last year
- Self-hosted LLM chatbot arena, with yourself as the only judge ☆36 · Updated 7 months ago