mzbac / wizardCoder-vsc
Visual Studio Code extension for WizardCoder
☆149 · Updated last year
Alternatives and similar repositories for wizardCoder-vsc:
Users interested in wizardCoder-vsc are comparing it to the libraries listed below:
- Host the GPTQ model using AutoGPTQ as an API that is compatible with the text-generation-webui API. ☆92 · Updated last year
- StarCoder server for the huggingface-vscode custom endpoint ☆171 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆112 · Updated last year
- An OpenAI-like LLaMA inference API (see the sketch after this list) ☆112 · Updated last year
- Merge Transformers language models using gradient parameters. ☆208 · Updated 9 months ago
- An autonomous LLM agent that runs on WizardCoder-15B ☆335 · Updated 6 months ago
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year
- Let's create synthetic textbooks together :) ☆74 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- Local LLM ReAct Agent with Guidance ☆158 · Updated last year
- Python bindings for the C++ port of GPT4All-J model. ☆38 · Updated last year
- ☆73 · Updated last year
- Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside LangChain or other agents. Contains Oobabooga and Kobol… ☆213 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆161 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- An autonomous AI agent extension for Oobabooga's web UI ☆175 · Updated last year
- The code we currently use to fine-tune models. ☆114 · Updated last year
- TheBloke's Dockerfiles ☆303 · Updated last year
- Extend the original llama.cpp repo to support the RedPajama model. ☆117 · Updated 8 months ago
- Your pair programming wingman. Supports OpenAI, Anthropic, or any LLM on your local inference server. ☆70 · Updated 10 months ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Extension for using alternative GitHub Copilot (StarCoder API) in VSCode ☆100 · Updated last year
- ☆154 · Updated 9 months ago
- Use a local LLaMA LLM or OpenAI to chat with, discuss, or summarize your documents, YouTube videos, and so on. ☆152 · Updated 4 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆122 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio ☆36 · Updated last year
- An endpoint server for efficiently serving quantized open-source LLMs for code. ☆54 · Updated last year
- A simple experiment on letting two local LLMs have a conversation about anything! ☆109 · Updated 10 months ago
- A prompt/context management system ☆170 · Updated 2 years ago
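
Several of the entries above (the OpenAI-like LLaMA inference API, the pair-programming wingman, and the endpoint server for code models) revolve around exposing or consuming an OpenAI-compatible HTTP API, which is what lets an editor extension like wizardCoder-vsc point at a locally hosted model instead of a hosted service. Below is a minimal sketch of querying such a server with the official `openai` Python client; the base URL, port, and model name are illustrative assumptions, not the configuration of any specific repository listed here.

```python
# Minimal sketch: query a locally hosted, OpenAI-compatible chat completion endpoint.
# Assumptions: a local server is listening at http://localhost:8000/v1 and serves a
# model registered as "wizardcoder-15b" -- both are hypothetical; adjust for your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local inference server, not api.openai.com
    api_key="not-needed",                 # many local servers ignore the key but require a value
)

response = client.chat.completions.create(
    model="wizardcoder-15b",  # hypothetical model name; use whatever your server reports
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=256,
    temperature=0.2,
)

print(response.choices[0].message.content)
```

Because the request shape matches the standard OpenAI chat API, the same snippet works unchanged against any of the OpenAI-compatible local servers listed above once the base URL and model name are swapped for your own.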