Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
☆570 · Jul 31, 2024 · Updated last year
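The setup the heading describes, using a local Code Llama model as a Copilot-style assistant inside VS Code, comes down to pointing the Continue extension at a local inference server. As a rough sketch only: the provider name, model tag, and port below assume Code Llama is served by Ollama on its default port, and Continue's config schema has changed across versions, so treat the exact field names as an assumption rather than a definitive reference.

```json
{
  "models": [
    {
      "title": "Code Llama (local)",
      "provider": "ollama",
      "model": "codellama:7b-instruct",
      "apiBase": "http://localhost:11434"
    }
  ]
}
```

With an entry like this in Continue's config, completions and chat requests go to the local server instead of a hosted API, which is the "local alternative to GitHub Copilot" angle of the repositories listed below.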
Alternatives and similar repositories for code-llama-for-vscode
Users that are interested in code-llama-for-vscode are comparing it to the libraries listed below.
- Inference code for CodeLlama models ☆16,332 · Aug 12, 2024 · Updated last year
- Make Llama2 use Code Execution, Debug, Save Code, Reuse it, Access to Internet ☆681 · Sep 21, 2023 · Updated 2 years ago
- Visual Studio Code extension for WizardCoder ☆148 · Aug 1, 2023 · Updated 2 years ago
- ☆12 · Jan 19, 2024 · Updated 2 years ago
- An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2. ☆440 · Sep 2, 2023 · Updated 2 years ago
- ⏩ Source-controlled AI checks, enforceable in CI. Powered by the open-source Continue CLI ☆32,365 · Apr 8, 2026 · Updated last week
- Oobabooga "Hello World" API example for node.js with Express ☆13 · Jul 2, 2023 · Updated 2 years ago
- Host LLM via text-generation-inference ☆16 · Dec 5, 2023 · Updated 2 years ago
- An Autonomous LLM Agent that runs on Wizcoder-15B ☆336 · Oct 21, 2024 · Updated last year
- Host the GPTQ model using AutoGPTQ as an API that is compatible with text generation UI API. ☆91 · Jun 19, 2023 · Updated 2 years ago
- An easy-to-use LLMs quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,047 · Apr 11, 2025 · Updated last year
- ☆17 · Dec 18, 2023 · Updated 2 years ago
- A sample ChatGPT API Gateway. A robust interface for interacting with third-party APIs using FastAPI ☆27 · Nov 16, 2023 · Updated 2 years ago
- ☆74 · Sep 5, 2023 · Updated 2 years ago
- Your pair programming wingman. Supports OpenAI, Anthropic, or any LLM on your local inference server. ☆70 · Jun 26, 2024 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,471 · Jun 7, 2025 · Updated 10 months ago
- The original local LLM interface. Text, vision, tool-calling, training. UI + API, 100% offline and private. ☆46,493 · Updated this week
- starcoder server for huggingface-vscode custom endpoint ☆179 · Nov 18, 2023 · Updated 2 years ago
- LLM powered development for VSCode ☆1,315 · Apr 2, 2026 · Updated last week
- Python bindings for llama.cpp ☆10,181 · Updated this week
- Like system requirements lab but for LLMs ☆31 · Jun 10, 2023 · Updated 2 years ago
- FauxPilot - an open-source alternative to GitHub Copilot server ☆14,748 · Apr 9, 2024 · Updated 2 years ago
- ⧉ Deploy containers over SSH. ☆15 · Aug 9, 2025 · Updated 8 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,915 · Sep 30, 2023 · Updated 2 years ago
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,448 · Jun 2, 2025 · Updated 10 months ago
- 💬📝 A small dictation app using OpenAI's Whisper speech recognition model. ☆11 · Sep 13, 2024 · Updated last year
- ☆13 · Feb 18, 2024 · Updated 2 years ago
- Universal LLM Deployment Engine with ML Compilation ☆22,414 · Apr 6, 2026 · Updated last week
- ☆135 · Nov 24, 2023 · Updated 2 years ago
- codellama on CPU without Docker ☆25 · Feb 8, 2024 · Updated 2 years ago
- Stable Diffusion and Flux in pure C/C++ ☆25 · Apr 5, 2026 · Updated last week
- Python examples using the bigcode/tiny_starcoder_py 159M model to generate code ☆45 · May 31, 2023 · Updated 2 years ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,493 · Mar 4, 2026 · Updated last month
- Self-evaluating interview for AI coders ☆599 · Jun 21, 2025 · Updated 9 months ago
- An example of running local models with GGML ☆40 · Aug 10, 2023 · Updated 2 years ago
- Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU ☆3,797 · Sep 30, 2023 · Updated 2 years ago
- Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside Langchain or other agents. Contains Oobabooga and Kobol… ☆214 · Jun 25, 2023 · Updated 2 years ago
- ☆3,365 · Feb 25, 2024 · Updated 2 years ago
- AI Agent that handles engineering tasks end-to-end: integrates with developers' tools, plans, executes, and iterates until it achieves a … ☆3,531 · Mar 18, 2026 · Updated 3 weeks ago