xNul / code-llama-for-vscode
Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
☆569 · Updated last year
Alternatives and similar repositories for code-llama-for-vscode
Users interested in code-llama-for-vscode are comparing it to the libraries listed below.
- Self-evaluating interview for AI coders ☆600 · Updated 7 months ago
- C++ implementation for 💫StarCoder ☆459 · Updated 2 years ago
- LLM-powered development for VSCode ☆1,316 · Updated last year
- Visual Studio Code extension for WizardCoder ☆149 · Updated 2 years ago
- Uses Auto-GPT with Llama.cpp ☆385 · Updated last year
- StarCoder server for a huggingface-vscode custom endpoint ☆179 · Updated 2 years ago
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… ☆597 · Updated 2 years ago
- Makes Llama 2 use code execution, debug, save, and reuse code, with access to the internet ☆685 · Updated 2 years ago
- Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Tra… ☆1,292 · Updated 2 years ago
- ☆1,028 · Updated 2 years ago
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab. + A Gradio ChatGPT… ☆475 · Updated 2 years ago
- An OpenAI API-compatible REST server for llama ☆208 · Updated 11 months ago
- Extension for using an alternative to GitHub Copilot (StarCoder API) in VSCode ☆100 · Updated last year
- An autonomous LLM agent that runs on Wizcoder-15B ☆333 · Updated last year
- An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2 ☆440 · Updated 2 years ago
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆412 · Updated 2 years ago
- LLM that combines the principles of WizardLM and VicunaLM ☆716 · Updated 2 years ago
- ☆276 · Updated 2 years ago
- TheBloke's Dockerfiles ☆308 · Updated last year
- An open-source UI for OpenChat models ☆288 · Updated last year
- ☆204 · Updated last year
- An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients ☆332 · Updated last year
- Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside LangChain or other agents. Contains Oobabooga and Kobol… ☆213 · Updated 2 years ago
- Officially supported Python bindings for llama.cpp + gpt4all ☆1,016 · Updated 2 years ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ☆316 · Updated last year
- Chat with Meta's LLaMA models at home, made easy ☆842 · Updated 2 years ago
- A self-hosted GitHub Copilot guide using the Oobabooga web UI ☆160 · Updated 2 years ago
- Run inference on MPT-30B using a CPU ☆576 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated 2 years ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆823 · Updated 2 years ago