xNul / code-llama-for-vscode
Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
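As a sketch of the setup this repo targets, a locally served Code Llama model can be registered in the Continue extension's `config.json` (typically `~/.continue/config.json`). The exact schema varies between Continue versions, and the `ollama` provider plus the `codellama:7b-instruct` model tag are assumptions here, not taken from this repo:

```json
{
  "models": [
    {
      "title": "Code Llama (local)",
      "provider": "ollama",
      "model": "codellama:7b-instruct"
    }
  ]
}
```

With a config along these lines, Continue routes completions and chat to the local model instead of a hosted API.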
★566 · Updated 6 months ago
Alternatives and similar repositories for code-llama-for-vscode:
Users interested in code-llama-for-vscode are comparing it to the libraries listed below.
- Self-evaluating interview for AI coders ★569 · Updated 3 weeks ago
- C++ implementation for 💫StarCoder ★449 · Updated last year
- Uses Auto-GPT with llama.cpp ★387 · Updated 10 months ago
- LLM-powered development for VSCode ★1,270 · Updated 7 months ago
- Are Copilots Local Yet? The frontier of local LLM Copilots for code completion, project generation, shell assistance, and more. Find tool… ★520 · Updated 3 weeks ago
- Simple UI for LLM model finetuning ★2,052 · Updated last year
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… ★600 · Updated last year
- Finetune llama2-70b and codellama on a MacBook Air without quantization ★447 · Updated 10 months ago
- Llama 2 Everywhere (L2E) ★1,511 · Updated last month
- StarCoder server for the huggingface-vscode custom endpoint ★168 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights ★2,823 · Updated last year
- ★276 · Updated last year
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ★324 · Updated last year
- ★1,452 · Updated last year
- AgentLLM is a PoC for browser-native autonomous agents ★395 · Updated last year
- Supercharge open-source AI models ★350 · Updated last year
- Make Llama 2 use code execution, debug, save code, reuse it, and access the internet ★689 · Updated last year
- An autonomous LLM agent that runs on WizardCoder-15B ★337 · Updated 3 months ago
- Build robust LLM applications with true composability ★415 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ★3,034 · Updated 7 months ago
- MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs ★894 · Updated last year
- A fast inference library for running LLMs locally on modern consumer-class GPUs ★3,951 · Updated this week
- Tune any FALCON in 4-bit ★466 · Updated last year
- ★679 · Updated 2 weeks ago
- ★1,024 · Updated last year
- Visual Studio Code extension for WizardCoder ★145 · Updated last year
- Inference Llama 2 in one file of pure 🔥 ★2,107 · Updated 8 months ago
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT… ★458 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM… ★528 · Updated this week
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ★1,469 · Updated 3 weeks ago