absadiki / pyllamacpp
Python bindings for llama.cpp
☆65 · Updated last year
Alternatives and similar repositories for pyllamacpp
Users interested in pyllamacpp are comparing it to the libraries listed below.
- Harnessing the Memory Power of the Camelids ☆147 · Updated 2 years ago
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆38 · Updated 2 years ago
- A prompt/context management system ☆171 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside Langchain or other agents. Contains Oobabooga and Kobol… ☆213 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Local LLM ReAct Agent with Guidance ☆158 · Updated 2 years ago
- An autonomous LLM agent that runs on WizardCoder-15B ☆334 · Updated last year
- An OpenAI-like LLaMA inference API ☆113 · Updated 2 years ago
- Roy: A lightweight, model-agnostic framework for crafting advanced multi-agent systems using large language models ☆79 · Updated 2 years ago
- ☆276 · Updated 2 years ago
- oobabooga/text-generation-webui implementation of wafflecomposite's langchain-ask-pdf-local ☆71 · Updated 2 years ago
- Unofficial Python bindings for the Rust llm library 🐍❤️🦀 ☆76 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆124 · Updated 2 years ago
- Visual Studio Code extension for WizardCoder ☆149 · Updated 2 years ago
- An all-new language model that processes ultra-long sequences of 100,000+ tokens ultra-fast ☆150 · Updated last year
- Host a GPTQ model using AutoGPTQ as an API compatible with the text-generation web UI API ☆91 · Updated 2 years ago
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI ☆130 · Updated 2 years ago
- A guidance compatibility layer for llama-cpp-python ☆36 · Updated 2 years ago
- Lord of LLMs ☆294 · Updated 2 months ago
- ☆54 · Updated 2 years ago
- Run inference on the replit-3B code-instruct model using CPU ☆160 · Updated 2 years ago
- The code we currently use to fine-tune models ☆117 · Updated last year
- One repo to quickly build one Dockerfile for the HuggingChat front end and back end ☆26 · Updated 2 years ago
- Run any Large Language Model behind a unified API ☆170 · Updated 2 years ago
- ☆168 · Updated 2 years ago
- Erudito: Easy API/CLI to ask questions about your documentation ☆99 · Updated 2 years ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ☆317 · Updated last year