Maximilian-Winter / llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It lets users chat with LLMs, execute structured function calls, and get structured output, and it also works with models that are not fine-tuned for JSON output or function calling.
☆615 · Updated 11 months ago
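The ability to get structured output from models that were never fine-tuned for JSON comes from constrained decoding: the sampler is restricted by a grammar derived from a schema. The snippet below is not the llama-cpp-agent API itself but a minimal sketch of that underlying idea using llama-cpp-python (which llama-cpp-agent builds on); the model path and the schema are placeholders chosen for illustration.

```python
from llama_cpp import Llama

# Load a local GGUF model (placeholder path; any chat-capable model works).
llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",
    n_ctx=4096,
    verbose=False,
)

# Ask for JSON that conforms to a schema; llama-cpp-python turns the schema
# into a grammar that constrains token sampling, so even a model not tuned
# for JSON cannot produce malformed output.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You extract structured data."},
        {"role": "user", "content": "List three local LLM front-ends."},
    ],
    response_format={
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {"tools": {"type": "array", "items": {"type": "string"}}},
            "required": ["tools"],
        },
    },
)

print(result["choices"][0]["message"]["content"])  # e.g. {"tools": [...]}
```

llama-cpp-agent wraps this pattern behind its own agent, provider, and function-calling abstractions; consult the project's README for the exact classes and method names of the current release.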
Alternatives and similar repositories for llama-cpp-agent
Users interested in llama-cpp-agent are comparing it to the libraries listed below
- function calling-based LLM agents ☆289 · Updated last year
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆629 · Updated last year
- A fast batching API to serve LLM models ☆189 · Updated last year
- A multimodal, function calling powered LLM webui. ☆216 · Updated last year
- Web UI for ExLlamaV2 ☆513 · Updated last year
- An AI assistant beyond the chat box. ☆329 · Updated last year
- Dataset Crafting w/ RAG/Wikipedia ground truth and Efficient Fine-Tuning Using MLX and Unsloth. Includes configurable dataset annotation … ☆193 · Updated last year
- ☆1,193 · Updated last month
- Your Trusty Memory-enabled AI Companion - Simple RAG chatbot optimized for local LLMs | 12 Languages Supported | OpenAI API Compatible ☆347 · Updated 11 months ago
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ☆314 · Updated last year
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- 🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. ☆438 · Updated 2 months ago
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆401 · Updated 2 weeks ago
- Software to implement GoT with a Weaviate vectorized database ☆680 · Updated 10 months ago
- ☆161 · Updated 11 months ago
- ☆209 · Updated last month
- Efficient visual programming for AI language models ☆360 · Updated 8 months ago
- Large-scale LLM inference engine ☆1,647 · Updated 2 weeks ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- Querying local documents, powered by LLM ☆643 · Updated 3 weeks ago
- The easiest and fastest way to run AI-generated Python code safely ☆358 · Updated last year
- Self-evaluating interview for AI coders ☆600 · Updated 7 months ago
- A Python package for developing AI applications with local LLMs. ☆150 · Updated last year
- A Python-based web-assisted large language model (LLM) search assistant using Llama.cpp ☆367 · Updated last year
- A tool for generating function arguments and choosing what function to call with local LLMs ☆436 · Updated last year
- An AI memory layer with short- and long-term storage, semantic clustering, and optional memory decay for context-aware applications. ☆679 · Updated last year
- TheBloke's Dockerfiles ☆308 · Updated last year
- An OpenAI API-compatible API for chat with image input and questions about the images, aka multimodal. ☆266 · Updated 11 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆284 · Updated 7 months ago
- ☆166 · Updated 6 months ago