Maximilian-Winter / llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It allows users to chat with LLM models, execute structured function calls, and get structured output. It also works with models not fine-tuned for JSON output or function calling.
☆587 · Updated 6 months ago
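To illustrate the structured function calling the description mentions, here is a minimal, framework-agnostic sketch of the pattern: the model is constrained to emit a JSON function call, which is then parsed and dispatched to a registered Python function. All names here (`register`, `dispatch`, `get_weather`) are hypothetical illustrations, not llama-cpp-agent's actual API.

```python
import json

# Hypothetical registry mapping function names to callables.
FUNCTIONS = {}

def register(fn):
    """Register a Python function so the model may call it by name."""
    FUNCTIONS[fn.__name__] = fn
    return fn

@register
def get_weather(city: str) -> str:
    # Stub implementation; a real tool would query a weather API.
    return f"Sunny in {city}"

def dispatch(model_output: str) -> str:
    """Parse a JSON function call emitted by the model and execute it."""
    call = json.loads(model_output)
    fn = FUNCTIONS[call["function"]]
    return fn(**call["arguments"])

# A model constrained to JSON output might emit something like:
raw = '{"function": "get_weather", "arguments": {"city": "Berlin"}}'
print(dispatch(raw))  # Sunny in Berlin
```

Frameworks like llama-cpp-agent automate the surrounding pieces (prompting the model with the function schemas and constraining its output to valid JSON), which is what lets this work even with models not fine-tuned for function calling.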
Alternatives and similar repositories for llama-cpp-agent
Users interested in llama-cpp-agent are comparing it to the libraries listed below.
- Function calling-based LLM agents ☆288 · Updated 11 months ago
- A fast batching API to serve LLM models ☆185 · Updated last year
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆606 · Updated 9 months ago
- A multimodal, function-calling-powered LLM web UI. ☆216 · Updated 11 months ago
- Web UI for ExLlamaV2 ☆506 · Updated 6 months ago
- An AI assistant beyond the chat box. ☆328 · Updated last year
- ☆1,052 · Updated 11 months ago
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆355 · Updated this week
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆159 · Updated last year
- ☆209 · Updated last month
- Self-evaluating interview for AI coders ☆594 · Updated 2 months ago
- Large-scale LLM inference engine ☆1,524 · Updated this week
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ☆312 · Updated last year
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆278 · Updated 2 months ago
- Efficient visual programming for AI language models ☆363 · Updated 3 months ago
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … ☆184 · Updated last year
- Software to implement GoT with a Weaviate vector database ☆675 · Updated 4 months ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated last year
- AlwaysReddy is an LLM voice assistant that is always just a hotkey away. ☆749 · Updated 5 months ago
- 🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. ☆394 · Updated 3 months ago
- A tool for generating function arguments and choosing which function to call with local LLMs ☆427 · Updated last year
- A Python package for developing AI applications with local LLMs. ☆152 · Updated 7 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- This is our own implementation of "Layer-Selective Rank Reduction" ☆240 · Updated last year
- TheBloke's Dockerfiles ☆307 · Updated last year
- Your Trusty Memory-enabled AI Companion: a simple RAG chatbot optimized for local LLMs | 12 languages supported | OpenAI API compatible ☆334 · Updated 5 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆677 · Updated this week
- The easiest and fastest way to run AI-generated Python code safely ☆329 · Updated 8 months ago
- Task-based agentic framework using StrictJSON as the core ☆457 · Updated last week
- llama.cpp with the BakLLaVA model, describing what it sees ☆382 · Updated last year