Maximilian-Winter / llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM models, execute structured function calls, and get structured output. It also works with models not fine-tuned for JSON output and function calls.
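The structured function-calling idea described above can be sketched in plain Python: ask the model to emit a JSON function call, parse the reply, and dispatch it to a matching handler. This is a minimal illustrative sketch only; the model reply is stubbed, and the tool names and `dispatch` helper are hypothetical, not llama-cpp-agent's actual API.

```python
import json

# Hypothetical tool registry -- the names are illustrative,
# not part of llama-cpp-agent's actual API.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_reply: str) -> str:
    """Parse a JSON function call emitted by an LLM and invoke the matching tool."""
    call = json.loads(model_reply)
    func = TOOLS[call["function"]]
    return func(**call["arguments"])

# Stubbed model output standing in for a real llama.cpp completion.
reply = '{"function": "get_weather", "arguments": {"city": "Berlin"}}'
print(dispatch(reply))  # Sunny in Berlin
```

Frameworks like llama-cpp-agent go further by constraining generation (e.g. via grammars) so that even models not fine-tuned for JSON reliably produce parseable calls, rather than relying on post-hoc parsing as above.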
☆592 · Updated 7 months ago
Alternatives and similar repositories for llama-cpp-agent
Users interested in llama-cpp-agent are comparing it to the libraries listed below.
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆612 · Updated 11 months ago
- Function calling-based LLM agents ☆287 · Updated last year
- A fast batching API to serve LLM models ☆187 · Updated last year
- A multimodal, function-calling-powered LLM web UI. ☆216 · Updated last year
- ☆1,086 · Updated last year
- Web UI for ExLlamaV2 ☆512 · Updated 7 months ago
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ☆314 · Updated last year
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆371 · Updated last week
- An AI assistant beyond the chat box. ☆328 · Updated last year
- Efficient visual programming for AI language models ☆362 · Updated 4 months ago
- An OpenAI-compatible API for chat with image input and questions about the images (i.e. multimodal). ☆260 · Updated 6 months ago
- 🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. ☆411 · Updated 4 months ago
- Dataset Crafting w/ RAG/Wikipedia ground truth and Efficient Fine-Tuning Using MLX and Unsloth. Includes configurable dataset annotation … ☆185 · Updated last year
- Large-scale LLM inference engine ☆1,560 · Updated this week
- Your Trusty Memory-enabled AI Companion - Simple RAG chatbot optimized for local LLMs | 12 Languages Supported | OpenAI API Compatible ☆339 · Updated 7 months ago
- Querying local documents, powered by LLM ☆628 · Updated 2 months ago
- A Python package for developing AI applications with local LLMs. ☆151 · Updated 8 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆280 · Updated 3 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆163 · Updated last year
- ☆209 · Updated 3 weeks ago
- Software to implement GoT with a Weaviate vector database ☆675 · Updated 6 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆713 · Updated last week
- A tool for generating function arguments and choosing what function to call with local LLMs ☆430 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- A library for easily merging multiple LLM experts, and efficiently training the merged LLM. ☆492 · Updated last year
- Self-evaluating interview for AI coders ☆597 · Updated 3 months ago
- TheBloke's Dockerfiles ☆307 · Updated last year
- A simple Jupyter Notebook for learning MLX text-completion fine-tuning! ☆122 · Updated 10 months ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆331 · Updated 6 months ago