Maximilian-Winter / llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM models, execute structured function calls, and get structured output. It also works with models that are not fine-tuned for JSON output and function calling.
☆551 · Updated 2 months ago
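The "structured function calls" pattern the framework implements can be sketched generically: the model is constrained to emit a JSON tool call, which the application validates and dispatches to a registered Python function. This is a minimal illustrative sketch of that pattern, not llama-cpp-agent's actual API; the function and variable names here are hypothetical.

```python
import json

def get_weather(city: str) -> str:
    """Example tool the model may call (stubbed for illustration)."""
    return f"Sunny in {city}"

# Registry mapping tool names to callables; a real agent would also
# expose each tool's JSON schema to the model in the prompt.
TOOLS = {"get_weather": get_weather}

def dispatch(raw_model_output: str) -> str:
    """Parse the model's JSON tool call and run the matching function."""
    call = json.loads(raw_model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulated model output: a structured function call encoded as JSON.
model_output = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(dispatch(model_output))  # → Sunny in Berlin
```

Frameworks like this typically enforce the JSON shape at generation time (for example via grammar-constrained sampling in llama.cpp), which is what lets them work even with models not fine-tuned for function calling.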
Alternatives and similar repositories for llama-cpp-agent:
Users interested in llama-cpp-agent are comparing it to the libraries listed below.
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆577 · Updated 5 months ago
- A fast batching API for serving LLM models. ☆182 · Updated 11 months ago
- Function-calling-based LLM agents. ☆285 · Updated 7 months ago
- ☆853 · Updated 7 months ago
- A multimodal, function-calling-powered LLM web UI. ☆214 · Updated 6 months ago
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … ☆179 · Updated 8 months ago
- An AI assistant beyond the chat box. ☆325 · Updated last year
- 🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. ☆352 · Updated 2 weeks ago
- Web UI for ExLlamaV2. ☆492 · Updated 2 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆470 · Updated 7 months ago
- Efficient visual programming for AI language models. ☆356 · Updated 7 months ago
- Large-scale LLM inference engine. ☆1,379 · Updated last week
- Our own implementation of "Layer Selective Rank Reduction". ☆234 · Updated 10 months ago
- A Python package for developing AI applications with local LLMs. ☆147 · Updated 3 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆262 · Updated last week
- An OpenAI-API-compatible API for chatting with image input and asking questions about the images, i.e. multimodal. ☆248 · Updated last month
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI. ☆223 · Updated 11 months ago
- Your trusty memory-enabled AI companion: a simple RAG chatbot optimized for local LLMs | 12 languages supported | OpenAI API compatible. ☆309 · Updated last month
- ☆153 · Updated 9 months ago
- An OpenAI-compatible exllamav2 API that's both lightweight and fast. ☆907 · Updated this week
- ☆197 · Updated this week
- A tool for generating function arguments and choosing which function to call with local LLMs. ☆423 · Updated last year
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆306 · Updated this week
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆289 · Updated last month
- Automatically quantize GGUF models. ☆167 · Updated last week
- Convenience scripts to fine-tune (chat-)LLaMa3 and other models for any language. ☆302 · Updated 10 months ago
- Comparison of the output quality of quantization methods using Llama 3, transformers, GGUF, and EXL2. ☆148 · Updated 11 months ago
- Open-source Perplexity app. ☆120 · Updated 3 weeks ago
- Low-rank adapter extraction for fine-tuned transformer models. ☆171 · Updated 11 months ago
- A simple Jupyter notebook for learning MLX text-completion fine-tuning! ☆116 · Updated 5 months ago