intel-staging / Langchain-Chatchat
Knowledge Base QA using a RAG pipeline on Intel CPU and GPU (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max) with IPEX-LLM
☆17 · Updated 8 months ago
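The description above refers to running knowledge-base QA through a RAG pipeline on Intel hardware via IPEX-LLM. As a rough, hedged illustration only (not this repository's actual code), the sketch below shows the typical IPEX-LLM pattern of loading a causal LM with low-bit optimization and generating on an Intel GPU ("xpu"); the model path and prompt are placeholder assumptions.

```python
# Minimal sketch (assumption): load an LLM with IPEX-LLM low-bit optimization
# and generate an answer on an Intel GPU ("xpu"). Not the repository's own code;
# the model path and prompt are placeholders.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "/path/to/your/model"  # placeholder: any Hugging Face-format causal LM

# load_in_4bit=True applies IPEX-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
model = model.to("xpu")  # move to the Intel GPU; omit this line to stay on CPU

tokenizer = AutoTokenizer.from_pretrained(model_path)
prompt = "What is a RAG pipeline?"  # placeholder question
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

In a full RAG setup, the prompt would be assembled from documents retrieved out of a vector store rather than written by hand; only the model loading and generation step is shown here.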
Alternatives and similar repositories for Langchain-Chatchat
Users interested in Langchain-Chatchat are comparing it to the libraries listed below.
- Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆62 · Updated 9 months ago
- KAN (Kolmogorov–Arnold Networks) in the MLX framework for Apple Silicon ☆31 · Updated 6 months ago
- the small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly ☆32 · Updated last year
- Code generation with LLMs 🔗 ☆53 · Updated 2 years ago
- A python command-line tool to download & manage MLX AI models from Hugging Face. ☆19 · Updated last year
- Deploy your autonomous agents to production grade environments with 99% Uptime Guarantee, Infinite Scalability, and self-healing. ☆49 · Updated 2 months ago
- The Swarm Ecosystem ☆26 · Updated last year
- Port of Facebook's LLaMA model in C/C++ ☆22 · Updated 2 years ago
- 🏥 Health monitor for a Petals swarm ☆40 · Updated last year
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · Updated 8 months ago
- A collection of notebooks for the Hugging Face blog series (https://huggingface.co/blog). ☆46 · Updated last year
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp ☆167 · Updated 8 months ago
- A Python library to orchestrate LLMs in a neural network-inspired structure ☆52 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆38 · Updated 2 years ago
- 👩🤝🤖 A curated list of datasets for large language models (LLMs), RLHF and related resources (continually updated) ☆24 · Updated 2 years ago
- llama.cpp fork used by GPT4All ☆55 · Updated 10 months ago
- a suite of finetuned LLMs for atomically precise function calling 🧪 ☆17 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆11 · Updated 2 years ago
- ☆15 · Updated last year
- ☆46 · Updated 2 years ago
- ☆12 · Updated 7 months ago
- Public reports detailing responses to sets of prompts by Large Language Models. ☆32 · Updated last year
- Simple CogVLM client script ☆14 · Updated 2 years ago
- ☆40 · Updated last year
- AI system powered by large language models. ☆32 · Updated last month
- ☆36 · Updated last year
- Calling LLM APIs on a Raspberry Pi for lulz ☆24 · Updated 2 years ago
- Adding NeMo Guardrails to a LlamaIndex RAG pipeline ☆41 · Updated last year
- ☆41 · Updated last week
- Tools for formatting large language model prompts. ☆13 · Updated 2 years ago