whylabs / langkit
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
☆964 · Updated last year
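To illustrate the kind of observability LangKit provides, here is a minimal usage sketch. It assumes the `langkit` and `whylogs` packages are installed and follows the quick-start pattern from LangKit's documentation; exact module names may vary between releases.

```python
# Minimal sketch (assumes `langkit` and `whylogs` are installed); follows
# LangKit's documented quick-start pattern and may differ across versions.
import whylogs as why
from langkit import llm_metrics  # registers the out-of-the-box LLM metrics

# Build a whylogs schema carrying LangKit's default metric set
# (text quality, prompt/response relevance, sentiment, and related signals).
schema = llm_metrics.init()

# Profile a single prompt/response pair; the extracted signals are
# aggregated into a whylogs profile for monitoring or local inspection.
results = why.log(
    {"prompt": "What is the capital of France?", "response": "Paris."},
    schema=schema,
)

# View the computed metrics locally as a pandas DataFrame.
print(results.profile().view().to_pandas())
```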
Alternatives and similar repositories for langkit
Users interested in langkit are comparing it to the libraries listed below.
- A tool for evaluating LLMs ☆428 · Updated last year
- LLM Prompt Injection Detector ☆1,385 · Updated last year
- Evaluation and Tracking for LLM Experiments and AI Agents ☆2,941 · Updated this week
- Automated Evaluation of RAG Systems ☆674 · Updated 8 months ago
- The Security Toolkit for LLM Interactions ☆2,280 · Updated 3 weeks ago
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ☆2,968 · Updated last year
- ☆778 · Updated 5 months ago
- Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone ☆1,023 · Updated last year
- Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications. ☆319 · Updated 4 months ago
- Fine-Tuning Embedding for RAG with Synthetic Data ☆518 · Updated 2 years ago
- ☆468 · Updated last year
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning) ☆398 · Updated 2 years ago
- ☆979 · Updated last week
- Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. ☆862 · Updated last year
- Open-source tool to visualise your RAG 🔮 ☆1,199 · Updated 10 months ago
- 🍰 PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions. ☆708 · Updated this week
- ☆902 · Updated last year
- Deliver safe & effective language models ☆545 · Updated last month
- ☆508 · Updated last year
- Adala: Autonomous DAta (Labeling) Agent framework ☆1,296 · Updated this week
- Automatically evaluate your LLMs in Google Colab ☆671 · Updated last year
- An LLM-powered advanced RAG pipeline built from scratch ☆854 · Updated last year
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆499 · Updated 9 months ago
- 🦙 Integrating LLMs into structured NLP pipelines ☆1,349 · Updated 10 months ago
- Python SDK for running evaluations on LLM generated responses ☆292 · Updated 5 months ago
- Evaluation tool for LLM QA chains ☆1,088 · Updated 2 years ago
- Superfast AI decision making and intelligent processing of multi-modal data. ☆2,917 · Updated last week
- 👩🏻‍🍳 A collection of example notebooks using Haystack ☆513 · Updated this week
- A comprehensive guide to building RAG-based LLM applications for production. ☆1,841 · Updated last year