whylabs / langkit
LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). Extracts signals from prompts & responses, ensuring safety & security. Features include text quality, relevance metrics, & sentiment analysis. A comprehensive tool for LLM observability.
☆975 · Updated last year
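The description above mentions extracting signals such as text quality and sentiment from prompts and responses. As a rough illustration of what such signals look like, here is a minimal, self-contained sketch; this is not LangKit's actual API, and the function and metric names are hypothetical:

```python
# Illustrative only: a toy sketch of the kind of prompt/response "signals"
# an LLM-monitoring toolkit computes. NOT LangKit's API; all names here
# are hypothetical.
import re

def extract_signals(prompt: str, response: str) -> dict:
    """Compute simple text-quality and sentiment-proxy metrics."""
    words = re.findall(r"[A-Za-z']+", response)
    negative = {"bad", "terrible", "awful", "hate", "wrong"}
    positive = {"good", "great", "excellent", "love", "correct"}
    hits = (sum(1 for w in words if w.lower() in positive)
            - sum(1 for w in words if w.lower() in negative))
    return {
        "prompt.char_count": len(prompt),
        "response.word_count": len(words),
        # crude lexicon-based sentiment score, clamped to [-1, 1]
        "response.sentiment": max(-1.0, min(1.0, hits / max(len(words), 1) * 10)),
    }

signals = extract_signals("Summarize the report.", "Great summary, excellent work.")
print(signals)
```

In a real deployment these per-message metrics would be aggregated into profiles over time so that drift in prompt or response distributions can be monitored.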
Alternatives and similar repositories for langkit
Users interested in langkit are comparing it to the libraries listed below.
- LLM Prompt Injection Detector ☆1,415 · Updated last year
- A tool for evaluating LLMs ☆428 · Updated last year
- The Security Toolkit for LLM Interactions ☆2,511 · Updated last month
- Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. ☆869 · Updated last year
- Open-source tool to visualise your RAG ☆1,216 · Updated last year
- ☆507 · Updated last year
- Evaluation and Tracking for LLM Experiments and AI Agents ☆3,082 · Updated this week
- ☆1,006 · Updated 2 months ago
- Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning) ☆399 · Updated 2 years ago
- Automated Evaluation of RAG Systems ☆689 · Updated 10 months ago
- ☆907 · Updated last year
- Fine-Tuning Embedding for RAG with Synthetic Data ☆523 · Updated 2 years ago
- Fiddler Auditor is a tool to evaluate language models. ☆189 · Updated last year
- ☆779 · Updated 7 months ago
- Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone ☆1,032 · Updated last year
- VectorHub is a free, open-source learning website for people (software developers to senior ML architects) interested in adding vector re… ☆511 · Updated last week
- Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications. ☆323 · Updated 7 months ago
- PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions. ☆734 · Updated 2 weeks ago
- Deliver safe & effective language models ☆553 · Updated 3 weeks ago
- Sample notebooks and prompts for LLM evaluation ☆159 · Updated 3 months ago
- Python SDK for running evaluations on LLM generated responses ☆295 · Updated 8 months ago
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ☆3,001 · Updated last year
- Evaluate your LLM's response with Prometheus and GPT4 ☆1,043 · Updated 9 months ago
- ☆472 · Updated 2 years ago
- wandbot is a technical support bot for Weights & Biases' AI developer tools that can run in Discord, Slack, ChatGPT and Zendesk ☆309 · Updated 3 months ago
- A comprehensive guide to building RAG-based LLM applications for production. ☆1,848 · Updated last year
- Integrating LLMs into structured NLP pipelines ☆1,362 · Updated last year
- VectorFlow is a high volume vector embedding pipeline that ingests raw data, transforms it into vectors and writes it to a vector DB of y… ☆701 · Updated last year
- An LLM-powered advanced RAG pipeline built from scratch ☆856 · Updated 2 years ago
- Promptimize is a prompt engineering evaluation and testing toolkit. ☆492 · Updated 3 weeks ago