automorphic-ai / aegis
Self-hardening firewall for large language models
☆266 · Updated last year
Alternatives and similar repositories for aegis
Users interested in aegis are comparing it to the libraries listed below.
- Fine-tuning and serving LLMs on any cloud ☆90 · Updated 2 years ago
- LLM fine-tuning and eval ☆346 · Updated last year
- A simple DAG for executing LLM calls and using tools. ☆42 · Updated 2 years ago
- CLI to the Cedana Service ☆57 · Updated 7 months ago
- Data-Driven Evaluation for LLM-Powered Applications ☆515 · Updated 11 months ago
- Python SDK for running evaluations on LLM generated responses ☆294 · Updated 6 months ago
- Legacy project of an analytics platform for LLM-generated content ☆437 · Updated 5 months ago
- Enforce structured output from LLMs 100% of the time ☆249 · Updated last year
- ☆78 · Updated 2 years ago
- Open source AI Agent evaluation framework for web tasks 🐒🍌 ☆327 · Updated last year
- Build elegant data pipelines ☆323 · Updated last year
- A single API for product integrations ☆611 · Updated last year
- Guards and protection agnostic to your model or provider ☆40 · Updated last year
- Prompt engineering, automated. ☆350 · Updated 8 months ago
- Comprehensive Vector Data Tooling. The universal interface for all vector database, datasets and RAG platforms. Easily export, import, ba… ☆264 · Updated last week
- Synthetic Data for LLM Fine-Tuning ☆120 · Updated 2 years ago
- A tool for evaluating LLMs ☆428 · Updated last year
- pykoi: Active learning in one unified interface ☆412 · Updated 3 months ago
- A lightweight logger for machine learning teams to log images and predictions in production. ☆154 · Updated 2 years ago
- Open source fraud and abuse prevention tools ☆214 · Updated last year
- Get 100% uptime, reliability from OpenAI. Handle Rate Limit, Timeout, API, Keys Errors ☆689 · Updated 2 years ago
- Action library for AI Agent ☆230 · Updated 9 months ago
- Exact structure out of any language model completion. ☆514 · Updated 2 years ago
- AI-to-AI Testing | Simulation framework for LLM-based applications ☆136 · Updated 2 years ago
- Open-source AI copilot that lets you chat with your observability data and code 🧙♂️ ☆355 · Updated 8 months ago
- Curated collection of AI dev tools from YC companies, aiming to serve as a reliable starting point for LLM/ML developers ☆190 · Updated 2 years ago
- Promptimize is a prompt engineering evaluation and testing toolkit. ☆487 · Updated 3 weeks ago
- VSCode extension of Quack Companion 💻 Turn your team insights into a portable plug-and-play context for code generation. Alternative to … ☆233 · Updated last year
- Large language model evaluation and workflow framework from Phase AI. ☆459 · Updated 11 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year