avidml / evaluating-LLMs
Creating the tools and data sets necessary to evaluate vulnerabilities in LLMs.
⭐26 · Updated 9 months ago
Alternatives and similar repositories for evaluating-LLMs
Users interested in evaluating-LLMs are comparing it to the libraries listed below.
- A curated list of papers & technical articles on AI Quality & Safety · ⭐195 · Updated 8 months ago
- Annotated corpus + evaluation metrics for text anonymisation · ⭐70 · Updated 5 months ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act · ⭐93 · Updated 2 years ago
- Find and fix bugs in natural language machine learning models using adaptive testing. · ⭐188 · Updated last year
- A python package for benchmarking interpretability techniques on Transformers. · ⭐214 · Updated last year
- Codebase release for EMNLP 2023 paper publication · ⭐19 · Updated 3 months ago
- Command Line Interface for Hugging Face Inference Endpoints · ⭐66 · Updated last year
- ⭐226 · Updated 4 years ago
- AI Data Management & Evaluation Platform · ⭐216 · Updated 2 years ago
- Repository for research in the field of Responsible NLP at Meta. · ⭐204 · Updated 7 months ago
- The Foundation Model Transparency Index · ⭐84 · Updated 3 weeks ago
- This repo contains the code for generating the ToxiGen dataset, published at ACL 2022. · ⭐345 · Updated last year
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness · ⭐101 · Updated 11 months ago
- A Python library aimed at dissecting and augmenting NER training data. · ⭐59 · Updated 2 years ago
- [EMNLP 2023 Demo] fabricator - annotating and generating datasets with large language models. · ⭐111 · Updated last year
- This package features data-science related tasks for developing new recognizers for Presidio. It is used for the evaluation of the entire… · ⭐253 · Updated 2 weeks ago
- Fiddler Auditor is a tool to evaluate language models. · ⭐188 · Updated last year
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. · ⭐180 · Updated 3 years ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. · ⭐114 · Updated last week
- Code for the paper "Fishing for Magikarp" · ⭐177 · Updated 7 months ago
- A research python package for detecting, categorizing, and assessing the severity of personal identifiable information (PII) · ⭐94 · Updated last week
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. · ⭐67 · Updated 2 years ago
- AI Verify · ⭐39 · Updated last week
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) · ⭐39 · Updated last week
- A framework for few-shot evaluation of autoregressive language models. · ⭐105 · Updated 2 years ago
- A library to synthesize text datasets using Large Language Models (LLM) · ⭐152 · Updated 2 years ago
- Deliver safe & effective language models · ⭐548 · Updated this week
- ⭐260 · Updated 9 months ago
- Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled corpus and yields massive improvement: "GPL: … · ⭐338 · Updated 2 years ago
- Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central … · ⭐47 · Updated last year