avidml / evaluating-LLMs
Creating the tools and data sets necessary to evaluate vulnerabilities in LLMs.
☆27 · Updated 10 months ago
Alternatives and similar repositories for evaluating-LLMs
Users interested in evaluating-LLMs are comparing it to the libraries listed below.
- Find and fix bugs in natural language machine learning models using adaptive testing. ☆188 · Updated last year
- A curated list of papers & technical articles on AI Quality & Safety ☆200 · Updated 9 months ago
- AI Data Management & Evaluation Platform ☆215 · Updated 2 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆67 · Updated 2 years ago
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness ☆103 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated 2 years ago
- Notebooks for training universal 0-shot classifiers on many different tasks ☆139 · Updated last year
- Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central … ☆48 · Updated last year
- Command Line Interface for Hugging Face Inference Endpoints ☆65 · Updated last year
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆182 · Updated 3 years ago
- ☆261 · Updated 10 months ago
- This repo contains the code for generating the ToxiGen dataset, published at ACL 2022. ☆345 · Updated last year
- ☆153 · Updated 3 years ago
- Annotated corpus + evaluation metrics for text anonymisation ☆70 · Updated 2 weeks ago
- ☆228 · Updated 4 years ago
- Repository for research in the field of Responsible NLP at Meta. ☆205 · Updated this week
- [EMNLP 2023 Demo] fabricator - annotating and generating datasets with large language models. ☆111 · Updated last year
- The Foundation Model Transparency Index ☆85 · Updated last month
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆39 · Updated 2 weeks ago
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- Deliver safe & effective language models ☆553 · Updated 2 weeks ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆117 · Updated this week
- A python package for benchmarking interpretability techniques on Transformers. ☆215 · Updated last year
- FastFit ⚡ When LLMs are Unfit Use FastFit ⚡ Fast and Effective Text Classification with Many Classes ☆213 · Updated 4 months ago
- A library to synthesize text datasets using Large Language Models (LLM) ☆152 · Updated 3 years ago
- Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming" ☆54 · Updated last year
- RATransformers - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ☆42 · Updated 3 years ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆155 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆192 · Updated 6 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆106 · Updated 2 years ago