[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
☆624 · Jun 24, 2025 · Updated 9 months ago
Alternatives and similar repositories for TrustLLM
Users that are interested in TrustLLM are comparing it to the libraries listed below.
- [ICLR'26, NAACL'25 Demo] Toolkit & Benchmark for evaluating the trustworthiness of generative foundation models. ☆128 · Aug 22, 2025 · Updated 7 months ago
- [NeurIPS 2024] HonestLLM: Toward an Honest and Helpful Large Language Model ☆29 · Jun 10, 2025 · Updated 10 months ago
- Can We Trust Large Language Models?: A Benchmark for Responsible Large Language Models via Toxicity, Bias, and Value-alignment Evaluation ☆26 · Oct 12, 2023 · Updated 2 years ago
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,926 · Apr 2, 2026 · Updated 2 weeks ago
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆82 · Updated this week
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,818 · Apr 3, 2026 · Updated last week
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆346 · Feb 23, 2024 · Updated 2 years ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆915 · Aug 16, 2024 · Updated last year
- [ICLR'25] DataGen: Unified Synthetic Dataset Generation via Large Language Models ☆66 · Mar 8, 2025 · Updated last year
- ☆24 · Dec 8, 2024 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆573 · Apr 4, 2025 · Updated last year
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆314 · Sep 16, 2024 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 ☆571 · Jun 8, 2025 · Updated 10 months ago
- Accepted by ECCV 2024 ☆200 · Oct 15, 2024 · Updated last year
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆91 · May 19, 2024 · Updated last year
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆175 · Dec 18, 2024 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency ☆978 · Aug 14, 2024 · Updated last year
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆436 · Jan 22, 2025 · Updated last year
- The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey. ☆799 · May 8, 2024 · Updated last year
- ☆14 · Feb 26, 2025 · Updated last year
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆111 · Aug 7, 2024 · Updated last year
- A curated list of trustworthy deep learning papers. Daily updating... ☆384 · Apr 7, 2026 · Updated last week
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆244 · Nov 3, 2023 · Updated 2 years ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,601 · Aug 2, 2024 · Updated last year
- ICCV 2023 - AdaptGuard: Defending Against Universal Attacks for Model Adaptation ☆11 · Dec 23, 2023 · Updated 2 years ago
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆30 · Nov 19, 2024 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆707 · Feb 16, 2026 · Updated 2 months ago
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Jun 20, 2025 · Updated 9 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆381 · Jan 23, 2025 · Updated last year
- Repository for the paper (AAAI 2024, Oral): Visual Adversarial Examples Jailbreak Large Language Models ☆270 · May 13, 2024 · Updated last year
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆903 · Jan 16, 2025 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆260 · Sep 24, 2024 · Updated last year
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆200 · Jun 26, 2025 · Updated 9 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. ☆86 · Jan 19, 2025 · Updated last year
- ☆128 · Feb 3, 2025 · Updated last year
- ☆198 · Nov 26, 2023 · Updated 2 years ago
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆341 · Mar 28, 2026 · Updated 2 weeks ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆546 · Jan 17, 2025 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆178 · Apr 23, 2025 · Updated 11 months ago