HydroXai / Enhancing-Safety-in-Large-Language-Models

Precision Knowledge Editing (PKE): A novel method to reduce toxicity in LLMs while preserving performance, with robust evaluations and hands-on demonstrations.
11 stars · Nov 26, 2024 · Updated last year

Alternatives and similar repositories for Enhancing-Safety-in-Large-Language-Models

Users interested in Enhancing-Safety-in-Large-Language-Models are comparing it to the libraries listed below.

