Jometeorie / KnowledgeSpread
☆33 · Updated 9 months ago
Alternatives and similar repositories for KnowledgeSpread
Users interested in KnowledgeSpread are comparing it to the repositories listed below.
- ☆99 · Updated 3 months ago
- ☆47 · Updated 5 months ago
- ☆28 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆68 · Updated this week
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆82 · Updated 2 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆156 · Updated 4 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆29 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆77 · Updated 10 months ago
- The repository for the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source… ☆58 · Updated 6 months ago
- ☆50 · Updated last year
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆98 · Updated 5 months ago
- Official implementation of ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX… ☆78 · Updated last year
- Toolkit for evaluating the trustworthiness of generative foundation models. ☆107 · Updated this week
- ☆20 · Updated 9 months ago
- ☆41 · Updated 10 months ago
- [FCS'24] LVLM Safety paper ☆18 · Updated 7 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆64 · Updated last month
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆65 · Updated 5 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆106 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆81 · Updated 4 months ago
- Implementation of the MATRIX framework (ICML 2024) ☆57 · Updated last year
- ☆46 · Updated 2 months ago
- JAILJUDGE: A comprehensive evaluation benchmark which includes a wide range of risk scenarios with complex malicious prompts (e.g., synth… ☆50 · Updated 7 months ago
- ☆21 · Updated 9 months ago
- ☆96 · Updated 6 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆143 · Updated 3 months ago
- ☆44 · Updated 5 months ago