HKUST-KnowComp / PrivaCI-Bench
☆20 · Updated 9 months ago
Alternatives and similar repositories for PrivaCI-Bench
Users interested in PrivaCI-Bench are comparing it to the libraries listed below.
- Code repo for ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆143 · Updated last year
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆47 · Updated 2 years ago
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) ☆27 · Updated 5 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆160 · Updated last year
- LLM Unlearning ☆181 · Updated 2 years ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆124 · Updated last year
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆183 · Updated last year
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents ☆56 · Updated 11 months ago
- Code and data for the paper: On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents ☆41 · Updated last month
- Using Explanations as a Tool for Advanced LLMs ☆69 · Updated last year
- [NAACL 25 main] Awesome LLM Causal Reasoning is a collection of LLM-based causal reasoning works, including papers, code, and datasets. ☆113 · Updated 4 months ago
- ☆24 · Updated last year
- Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization ☆21 · Updated last year
- Official Implementation of Dynamic LLM-Agent Network: An LLM-Agent Collaboration Framework with Agent Team Optimization ☆192 · Updated last year
- ☆29 · Updated last year
- JAILJUDGE: A comprehensive evaluation benchmark which includes a wide range of risk scenarios with complex malicious prompts (e.g., synth… ☆58 · Updated last year
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆104 · Updated last year
- [ICLR'26, NAACL'25 Demo] Toolkit & Benchmark for evaluating the trustworthiness of generative foundation models. ☆125 · Updated 5 months ago
- ☆89 · Updated 5 months ago
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆80 · Updated last year
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆126 · Updated 10 months ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆46 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- Code for the paper Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding ☆88 · Updated last year
- ☆158 · Updated 2 years ago
- ☆48 · Updated 11 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆98 · Updated 3 weeks ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆60 · Updated last year
- LLM-Based Human-Agent Collaboration and Interaction Systems: A Survey | Awesome Human-Agent Collaboration | Human-AI Collaboration ☆185 · Updated last week
- Data and code for the Corr2Cause paper (ICLR 2024) ☆113 · Updated last year