thu-coai / Backdoor-Data-Extraction
☆29 · Updated 4 months ago
Alternatives and similar repositories for Backdoor-Data-Extraction
Users interested in Backdoor-Data-Extraction are comparing it to the repositories listed below.
- Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique ☆18 · Updated last year
- Automated Safety Testing of Large Language Models ☆16 · Updated 7 months ago
- Codes for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" ☆11 · Updated 9 months ago
- Multimodal Deepresearcher: Generating Text-Chart Interleaved Reports From Scratch with Agentic Framework ☆19 · Updated last month
- ☆29 · Updated last year
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆52 · Updated 9 months ago
- A project (LLM Sentinel) that showcases NVIDIA's NeMo-Guardrails and LangChain for improving LLM safety ☆11 · Updated 8 months ago
- [NAACL'25] "Revealing the Barriers of Language Agents in Planning" ☆12 · Updated 3 months ago
- A library for red-teaming LLM applications with LLMs. ☆28 · Updated 11 months ago
- A prompt injection game to collect data for robust ML research ☆63 · Updated 7 months ago
- The repository for the paper "Distance between Relevant Information Pieces Causes Bias in Long-Context LLMs" ☆12 · Updated 9 months ago
- ☆105 · Updated 4 months ago
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆43 · Updated 9 months ago
- The official implementation of Preference Data Reward-Augmentation. ☆18 · Updated 4 months ago
- Official repository for Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning [ICLR 2025] ☆48 · Updated 7 months ago
- Official repo of Respond-and-Respond: data, code, and evaluation ☆104 · Updated last year
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆41 · Updated 4 months ago
- This is the official code for the paper "Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation" ☆50 · Updated 7 months ago
- ☆81 · Updated 10 months ago
- Improving Your Model Ranking on Chatbot Arena by Vote Rigging (ICML 2025) ☆22 · Updated 6 months ago
- ☆50 · Updated 11 months ago
- Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆64 · Updated 6 months ago
- ☆49 · Updated 4 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- A novel approach to improve the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆65 · Updated 4 months ago
- ☆35 · Updated 4 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆103 · Updated 5 months ago
- ☆46 · Updated 7 months ago
- ☆40 · Updated 3 months ago