alipay / private_llm
☆35 · Updated last year
Alternatives and similar repositories for private_llm
Users interested in private_llm are comparing it to the libraries listed below.
- Hide and Seek (HaS): A Framework for Prompt Privacy Protection ☆43 · Updated last year
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆43 · Updated last year
- Shepherd: A foundational framework enabling federated instruction tuning for large language models ☆235 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆78 · Updated last month
- Split Learning Simulation Framework for LLMs ☆23 · Updated 9 months ago
- Official repo for the paper: Recovering Private Text in Federated Learning of Language Models (NeurIPS 2022) ☆56 · Updated 2 years ago
- Official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- Codebase for decoding compressed trust. ☆24 · Updated last year
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆36 · Updated 11 months ago
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆24 · Updated last month
- ☆12 · Updated 2 years ago
- Federated Learning for LLMs. ☆219 · Updated 6 months ago
- ☆18 · Updated last year
- Federated Learning Framework Benchmark (UniFed) ☆49 · Updated 2 years ago
- ☆26 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆85 · Updated last year
- A curated list of trustworthy Generative AI papers. Daily updating... ☆73 · Updated 9 months ago
- ☆18 · Updated 2 months ago
- A survey of privacy problems in Large Language Models (LLMs). Contains summaries of the corresponding papers along with relevant code. ☆67 · Updated last year
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" ☆23 · Updated 2 years ago
- ☆1 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆147 · Updated last year
- Implementation for PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs ☆23 · Updated last year
- A MoE impl for PyTorch, [ATC'23] SmartMoE ☆64 · Updated last year
- ☆20 · Updated last year
- THU-AIR Vertical Federated Learning: a general, extensible, and lightweight framework ☆92 · Updated 11 months ago
- ☆56 · Updated 4 months ago
- ☆21 · Updated last year