alipay / private_llm
☆35 · Updated last year
Alternatives and similar repositories for private_llm
Users interested in private_llm are comparing it to the repositories listed below.
- Shepherd: A foundational framework enabling federated instruction tuning for large language models ☆249 · Updated 2 years ago
- Federated Learning for LLMs. ☆241 · Updated last month
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆46 · Updated last year
- FedJudge: Federated Legal Large Language Model ☆37 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆93 · Updated 7 months ago
- Implementation for PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs ☆24 · Updated last year
- BeaverTails: a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆170 · Updated 2 years ago
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆236 · Updated 2 years ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆218 · Updated last year
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆61 · Updated 2 years ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆37 · Updated last year
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated last year
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models", which aims to protect the IP of open-source… ☆70 · Updated 10 months ago
- ☆53 · Updated 9 months ago
- Codebase for decoding compressed trust. ☆25 · Updated last year
- Flames: a highly adversarial Chinese benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group. ☆62 · Updated last year
- ☆19 · Updated 2 years ago
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs. ☆44 · Updated last year
- LLM Unlearning ☆178 · Updated 2 years ago
- On Memorization of Large Language Models in Logical Reasoning ☆72 · Updated 8 months ago
- ☆51 · Updated last year
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆70 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆253 · Updated last year
- ☆23 · Updated last year
- FamilyTool benchmark ☆11 · Updated 3 months ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of the corresponding paper along with relevant code. ☆68 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆186 · Updated last year