ZzzzMe / GroupCover
icml24
☆14 Updated 10 months ago
Alternatives and similar repositories for GroupCover
Users that are interested in GroupCover are comparing it to the libraries listed below
- ☆45 Updated 9 months ago
- TAOISM: A TEE-based Confidential Heterogeneous Deployment Framework for DNN Models ☆50 Updated last year
- ☆29 Updated 2 years ago
- Statistics of acceptance rate for the top conferences: Oakland, CCS, USENIX Security, NDSS. ☆208 Updated 3 months ago
- Runs several layers of a deep learning model in TrustZone ☆94 Updated last year
- Goten: GPU-Outsourcing Trusted Execution of Neural Network Training and Prediction ☆34 Updated 5 years ago
- ☆10 Updated 2 years ago
- Privacy-preserving Federated Learning with Trusted Execution Environments ☆74 Updated 6 months ago
- Open-source code and data for ShadowNet (S&P Oakland '23) ☆11 Updated last year
- ☆40 Updated 2 years ago
- ☆14 Updated last year
- This repo collects the best papers from the top 4 computer security conferences: IEEE S&P, ACM CCS, USENIX Security, and NDSS. ☆131 Updated 9 months ago
- ☆22 Updated 3 years ago
- ☆49 Updated 5 years ago
- Code for ML Doctor ☆92 Updated last year
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" ☆26 Updated last year
- Academic Cooperation Lab ☆19 Updated 3 months ago
- Private and Reliable Neural Network Inference (CCS '22) ☆22 Updated 2 years ago
- A curated list of Machine Learning Security & Privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… ☆326 Updated 2 months ago
- Privacy-Preserving, Accurate and Efficient Inference for Transformers ☆33 Updated last year
- ☆34 Updated 3 years ago
- ☆28 Updated 2 years ago
- This repo summarizes papers for efficient PPML across protocol, model, and system levels. ☆60 Updated last month
- The artifact for NDSS '25 paper "ASGARD: Protecting On-Device Deep Neural Networks with Virtualization-Based Trusted Execution Environmen… ☆14 Updated 3 months ago
- ☆16 Updated 2 years ago
- Code release for MPCViT, accepted by ICCV 2023 ☆16 Updated last year
- Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware ☆171 Updated 2 years ago
- PFI: Prompt Flow Integrity to Prevent Privilege Escalation in LLM Agents ☆25 Updated 9 months ago
- ☆26 Updated last year
- ☆28 Updated 10 months ago