facebookresearch / LWE-benchmarking
This repository contains code to generate and preprocess Learning with Errors (LWE) data, along with implementations of four LWE attacks: uSVP, SALSA, Cool&Cruel, and Dual Hybrid Meet-in-the-Middle (MitM). We invite contributors to reproduce our results, improve on these methods, and/or suggest new concrete attacks on LWE.
☆54 · Updated 2 months ago
Alternatives and similar repositories for LWE-benchmarking
Users interested in LWE-benchmarking are comparing it to the libraries listed below.
- ☆27 · Updated 3 months ago
- Public repository containing METR's DVC pipeline for eval data analysis ☆91 · Updated 4 months ago
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps ☆60 · Updated last year
- ☆18 · Updated this week
- Because it's there. ☆16 · Updated 10 months ago
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆53 · Updated 3 months ago
- Zero-trust AI APIs for easy and private consumption of open-source LLMs ☆40 · Updated last year
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Updated last year
- Rust implementation of micrograd ☆52 · Updated last year
- Open source platform for the privacy-preserving machine learning lifecycle ☆17 · Updated last year
- A method for steering LLMs to better follow instructions ☆48 · Updated this week
- Analysis code for the paper "SciArena: An Open Evaluation Platform for Foundation Models in Scientific Literature Tasks" ☆45 · Updated this week
- ☆68 · Updated 8 months ago
- The AILuminate v1.1 benchmark suite is an AI risk assessment benchmark developed with broad involvement from leading AI companies, academ… ☆22 · Updated 2 months ago
- Code and data for the paper "Why think step by step? Reasoning emerges from the locality of experience" ☆61 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆11 · Updated last year
- An intelligent code optimization system leveraging AI analysis, automated refactoring, and test generation. Built with DSPy and Gradio, i… ☆20 · Updated 6 months ago
- Transformer GPU VRAM estimator ☆66 · Updated last year
- Losslessly encode text natively with arithmetic coding and HuggingFace Transformers ☆76 · Updated last year
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆34 · Updated 3 months ago
- A Python command-line tool to download & manage MLX AI models from Hugging Face. ☆18 · Updated 11 months ago
- Pivotal Token Search ☆119 · Updated 3 weeks ago
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆40 · Updated last year
- ☆26 · Updated last year
- ☆41 · Updated last year
- Heavyweight Python dynamic analysis framework ☆17 · Updated last year
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆32 · Updated last year
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated 9 months ago
- An automated tool for discovering insights from research paper corpora ☆138 · Updated last year
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆179 · Updated 2 months ago