OSU-NLP-Group / SELM
Symmetric Encryption with Language Models
☆11 · Updated last year
Related projects
Alternatives and complementary repositories for SELM
- ☆17 · Updated last week
- ☆14 · Updated last year
- Generating and validating natural-language explanations. ☆42 · Updated last week
- Implementation of the model from "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch. ☆29 · Updated this week
- Code for the paper "Accessing higher dimensions for unsupervised word translation". ☆21 · Updated last year
- Repository for the research paper "Aligning LLMs to Be Robust Against Prompt Injection". ☆19 · Updated 3 weeks ago
- ☆22 · Updated this week
- 📰 Computing the information content of trained neural networks. ☆21 · Updated 3 years ago
- A collection of papers on automatic fact-checking, particularly of AI-generated content. ☆14 · Updated last year
- ☆43 · Updated 2 months ago
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data". ☆17 · Updated 9 months ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta. ☆13 · Updated 2 weeks ago
- Minimum Description Length probing for neural network representations. ☆16 · Updated this week
- ☆10 · Updated 2 years ago
- ☆21 · Updated 3 weeks ago
- Source code and data for ADEPT: A DEbiasing PrompT Framework (AAAI-23). ☆14 · Updated last year
- Tasks for describing differences between text distributions. ☆16 · Updated 3 months ago
- Code for our EMNLP '22 paper "Fixing Model Bugs with Natural Language Patches". ☆19 · Updated last year
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆57 · Updated 9 months ago
- ☆44 · Updated 5 months ago
- Code for "The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction an…". ☆10 · Updated 6 months ago
- ☆22 · Updated 2 years ago
- Finding semantically meaningful and accurate prompts. ☆46 · Updated last year
- ☆18 · Updated 2 months ago
- Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming". ☆33 · Updated 2 months ago
- Hrrformer: A Neuro-symbolic Self-attention Model (ICML 2023). ☆47 · Updated last year
- Official repository for Dataset Inference for LLMs. ☆23 · Updated 4 months ago
- Code for "Preventing Language Models From Hiding Their Reasoning", which evaluates defenses against LLM steganography. ☆13 · Updated 9 months ago
- PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022). ☆71 · Updated 2 years ago
- ☆75 · Updated last year