google-deepmind / synthid-text
☆769 · Updated 6 months ago
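For context on what this repository provides: SynthID Text watermarking is also exposed through the Hugging Face Transformers generation API. The snippet below is a minimal sketch of that usage, assuming a recent transformers release that ships `SynthIDTextWatermarkingConfig`; the model checkpoint and watermark key values are illustrative placeholders, not values taken from this listing.

```python
# Minimal sketch: watermarking generated text with SynthID Text via the
# Hugging Face Transformers integration (assumes transformers >= 4.46).
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

# Illustrative checkpoint; any causal LM supported by generate() should work.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

# Watermarking configuration: `keys` seed the watermarking g-function and
# `ngram_len` sets how many preceding tokens condition each watermark step.
# The key values here are placeholders for illustration only.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],
    ngram_len=5,
)

inputs = tokenizer(["Once upon a time, "], return_tensors="pt", padding=True)
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```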
Alternatives and similar repositories for synthid-text
Users interested in synthid-text are comparing it to the libraries listed below.
- Open and efficient video and image watermarking · ☆582 · Updated last week
- Official implementation of the paper "The Stable Signature: Rooting Watermarks in Latent Diffusion Models" · ☆494 · Updated last month
- Humanity's Last Exam · ☆1,352 · Updated 4 months ago
- [ICML 2024] Binoculars: Zero-Shot Detection of LLM-Generated Text · ☆344 · Updated last year
- Qwen3Guard is a multilingual guardrail model series developed by the Qwen team at Alibaba Cloud. · ☆411 · Updated 3 months ago
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… · ☆520 · Updated 11 months ago
- Model Activity Visualiser · ☆521 · Updated 10 months ago
- open source interpretability platform 🧠 · ☆689 · Updated this week
- LiveBench: A Challenging, Contamination-Free LLM Benchmark · ☆1,032 · Updated this week
- ☆659 · Updated 4 months ago
- Gemma 2 optimized for your local machine. · ☆378 · Updated last year
- ☆2,577 · Updated this week
- Build datasets using natural language · ☆566 · Updated 4 months ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] · ☆335 · Updated 2 months ago
- Official inference library for pre-processing of Mistral models · ☆849 · Updated last week
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. · ☆308 · Updated this week
- ☆237 · Updated 2 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! · ☆349 · Updated 3 months ago
- DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation · ☆791 · Updated 7 months ago
- Code for the paper "InvisMark: Invisible and Robust Watermarking for AI-generated Image Provenance" · ☆46 · Updated 8 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆847 · Updated last year
- Prompt-to-Leaderboard · ☆271 · Updated 9 months ago
- All credits go to HuggingFace's Daily AI papers (https://huggingface.co/papers) and the research community. 🔉Audio summaries here (https… · ☆211 · Updated 3 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering · ☆1,301 · Updated 3 weeks ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs · ☆105 · Updated last year
- Collection of scripts and notebooks for OpenAI's latest GPT OSS models · ☆496 · Updated 5 months ago
- ☆259 · Updated last month
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] · ☆519 · Updated 10 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. · ☆130 · Updated 4 months ago
- This repository includes the official implementation of OpenScholar: Synthesizing Scientific Literature with Retrieval-augmented LMs. · ☆743 · Updated 5 months ago