haizelabs / sphynx
Sphynx Hallucination Induction
☆53 · Updated 6 months ago
Alternatives and similar repositories for sphynx
Users interested in sphynx are comparing it to the libraries listed below.
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆95 · Updated 3 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆267 · Updated 3 weeks ago
- Red-Teaming Language Models with DSPy. ☆203 · Updated 5 months ago
- A DSPy-based implementation of the tree-of-thoughts method (Yao et al., 2023) for generating persuasive arguments. ☆87 · Updated 10 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆100 · Updated last week
- ☆24 · Updated 9 months ago
- Just a bunch of benchmark logs for different LLMs. ☆119 · Updated last year
- Use the OpenAI Batch tool to make async batch requests to the OpenAI API. ☆99 · Updated last year
- ⚖️ Awesome LLM Judges ⚖️ ☆108 · Updated 3 months ago
- ☆64 · Updated 2 months ago
- A framework for optimizing DSPy programs with RL. ☆94 · Updated this week
- Chat Markup Language conversation library. ☆55 · Updated last year
- Small, simple agent task environments for training and evaluation. ☆18 · Updated 9 months ago
- Synthetic Data for LLM Fine-Tuning. ☆120 · Updated last year
- Verbosity control for AI agents. ☆64 · Updated last year
- ☆47 · Updated last year
- ☆87 · Updated 6 months ago
- A better way of testing, inspecting, and analyzing AI agent traces. ☆39 · Updated 3 weeks ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna. ☆55 · Updated 6 months ago
- Synthetic data derived via templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated 5 months ago
- 🦾💻🌐 Distributed training & serverless inference at scale on RunPod. ☆18 · Updated last year
- Train your own SOTA deductive reasoning model. ☆103 · Updated 4 months ago
- smolLM with the Entropix sampler in PyTorch. ☆150 · Updated 9 months ago
- Functional Benchmarks and the Reasoning Gap. ☆88 · Updated 10 months ago
- j1-micro (1.7B) and j1-nano (600M) are absurdly tiny but mighty reward models. ☆94 · Updated 2 weeks ago
- An attribution library for LLMs. ☆42 · Updated 10 months ago
- ☆130 · Updated 4 months ago
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles. ☆53 · Updated 2 months ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔️. ☆32 · Updated 3 months ago
- A framework for orchestrating AI agents using a Mermaid graph. ☆77 · Updated last year