maxwellreuter / chatgpt-refusals
Datasets and code from our paper, where we use machine learning to predict if ChatGPT will refuse a given prompt.
☆34 · Updated last year
Alternatives and similar repositories for chatgpt-refusals:
Users interested in chatgpt-refusals are comparing it to the libraries listed below.
- ☆22 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆63 · Updated last year
- ☆46 · Updated 2 months ago
- Just a bunch of benchmark logs for different LLMs ☆116 · Updated 5 months ago
- ☆24 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆29 · Updated 3 months ago
- Multi-Domain Expert Learning ☆67 · Updated 11 months ago
- ☆48 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" ☆88 · Updated last month
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆63 · Updated 2 months ago
- A library for squeakily cleaning and filtering language datasets. ☆45 · Updated last year
- ☆27 · Updated 2 weeks ago
- Track the progress of LLM context utilisation ☆53 · Updated 6 months ago
- Finding semantically meaningful and accurate prompts. ☆46 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆33 · Updated 10 months ago
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆49 · Updated 10 months ago
- Code for our paper "PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles" ☆20 · Updated 3 weeks ago
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Updated last year
- A new way to generate large quantities of high-quality synthetic data (on par with GPT-4), with better controllability, at a fraction of … ☆21 · Updated 3 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Testing PaliGemma 2 finetuning on a reasoning dataset ☆18 · Updated 3 weeks ago
- Accompanying code and SEP dataset for the paper "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" ☆46 · Updated 7 months ago
- Command-line script for running inference with models such as falcon-7b-instruct ☆76 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆79 · Updated 10 months ago
- LLM plugin for models hosted by Anyscale Endpoints ☆32 · Updated 8 months ago
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆48 · Updated 6 months ago
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆74 · Updated 3 months ago