maxwellreuter / chatgpt-refusals
Datasets and code from our paper, where we use machine learning to predict whether ChatGPT will refuse a given prompt.
☆38 · Updated last year
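For context, the prediction task described above boils down to binary text classification over prompts. Below is a minimal sketch of that kind of setup; the dataset file, column names, and classifier choice are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch: train a text classifier to predict whether ChatGPT
# will refuse a prompt. The CSV path, column names, and model choice
# here are hypothetical and not taken from the chatgpt-refusals repo.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical dataset: one prompt per row plus a binary "refused" label.
df = pd.read_csv("prompts_with_refusal_labels.csv")  # columns: prompt, refused

X_train, X_test, y_train, y_test = train_test_split(
    df["prompt"], df["refused"],
    test_size=0.2, random_state=0, stratify=df["refused"],
)

# TF-IDF word/bigram features feeding a logistic-regression classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# Held-out evaluation and a single-prompt prediction.
print(classification_report(y_test, clf.predict(X_test)))
print(clf.predict(["How do I pick a lock?"]))  # e.g. [1] if a refusal is predicted
```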
Alternatives and similar repositories for chatgpt-refusals
Users who are interested in chatgpt-refusals are comparing it to the libraries listed below.
- Command-line script for inference from models such as falcon-7b-instruct · ☆75 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models · ☆69 · Updated last year
- Hosting the JSON for the GPT4 Tokenizer · ☆64 · Updated 2 years ago
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." · ☆65 · Updated 2 years ago
- Multi-Domain Expert Learning · ☆67 · Updated last year
- A pipeline that uses API calls to agnostically convert unstructured data into structured training data · ☆30 · Updated 9 months ago
- ☆47 · Updated last year
- ☆22 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training adapts the model's context limit · ☆63 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. · ☆47 · Updated 2 years ago
- Chat Markup Language conversation library · ☆55 · Updated last year
- Resources related to the EACL 2023 paper "SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domain…" · ☆52 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen · ☆100 · Updated last year
- Track the progress of LLM context utilisation · ☆55 · Updated 3 months ago
- A new way to generate large quantities of high-quality synthetic data (on par with GPT-4), with better controllability, at a fraction of … · ☆22 · Updated 9 months ago
- Reimplementation of the task-generation part of the Alpaca paper · ☆119 · Updated 2 years ago
- ☆15 · Updated 3 months ago
- Documentation effort for the BookCorpus dataset · ☆34 · Updated 4 years ago
- ☆32 · Updated 2 years ago
- ☆94 · Updated 6 months ago
- ☆84 · Updated last year
- Accompanying code and the SEP dataset for the paper "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" · ☆54 · Updated 4 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy · ☆101 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tuning · ☆82 · Updated last year
- The data-processing pipeline for the Koala chatbot language model · ☆117 · Updated 2 years ago
- Minimal scripts for 24 GB VRAM GPUs: training, inference, whatever · ☆41 · Updated last month
- A repository of projects and datasets under active development by Alignment Lab AI · ☆22 · Updated last year
- ☆57 · Updated last year
- 🐜🔧 A minimalistic tool to fine-tune your LLMs · ☆18 · Updated last year
- Code for "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs" · ☆52 · Updated 4 months ago