ckkissane / rlhf-shakespeare
Shakespeare transformer fine-tuned to generate positive sentiment samples using RLHF
☆10 · Updated 2 years ago
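The header above describes policy-gradient (RLHF-style) fine-tuning of a language model toward a positive-sentiment reward. As a minimal, self-contained sketch of that idea, the toy example below trains a single-token "policy" with REINFORCE against a hand-written sentiment reward; the vocabulary, reward table, and training loop are illustrative assumptions, not the repository's actual code (which fine-tunes a Shakespeare transformer).

```python
import math
import random

# Toy vocabulary with a crude "sentiment" reward: +1 for positive words, -1 for negative.
VOCAB = ["sweet", "love", "joy", "woe", "grief", "death"]
REWARD = {"sweet": 1.0, "love": 1.0, "joy": 1.0, "woe": -1.0, "grief": -1.0, "death": -1.0}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def train(steps=2000, lr=0.1, seed=0):
    """REINFORCE: sample a token, score it with the reward, nudge its log-probability."""
    rng = random.Random(seed)
    logits = [0.0] * len(VOCAB)  # start from a uniform policy
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(VOCAB)), weights=probs)[0]
        r = REWARD[VOCAB[i]]
        # grad of log pi(i) w.r.t. the logits is one_hot(i) - probs
        for j in range(len(VOCAB)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad
    return softmax(logits)

probs = train()
positive_mass = sum(p for w, p in zip(VOCAB, probs) if REWARD[w] > 0)
print(round(positive_mass, 2))
```

After training, nearly all probability mass sits on positive-sentiment tokens. Real RLHF setups replace the reward table with a learned reward model and add a KL penalty to a reference policy (as in PPO-based pipelines), which this sketch omits for brevity.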
Alternatives and similar repositories for rlhf-shakespeare:
Users interested in rlhf-shakespeare are comparing it to the repositories listed below.
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- ☆27 · Updated this week
- ☆24 · Updated last year
- Minimum Description Length probing for neural network representations ☆18 · Updated 3 weeks ago
- ☆17 · Updated 4 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 5 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Code for Adaptive Data Optimization ☆20 · Updated 2 months ago
- ☆48 · Updated 3 months ago
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆41 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated last year
- ☆25 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated 8 months ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated 11 months ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated 9 months ago
- [ACL 2023] Gradient Ascent Post-training Enhances Language Model Generalization ☆29 · Updated 5 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Fl… ☆65 · Updated 6 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training adapts the model's context limit ☆63 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge ☆55 · Updated 10 months ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- Code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆31 · Updated last year
- Source code of "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 4 months ago
- ☆28 · Updated last year
- An unofficial implementation of the SOLAR-10.7B model and the newly proposed interlocked-DUS (iDUS), with experiment details ☆12 · Updated 11 months ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated last year