ethanyanjiali / minChatGPT
A minimal example of aligning language models with RLHF, similar to ChatGPT
☆225 · Updated 2 years ago
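For context, the recipe minChatGPT demonstrates is the standard RLHF loop: sample completions, score them with a learned reward model, and update the policy LM with a PPO-style clipped objective while penalizing KL drift from a frozen reference model. The sketch below is a minimal illustration of that update in PyTorch, not minChatGPT's actual code; the Hugging Face-style `.logits` interface, the `reward_model` signature (one scalar per sequence), and all hyperparameters are assumptions.

```python
# Minimal, illustrative RLHF/PPO update (not minChatGPT's actual loop).
import torch
import torch.nn.functional as F

def ppo_step(policy, ref, reward_model, input_ids, optimizer,
             clip_eps=0.2, kl_coef=0.1):
    # Per-token log-probs of the sampled tokens under the current policy.
    logits = policy(input_ids).logits[:, :-1]                 # (B, T-1, V)
    logp = torch.gather(F.log_softmax(logits, dim=-1), 2,
                        input_ids[:, 1:, None]).squeeze(-1)   # (B, T-1)
    with torch.no_grad():
        ref_logits = ref(input_ids).logits[:, :-1]
        ref_logp = torch.gather(F.log_softmax(ref_logits, dim=-1), 2,
                                input_ids[:, 1:, None]).squeeze(-1)
        # Sequence-level reward, penalized by KL drift from the reference.
        reward = reward_model(input_ids) - kl_coef * (logp - ref_logp).sum(-1)
        advantage = reward - reward.mean()                    # crude baseline
    # The frozen reference stands in for the behavior policy here,
    # a single-update simplification; real loops use per-token ratios.
    ratio = torch.exp(logp.sum(-1) - ref_logp.sum(-1))        # (B,)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantage
    loss = -torch.min(ratio * advantage, clipped).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```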
Alternatives and similar repositories for minChatGPT
Users interested in minChatGPT are comparing it to the libraries listed below.
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆89 · Updated 3 years ago
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ☆293 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- Recurrent Memory Transformer ☆155 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆204 · Updated last year
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- Implementation of the Recurrent Memory Transformer (NeurIPS 2022) in PyTorch ☆422 · Updated last year
- Pre-training code for Amber 7B LLM ☆170 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Updated 7 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆231 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆654 · Updated last year
- ☆95 · Updated 2 years ago
- ☆457 · Updated 2 years ago
- RLHF implementation details of OAI's 2019 codebase ☆197 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆473 · Updated last year
- Simple next-token-prediction for RLHF ☆228 · Updated 2 years ago
- Due to LLaMA's license restrictions, an attempt to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning (a minimal sketch of the trick follows this list) ☆409 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated last year
- A bagel, with everything. ☆326 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022); see the sketch after this list ☆549 · Updated 2 years ago
- A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human… ☆221 · Updated last year
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆630 · Updated last year
- JAX implementation of the Llama 2 model ☆216 · Updated 2 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 4 months ago
- Batched LoRAs ☆349 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆269 · Updated last year
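Two of the entries above are simple enough to sketch. First, the NEFTune trick: during instruction finetuning, add uniform noise to the embedding-layer output, scaled by alpha / sqrt(L · d) for sequence length L and embedding dimension d. The snippet below is an illustrative reconstruction of that idea, not the official repository code; the `alpha` default and the function interface are assumptions.

```python
# Illustrative NEFTune-style embedding noise (a reconstruction of the
# paper's trick, not the official code). Apply only during training.
import math
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    # embeddings: (batch, seq_len, dim) output of the embedding layer.
    # alpha = 5.0 is one of the values explored in the paper (assumed
    # here as a sensible default).
    seq_len, dim = embeddings.shape[1], embeddings.shape[2]
    scale = alpha / math.sqrt(seq_len * dim)
    noise = torch.empty_like(embeddings).uniform_(-1.0, 1.0)
    return embeddings + scale * noise
```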
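Second, ALiBi: drop learned position embeddings entirely and instead have each attention head add a fixed linear penalty to its scores that grows with query-key distance. Again an illustrative reconstruction of the published method, not the official implementation; the slope formula below assumes a power-of-two head count.

```python
# Illustrative ALiBi bias (reconstruction of Press et al., ICLR 2022).
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Head slopes form a geometric sequence, e.g. 1/2, 1/4, ..., 1/256
    # for 8 heads (this closed form assumes n_heads is a power of two).
    start = 2.0 ** (-8.0 / n_heads)
    slopes = torch.tensor([start ** (i + 1) for i in range(n_heads)])
    # distance[i, j] = j - i, clamped so past keys get a negative penalty
    # proportional to their distance and future keys contribute zero
    # (they are removed by the causal mask anyway).
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)   # (T, T)
    return slopes[:, None, None] * distance                 # (H, T, T)

# Usage: add the bias to raw attention scores before the softmax, e.g.
#   scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
#   scores = scores + alibi_bias(n_heads, seq_len).to(scores.device)
```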