ethanyanjiali / minChatGPT
A minimal example of aligning language models with RLHF, similar to ChatGPT
☆224 · Updated 2 years ago
Alternatives and similar repositories for minChatGPT
Users interested in minChatGPT are comparing it to the libraries listed below.
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆89 · Updated 3 years ago
- ☆98 · Updated 2 years ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- Due to the restrictions of LLaMA, we try to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- Recurrent Memory Transformer ☆154 · Updated 2 years ago
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick ☆293 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆202 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Pre-training code for Amber 7B LLM ☆170 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆231 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆655 · Updated 11 months ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆422 · Updated 11 months ago
- Minimal code to train a Large Language Model (LLM). ☆172 · Updated 3 years ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆406 · Updated last year
- Batched LoRAs ☆347 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆342 · Updated 5 months ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆631 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- RLHF implementation details of OpenAI's 2019 codebase ☆197 · Updated last year
- Scalable PaLM implementation in PyTorch ☆189 · Updated 3 years ago
- ☆457 · Updated 2 years ago
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆206 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆210 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆469 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆168 · Updated 10 months ago
- An open collection of implementation tips, tricks and resources for training large language models ☆490 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- A bagel, with everything. ☆325 · Updated last year
- DSIR large-scale data selection framework for language model training ☆266 · Updated last year
- Large-scale distributed model training strategy with Colossal-AI and Lightning AI ☆56 · Updated 2 years ago