ethanyanjiali / minChatGPT
A minimal example of aligning language models with RLHF, similar to ChatGPT
☆224 · Updated 2 years ago
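For orientation, here is a minimal sketch of the core RLHF objective that repos like minChatGPT implement: a learned reward model scores the policy's samples, and a KL penalty keeps the policy close to the supervised (SFT) starting point. The `rlhf_reward` helper and tensor names are illustrative, not minChatGPT's actual API.

```python
import torch

def rlhf_reward(rm_score, logprobs_policy, logprobs_sft, kl_coef=0.1):
    """Per-sequence RLHF reward: reward-model score minus a KL penalty.

    rm_score:        (batch,) scalar scores from a learned reward model
    logprobs_policy: (batch, seq) log-probs of sampled tokens under the policy
    logprobs_sft:    (batch, seq) log-probs of the same tokens under the frozen SFT model
    """
    # Sequence-level estimate of KL(policy || SFT): penalizes the tuned policy
    # for drifting too far from the supervised starting point.
    kl = (logprobs_policy - logprobs_sft).sum(dim=-1)
    return rm_score - kl_coef * kl
```

In InstructGPT-style training this reward is then maximized with PPO; the KL coefficient trades reward-model score against staying close to the SFT model.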
Alternatives and similar repositories for minChatGPT
Users interested in minChatGPT are comparing it to the libraries listed below
- ☆98 · Updated 2 years ago
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick (see the Gumbel-Softmax sketch after this list) ☆293 · Updated 2 years ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆87 · Updated 3 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆202 · Updated last year
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- Recurrent Memory Transformer ☆154 · Updated 2 years ago
- Pre-training code for Amber 7B LLM ☆169 · Updated last year
- Implementation of Recurrent Memory Transformer, a NeurIPS 2022 paper, in PyTorch ☆420 · Updated 11 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆230 · Updated last year
- Due to LLaMA's license restrictions, we try to reimplement BLOOM-LoRA (the BLOOM license is much less restrictive; see https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- ☆457 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- RLHF implementation details of OAI's 2019 codebase ☆196 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Updated 5 months ago
- A bagel, with everything. ☆325 · Updated last year
- DSIR large-scale data selection framework for language model training ☆266 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning (see the NEFTune sketch after this list) ☆405 · Updated last year
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆219 · Updated 2 years ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆654 · Updated 11 months ago
- A fine-tuned LLaMA that is good at arithmetic tasks ☆178 · Updated 2 years ago
- Minimal code to train a Large Language Model (LLM). ☆172 · Updated 3 years ago
- Batched LoRAs (see the LoRA sketch after this list) ☆347 · Updated 2 years ago
- ☆95 · Updated 2 years ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆162 · Updated 3 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆258 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆356 · Updated last year
- Code for fine-tuning Platypus family LLMs using LoRA ☆630 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆94 · Updated 2 years ago
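A few of the techniques named above are simple enough to sketch. First, the Gumbel-Softmax trick, which the RLHF-on-nanoGPT repo uses to keep token sampling differentiable. This toy version is ours (PyTorch also ships a built-in `torch.nn.functional.gumbel_softmax`):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    """Differentiable 'soft' one-hot sample from categorical logits.

    Adding Gumbel(0, 1) noise to logits and taking argmax is exact categorical
    sampling; replacing argmax with a temperature-tau softmax makes it differentiable.
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    return F.softmax((logits + gumbel) / tau, dim=-1)
```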
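Second, the NEFTune idea is essentially a one-line change: add uniform noise to token embeddings during instruction finetuning. A minimal sketch following the paper's scaling; the helper name is ours:

```python
import torch

def neftune_noise(embeddings, alpha=5.0):
    """Add uniform noise, scaled by alpha / sqrt(seq_len * dim), to token embeddings.

    embeddings: (batch, seq_len, dim) output of the model's embedding layer,
    perturbed only during training.
    """
    _, seq_len, dim = embeddings.shape
    scale = alpha / (seq_len * dim) ** 0.5
    return embeddings + torch.empty_like(embeddings).uniform_(-1, 1) * scale
```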
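Finally, several repos above (Batched LoRAs, Platypus) build on LoRA, which freezes the pretrained weight and learns a low-rank update. A minimal sketch under our own naming, not any repo's actual layer:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (B @ A)."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # small random init
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: update starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale
```

Only `lora_A` and `lora_B` receive gradients, which is what makes LoRA finetuning cheap enough to batch many adapters against one base model.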