conceptofmind / LaMDA-rlhf-pytorch
Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT.
☆472 · Updated last year
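The project's headline combination is a pretrained language model plus RLHF. As a rough, hypothetical sketch of that idea (not this repo's actual training loop), the plain-PyTorch snippet below shows the core pattern: a reward model scores sampled continuations, and the policy is updated to raise the log-probability of high-reward samples. All module and variable names here are invented for illustration, and this uses simple REINFORCE with a baseline; ChatGPT-style RLHF instead uses PPO with a KL penalty against a frozen reference model.

```python
import torch
import torch.nn as nn

# Toy reward model: maps a token sequence to a scalar score.
class RewardModel(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens):                  # tokens: (batch, seq)
        h = self.embed(tokens).mean(dim=1)      # mean-pool over the sequence
        return self.head(h).squeeze(-1)         # (batch,) scalar rewards

# Toy policy: a language-model head producing next-token logits.
class Policy(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.lm_head(self.embed(tokens))  # (batch, seq, vocab)

vocab_size = 100
policy = Policy(vocab_size)
reward_model = RewardModel(vocab_size)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One REINFORCE-style step: sample tokens, score them, reinforce log-probs.
prompts = torch.randint(0, vocab_size, (4, 8))        # (batch, seq)
logits = policy(prompts)
dist = torch.distributions.Categorical(logits=logits)
samples = dist.sample()                               # sampled continuations
rewards = reward_model(samples).detach()              # scalar reward per sample
log_probs = dist.log_prob(samples).sum(dim=1)         # (batch,)
loss = -(log_probs * (rewards - rewards.mean())).mean()  # baseline-subtracted

opt.zero_grad()
loss.backward()
opt.step()
print(f"policy loss: {loss.item():.4f}")
```

In a real setup the prompts come from a dataset, the reward model is first trained on human preference comparisons, and continuations are generated autoregressively rather than sampled in one shot.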
Alternatives and similar repositories for LaMDA-rlhf-pytorch:
Users interested in LaMDA-rlhf-pytorch are comparing it to the repositories listed below.
- Crosslingual Generalization through Multitask Finetuning ☆532 · Updated 7 months ago
- Due to LLaMA's license restrictions, an attempt to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is at https://huggingface.co/spaces/bigs… ☆184 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆820 · Updated 2 years ago
- Repo for fine-tuning Causal LLMs ☆454 · Updated last year
- Fast Inference Solutions for BLOOM ☆561 · Updated 6 months ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆864 · Updated last year
- Code for "Learning to summarize from human feedback" ☆1,022 · Updated last year
- Ask Me Anything language model prompting ☆548 · Updated last year
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- [NeurIPS 2023] RRHF & Wombat ☆806 · Updated last year
- Implementation of ChatGPT-style RLHF (Reinforcement Learning from Human Feedback) on any generation model in Hugging Face's transformers (bloomz-… ☆558 · Updated 11 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆993 · Updated 9 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆302 · Updated last year
- Code for fine-tuning Platypus-family LLMs using LoRA (see the minimal LoRA sketch after this list) ☆629 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆807 · Updated 10 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆641 · Updated 4 months ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆463 · Updated 2 years ago
- Multi-language Enhanced LLaMA ☆301 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆351 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆309 · Updated 2 years ago
- LOMO: LOw-Memory Optimization ☆985 · Updated 10 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆820 · Updated 2 years ago
- Guide: Finetune GPT2-XL (1.5 billion parameters) and finetune GPT-NEO (2.7B) on a single GPU with Hugging Face Transformers using DeepSpe… ☆437 · Updated last year
- Expanding natural instructions ☆996 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ☆289 · Updated last year
- Salesforce open-source LLMs with 8k sequence length ☆717 · Updated 3 months ago
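Several entries above (BLOOM-LoRA, the Platypus fine-tuning code) center on LoRA, which freezes the pretrained weights and trains only a low-rank additive update. Below is a minimal sketch of a LoRA linear layer, with illustrative names and default rank/alpha values, assuming nothing about those repositories' actual implementations:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (B A) x."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)          # freeze pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r                        # standard LoRA scaling

    def forward(self, x):
        # Low-rank path: x -> A^T -> B^T, scaled and added to the frozen output.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(64, 64)
x = torch.randn(2, 10, 64)
out = layer(x)                                          # (2, 10, 64)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")
```

Because lora_B starts at zero, the layer initially behaves exactly like the frozen base layer; only the small low-rank matrices receive gradients, which is what makes LoRA-style fine-tuning of billion-parameter models feasible on a single GPU.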