lucidrains / PaLM-pytorch
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
★821 · Updated 2 years ago
Alternatives and similar repositories for PaLM-pytorch:
Users interested in PaLM-pytorch are comparing it to the libraries listed below
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ★856 · Updated last year
- A modular RL library to fine-tune language models to human preferences ★2,234 · Updated 9 months ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ★1,223 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ★458 · Updated 2 years ago
- Cramming the training of a (BERT-type) language model into limited compute. ★1,300 · Updated 6 months ago
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ★627 · Updated last year
- Language Modeling with the H3 State Space Model ★515 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022); a minimal sketch of ALiBi follows this list ★507 · Updated last year
- ★1,484 · Updated last month
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ★981 · Updated 4 months ago
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ★434 · Updated last year
- Convolutions for Sequence Modeling ★870 · Updated 6 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch ★628 · Updated 3 months ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT. ★467 · Updated 9 months ago
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick ★288 · Updated last year
- Implementation of ChatGPT RLHF (Reinforcement Learning with Human Feedback) on any generation model in huggingface's transformer (blommz-… ★547 · Updated 7 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ★786 · Updated 5 months ago
- Code repository for supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03… ★519 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ★518 · Updated 2 months ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ★564 · Updated this week
- Ask Me Anything language model prompting ★542 · Updated last year
- Code for fine-tuning the Platypus family of LLMs using LoRA; a minimal LoRA sketch follows this list ★623 · Updated 10 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ★812 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models ★980 · Updated 3 months ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ★433 · Updated last year
- Expanding natural instructions ★963 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ★395 · Updated 3 weeks ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ★1,641 · Updated last year
- A prize for finding tasks that cause large language models to show inverse scaling ★600 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ★305 · Updated last year
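Two of the techniques named above are compact enough to sketch inline. First, ALiBi (the ICLR 2022 repo in the list): instead of positional embeddings, it adds a head-specific linear penalty to each causal attention logit, proportional to the distance between query and key. A minimal PyTorch sketch, assuming a power-of-two head count; the function names and shapes are illustrative, not the linked repo's actual API:

```python
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric head slopes 2^(-8/n), 2^(-16/n), ... (exact when num_heads is a power of 2).
    return torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # bias[h, i, j] = -slope[h] * (i - j) for past keys j <= i; future keys get -inf.
    pos = torch.arange(seq_len)
    dist = (pos[:, None] - pos[None, :]).clamp(min=0)           # query-key distance
    bias = -alibi_slopes(num_heads)[:, None, None] * dist       # (heads, seq, seq)
    future = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    return bias.masked_fill(future, float("-inf"))

# Usage: add the bias to the raw attention logits before the softmax, e.g.
#   logits = q @ k.transpose(-2, -1) / d ** 0.5 + alibi_bias(heads, seq_len)
```

Because the penalty grows linearly with distance, a model trained this way can extrapolate to sequences longer than those seen during training, which is the method's headline result.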
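Second, the LoRA adapters used by the Platypus fine-tuning repo: the pretrained weight is frozen and only a low-rank update B·A, scaled by alpha/r, is trained. A minimal sketch under conventional LoRA hyperparameters; the class and initialization below are illustrative, not Platypus's actual code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                     # freeze pretrained weight
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))   # zero init: no change at step 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scale * (x A^T) B^T
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)
```

Only `lora_a` and `lora_b` receive gradients, so optimizer state and checkpoints stay small; at inference the update can be merged into the base weight as W += scale * B @ A, removing the extra matmuls entirely.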