lucidrains / PaLM-pytorch
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
☆824 · Updated 3 years ago
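For context, the architectural detail this repo implements is the PaLM paper's "parallel" transformer block, in which the attention and feedforward branches read the same pre-normalized input and both outputs are summed into the residual. Below is a minimal PyTorch sketch of that formulation, not the repo's actual code: it substitutes standard GELU and `nn.MultiheadAttention` for PaLM's SwiGLU and multi-query attention, and omits the causal mask, to stay short.

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Sketch of PaLM's parallel block: y = x + Attn(LN(x)) + FFN(LN(x))."""

    def __init__(self, dim, heads=8, ff_mult=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(dim, dim * ff_mult),
            nn.GELU(),  # PaLM uses SwiGLU; GELU keeps the sketch minimal
            nn.Linear(dim * ff_mult, dim),
        )

    def forward(self, x):
        h = self.norm(x)  # one pre-norm shared by both branches
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        # Attention and feedforward run on the same input and are
        # added to the residual together, rather than sequentially.
        return x + attn_out + self.ff(h)
```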
Alternatives and similar repositories for PaLM-pytorch
Users interested in PaLM-pytorch are comparing it to the repositories listed below.
- Implementation of RETRO, DeepMind's retrieval-based attention net, in Pytorch ☆875 · Updated 2 years ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,006 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022); a sketch of the bias appears after this list ☆543 · Updated 2 years ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,351 · Updated last year
- Open-source pre-training implementation of Google's LaMDA in PyTorch, adding RLHF similar to ChatGPT. ☆473 · Updated last year
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆634 · Updated 2 years ago
- Original implementation of Prompt Tuning from Lester et al., 2021 ☆696 · Updated 8 months ago
- Code for "Learning to summarize from human feedback" ☆1,052 · Updated 2 years ago
- Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpe… ☆436 · Updated 2 years ago
- ☆1,551 · Updated 2 weeks ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆587 · Updated 2 weeks ago
- Implementation of ChatGPT-style RLHF (Reinforcement Learning from Human Feedback) on any generation model in Hugging Face's transformers (blommz-… ☆566 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆462 · Updated 3 years ago
- Language Modeling with the H3 State Space Model ☆518 · Updated 2 years ago
- An open-source implementation of Google's PaLM models ☆817 · Updated last year
- ☆1,246 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆564 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch ☆654 · Updated 10 months ago
- Crosslingual Generalization through Multitask Finetuning ☆537 · Updated last year
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆457 · Updated 2 years ago
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,063 · Updated last year
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,374 · Updated 2 years ago
- An open collection of implementation tips, tricks, and resources for training large language models ☆486 · Updated 2 years ago
- Ask Me Anything language model prompting ☆545 · Updated 2 years ago
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆977 · Updated last year
- Fast & simple repository for pre-training and fine-tuning T5-style models ☆1,013 · Updated last year
- A modular RL library to fine-tune language models to human preferences ☆2,366 · Updated last year
- A prize for finding tasks that cause large language models to show inverse scaling ☆616 · Updated 2 years ago
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ☆293 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,427 · Updated last year
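As referenced in the ALiBi entry above, here is a minimal sketch of that method (Press et al., ICLR 2022): instead of positional embeddings, each attention head adds a fixed linear penalty, proportional to query-key distance, to its pre-softmax attention scores. This is an illustration rather than the official code, and the slope schedule assumes a power-of-two head count as in the paper.

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Per-head linear distance penalties added to attention logits."""
    # Head slopes follow the paper's geometric sequence 2^(-8/n), 2^(-16/n), ...
    # (valid when num_heads is a power of two).
    slopes = torch.tensor(
        [2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)]
    )
    pos = torch.arange(seq_len)
    # distance[i, j] = j - i, clamped so keys at or before the query get a
    # non-positive bias; future positions are masked in causal attention anyway.
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)
    return slopes[:, None, None] * distance[None, :, :]  # (heads, seq, seq)

# Usage: add the bias to the scaled dot-product logits before softmax, e.g.
# scores = (q @ k.transpose(-2, -1)) * head_dim ** -0.5 + alibi_bias(heads, n)
```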