EleutherAI / gpt-neo
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
☆8,285 · Updated 3 years ago
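Although the repository itself is no longer actively updated, its trained checkpoints remain usable. As a minimal sketch, assuming the `EleutherAI/gpt-neo-1.3B` checkpoint published on the Hugging Face Hub and the `transformers` library (neither of which is part of this repository), text generation looks like:

```python
# Minimal sketch of running a GPT-Neo checkpoint for text generation.
# Assumes the EleutherAI/gpt-neo-1.3B weights from the Hugging Face Hub
# and the `transformers` library -- neither ships with this repository.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
output = generator(
    "EleutherAI has released",
    max_new_tokens=40,   # cap the length of the continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.9,
)
print(output[0]["generated_text"])
```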
Alternatives and similar repositories for gpt-neo
Users interested in gpt-neo are comparing it to the libraries listed below.
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,343 · Updated 2 months ago
- Model parallel transformers in JAX and Haiku ☆6,355 · Updated 2 years ago
- Repo for external large-scale work ☆6,548 · Updated last year
- GPT-3: Language Models are Few-Shot Learners ☆15,778 · Updated 5 years ago
- Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts ☆3,405 · Updated 2 years ago
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,875 · Updated last month
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,455 · Updated 3 weeks ago
- ☆1,616 · Updated 2 years ago
- A robust Python tool for text-based AI training and generation using GPT-2. ☆1,843 · Updated 2 years ago
- A collection of libraries to optimise AI model performance ☆8,363 · Updated last year
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆5,631 · Updated last year
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,379 · Updated last year
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,252 · Updated this week
- StableLM: Stability AI Language Models ☆15,787 · Updated last year
- ☆4,555 · Updated 2 years ago
- Dataset of GPT-2 outputs for research in detection, biases, and more ☆2,005 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,177 · Updated 2 weeks ago
- Large-scale pretraining for dialogue ☆2,411 · Updated 3 years ago
- Instruct-tune LLaMA on consumer hardware ☆18,983 · Updated last year
- LLaMA: Open and Efficient Foundation Language Models ☆2,801 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,729 · Updated last year
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆24,427 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,236 · Updated last year
- CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. ☆5,154 · Updated last month
- Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch ☆11,338 · Updated last year
- ☆9,013 · Updated last year
- The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API with just a few lines of Python. ☆2,887 · Updated 2 years ago
- High-speed download of LLaMA, Facebook's 65B parameter GPT model ☆4,154 · Updated 2 years ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,861 · Updated 11 months ago
- ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and open source. ☆9,513 · Updated 2 months ago