EleutherAI / gpt-neo
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
☆8,285 · Updated 3 years ago
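For quick experimentation, the released GPT-Neo checkpoints can also be loaded through the Hugging Face transformers library rather than the repo's mesh-tensorflow training code. A minimal sketch, assuming transformers and torch are installed and using the publicly hosted EleutherAI/gpt-neo-1.3B checkpoint:

```python
# Minimal sketch: sampling from a released GPT-Neo checkpoint via Hugging Face
# transformers (an assumption of this example; the repo itself trains models
# with mesh-tensorflow on TPUs).
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample up to 60 tokens; do_sample=True gives varied, non-greedy output.
output_ids = model.generate(input_ids, do_sample=True, max_length=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```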
Alternatives and similar repositories for gpt-neo
Users interested in gpt-neo are comparing it to the libraries listed below:
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,354 · Updated 2 weeks ago
- Model parallel transformers in JAX and Haiku ☆6,355 · Updated 2 years ago
- Repo for external large-scale work ☆6,547 · Updated last year
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,462 · Updated last month
- ☆2,920 · Updated 2 weeks ago
- ☆1,622 · Updated 2 years ago
- GPT-3: Language Models are Few-Shot Learners ☆15,775 · Updated 5 years ago
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆24,495 · Updated last year
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in PyTorch ☆5,631 · Updated last year
- ☆4,556 · Updated 2 years ago
- Code and documentation to train Stanford's Alpaca models, and generate the data ☆30,258 · Updated last year
- Running large language models on a single GPU for throughput-oriented scenarios ☆9,383 · Updated last year
- The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API with just a few lines of P… ☆2,888 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,732 · Updated last year
- A robust Python tool for text-based AI training and generation using GPT-2 ☆1,843 · Updated 2 years ago
- The implementation of DeBERTa ☆2,182 · Updated 2 years ago
- ☆2,079 · Updated 3 years ago
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,318 · Updated last week
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,527 · Updated 2 years ago
- StableLM: Stability AI Language Models ☆15,787 · Updated last year
- Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts ☆3,406 · Updated 3 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective ☆41,069 · Updated this week
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture; basically ChatGPT but with PaLM ☆7,870 · Updated 2 months ago
- Dataset of GPT-2 outputs for research in detection, biases, and more ☆2,008 · Updated 2 years ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆23,160 · Updated last year
- Locally run an Instruction-Tuned Chat-Style LLM ☆10,191 · Updated 2 years ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,172 · Updated last year
- min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch ☆3,492 · Updated 7 months ago
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,683 · Updated 2 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python ☆32,042 · Updated 2 months ago