openai / gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
☆24,143 · Updated last year
Alternatives and similar repositories for gpt-2
Users interested in gpt-2 are comparing it to the libraries listed below.
- GPT-3: Language Models are Few-Shot Learners ☆15,781 · Updated 4 years ago
- Dataset of GPT-2 outputs for research in detection, biases, and more ☆1,994 · Updated last year
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,418 · Updated 4 months ago
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆149,100 · Updated this week
- Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts ☆3,406 · Updated 2 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,767 · Updated 2 months ago
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,292 · Updated 3 years ago
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,865 · Updated this week
- TensorFlow code and pre-trained models for BERT ☆39,493 · Updated last year
- Unsupervised text tokenizer for Neural Network-based text generation. ☆11,221 · Updated this week
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆22,554 · Updated last year
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,289 · Updated last month
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆33,366 · Updated this week
- Model parallel transformers in JAX and Haiku ☆6,355 · Updated 2 years ago
- Repo for external large-scale work ☆6,542 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆40,001 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆44,098 · Updated 8 months ago
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,180 · Updated 2 years ago
- Ongoing research training transformer models at scale ☆13,458 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,702 · Updated 2 months ago
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,236 · Updated 6 years ago
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆1,148 · Updated 2 years ago
- A toolkit for developing and comparing reinforcement learning algorithms. ☆36,461 · Updated 10 months ago
- A framework for training and evaluating AI models on a variety of openly available dialogue datasets. ☆10,617 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,141 · Updated last year
- Library for fast text representation and classification. ☆26,337 · Updated last year
- LLM inference in C/C++ ☆86,121 · Updated this week
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,053 · Updated last week
- High-Resolution Image Synthesis with Latent Diffusion Models ☆41,701 · Updated 2 months ago
- Fast and memory-efficient exact attention ☆19,385 · Updated this week