openai / gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
☆22,546 · Updated 3 months ago
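The released GPT-2 checkpoints are also mirrored on the Hugging Face Hub, so a quick way to sample from them is through the 🤗 Transformers library listed under the related projects below. This is a minimal sketch rather than the repo's own TensorFlow sampling scripts; it assumes `transformers` and a PyTorch backend are installed, and the prompt is a placeholder.

```python
# Minimal sketch (not part of this repo): sample from the 124M "gpt2" checkpoint
# on the Hugging Face Hub via the Transformers text-generation pipeline.
# Assumes: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # "gpt2" = smallest (124M) released checkpoint

samples = generator(
    "Language models are unsupervised multitask learners because",  # placeholder prompt
    max_new_tokens=40,  # length of the generated continuation
    do_sample=True,     # sample instead of decoding greedily
    top_k=40,           # keep only the 40 most likely tokens at each step
)
print(samples[0]["generated_text"])
```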
Related projects
Alternatives and complementary repositories for gpt-2
- openai/gpt-2-output-dataset: Dataset of GPT-2 outputs for research in detection, biases, and more ☆1,942 · Updated 11 months ago
- facebookresearch/fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆30,552 · Updated last month
- openai/gpt-3: GPT-3: Language Models are Few-Shot Learners ☆15,692 · Updated 4 years ago
- minimaxir/gpt-2-simple: Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts ☆3,398 · Updated last year
- huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆135,166 · Updated this week
- explosion/spaCy: 💫 Industrial-strength Natural Language Processing (NLP) in Python ☆30,265 · Updated this week
- flairNLP/flair: A very simple framework for state-of-the-art Natural Language Processing (NLP) ☆13,946 · Updated this week
- karpathy/minGPT: A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆20,199 · Updated 3 months ago
- facebookresearch/ParlAI: A framework for training and evaluating AI models on a variety of openly available dialogue datasets. ☆10,494 · Updated last year
- google/sentencepiece: Unsupervised text tokenizer for Neural Network-based text generation. ☆10,295 · Updated 2 weeks ago
- UKPLab/sentence-transformers: State-of-the-Art Text Embeddings ☆15,368 · Updated this week
- allenai/allennlp: An open-source NLP research library, built on PyTorch. ☆11,759 · Updated last year
- google-research/bert: TensorFlow code and pre-trained models for BERT ☆38,213 · Updated 3 months ago
- NVIDIA/NeMo: A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆12,150 · Updated this week
- gradio-app/gradio: Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ☆34,030 · Updated this week
- stanfordnlp/stanza: Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages ☆7,296 · Updated this week
- huggingface/tokenizers: 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆9,052 · Updated this week
- EleutherAI/gpt-neo: An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,232 · Updated 2 years ago
- EleutherAI/gpt-neox: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆6,947 · Updated this week
- NVIDIA/Megatron-LM: Ongoing research training transformer models at scale ☆10,595 · Updated this week
- zihangdai/xlnet: XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,182 · Updated last year
- google/trax: Trax — Deep Learning with Clear Code and Speed ☆8,101 · Updated 2 months ago
- jax-ml/jax: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆30,532 · Updated this week
- microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆35,508 · Updated this week
- facebookresearch/faiss: A library for efficient similarity search and clustering of dense vectors (a usage sketch follows this list). ☆31,488 · Updated this week
- google-research/albert: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations ☆3,247 · Updated last year
- tensorflow/tensor2tensor: Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆15,560 · Updated last year
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆1,147 · Updated 2 years ago
- deeppavlov/DeepPavlov: An open source library for deep learning end-to-end dialog systems and chatbots. ☆6,730 · Updated this week
- tatsu-lab/stanford_alpaca: Code and documentation to train Stanford's Alpaca models, and generate the data. ☆29,561 · Updated 4 months ago
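Several of the listed projects are infrastructure rather than models; faiss, referenced above, is one example and covers vector similarity search. This is a minimal sketch of exact nearest-neighbour lookup, assuming `faiss-cpu` and NumPy are installed and using random placeholder vectors in place of real embeddings.

```python
# Minimal sketch: exact L2 nearest-neighbour search with faiss.
# Assumes: pip install faiss-cpu numpy ; the vectors are random placeholders.
import numpy as np
import faiss

d = 128                                                  # embedding dimensionality (arbitrary)
database = np.random.rand(10_000, d).astype("float32")   # vectors to index
queries = np.random.rand(5, d).astype("float32")         # vectors to look up

index = faiss.IndexFlatL2(d)                 # brute-force index over L2 distance
index.add(database)                          # add all database vectors
distances, ids = index.search(queries, 4)    # 4 nearest neighbours per query
print(ids)                                   # row i: indices of the neighbours of queries[i]
```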