openai / gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
☆24,411 · Updated last year
Alternatives and similar repositories for gpt-2
Users interested in gpt-2 are comparing it to the libraries listed below.
- GPT-3: Language Models are Few-Shot Learners ☆15,783 · Updated 5 years ago
- Dataset of GPT-2 outputs for research in detection, biases, and more ☆2,002 · Updated last year
- Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts ☆3,404 · Updated 2 years ago
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆152,901 · Updated this week
- Unsupervised text tokenizer for Neural Network-based text generation. ☆11,460 · Updated last week
- A framework for training and evaluating AI models on a variety of openly available dialogue datasets. ☆10,619 · Updated 2 years ago
- A library for efficient similarity search and clustering of dense vectors. ☆38,093 · Updated this week
- Ongoing research training transformer models at scale ☆14,301 · Updated this week
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆16,760 · Updated 2 years ago
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,286 · Updated 3 years ago
- Inference code for Llama models ☆58,934 · Updated 10 months ago
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,590 · Updated last year
- Trax — Deep Learning with Clear Code and Speed ☆8,292 · Updated 2 months ago
- Low-code framework for building custom LLMs, neural networks, and other AI models ☆11,619 · Updated 2 weeks ago
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,232 · Updated last month
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,178 · Updated 2 years ago
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,261 · Updated 6 years ago
- Magenta: Music and Art Generation with Machine Intelligence ☆19,732 · Updated 4 months ago
- An open-source NLP research library, built on PyTorch. ☆11,881 · Updated 3 years ago
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,337 · Updated 2 months ago
- An open-source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model c… ☆14,300 · Updated last year
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,450 · Updated 2 weeks ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,983 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆49,869 · Updated 2 weeks ago
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆34,051 · Updated this week
- TensorFlow code and pre-trained models for BERT ☆39,672 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,152 · Updated last week
- A toolkit for developing and comparing reinforcement learning algorithms. ☆36,789 · Updated last year
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆16,623 · Updated last month
- Chinese version of GPT2 training code, using BERT tokenizer. ☆7,598 · Updated last year
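Several of the entries above (the unsupervised text tokenizer, the fast tokenizers library, and tiktoken) center on byte-pair encoding (BPE): repeatedly merging the most frequent adjacent symbol pair into a new symbol. As a rough illustration of that core loop, here is a minimal sketch in plain Python; the helper names `most_frequent_pair` and `merge_pair` are illustrative and not taken from any of the listed libraries, which use far more efficient implementations:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single fused symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])  # fuse the pair
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Start from individual characters and apply a few merge rounds.
tokens = list("low lower lowest")
for _ in range(3):
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
print(tokens)  # the frequent substring "low" fuses into one symbol
```

Real BPE tokenizers learn a fixed merge table from a large corpus once, then apply those merges in order at encoding time; this sketch only shows the greedy merge step itself.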