EleutherAI / gpt-neo
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
☆8,270 · Updated 2 years ago
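The released GPT-Neo checkpoints can also be loaded through the Hugging Face transformers library rather than the original mesh-tensorflow code. A minimal sketch, assuming `transformers` and `torch` are installed and using the publicly released EleutherAI/gpt-neo-1.3B checkpoint on the Hugging Face Hub:

```python
# Minimal sketch: run a released GPT-Neo checkpoint via Hugging Face transformers.
# Assumes `pip install transformers torch`; the model name below is the public
# 1.3B-parameter checkpoint, swap in gpt-neo-125M or gpt-neo-2.7B as needed.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# Sample a short continuation from the model.
output = generator("EleutherAI's GPT-Neo is", max_length=50, do_sample=True)
print(output[0]["generated_text"])
```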
Alternatives and similar repositories for gpt-neo:
Users interested in gpt-neo are comparing it to the libraries listed below.
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,101 · Updated this week
- Model parallel transformers in JAX and Haiku ☆6,323 · Updated 2 years ago
- Repo for external large-scale work ☆6,518 · Updated 9 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆21,388 · Updated 6 months ago
- GPT-3: Language Models are Few-Shot Learners ☆15,737 · Updated 4 years ago
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆5,599 · Updated last year
- A robust Python tool for text-based AI training and generation using GPT-2. ☆1,843 · Updated last year
- The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API with just a few lines of P… ☆2,888 · Updated last year
- ☆1,533 · Updated last year
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆9,387 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,032 · Updated 5 months ago
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆29,827 · Updated 7 months ago
- Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts ☆3,399 · Updated 2 years ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,270 · Updated 5 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,343 · Updated this week
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆13,167 · Updated this week
- State-of-the-Art Text Embeddings ☆16,018 · Updated last week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆36,866 · Updated this week
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,261 · Updated 3 months ago
- A very simple framework for state-of-the-art Natural Language Processing (NLP) ☆14,065 · Updated this week
- StableLM: Stability AI Language Models ☆15,831 · Updated 10 months ago
- An open-source NLP research library, built on PyTorch. ☆11,806 · Updated 2 years ago
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆23,054 · Updated 6 months ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,018 · Updated last month
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,584 · Updated last year
- CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. ☆5,002 · Updated 2 weeks ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,443 · Updated last year
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆13,150 · Updated this week
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆139,759 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆17,363 · Updated this week
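Several of the entries above compose directly with gpt-neo's released checkpoints; for example, 🤗 PEFT can attach a LoRA adapter to a GPT-Neo model loaded through 🤗 Transformers. A minimal sketch, assuming the `peft`, `transformers`, and `torch` packages and the public EleutherAI/gpt-neo-125M checkpoint; the `target_modules` names below are the query/value projections in the GPT-Neo attention blocks:

```python
# Minimal sketch: wrap a GPT-Neo checkpoint with a LoRA adapter via PEFT,
# so only the small adapter matrices are trained during fine-tuning.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

config = LoraConfig(
    r=8,                                  # rank of the LoRA update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # GPT-Neo attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```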