EleutherAI / gpt-neo
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
☆8,293 · Updated 3 years ago
Alternatives and similar repositories for gpt-neo
Users interested in gpt-neo are comparing it to the libraries listed below.
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries (☆7,295 · updated last month)
- Model parallel transformers in JAX and Haiku (☆6,355 · updated 2 years ago)
- Repo for external large-scale work (☆6,543 · updated last year)
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM (☆7,865 · updated last week)
- The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API with just a few lines of P… (☆2,894 · updated last year)
- GPT-3: Language Models are Few-Shot Learners (☆15,780 · updated 4 years ago)
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" (☆6,418 · updated 4 months ago)
- A robust Python tool for text-based AI training and generation using GPT-2. (☆1,841 · updated 2 years ago)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,707 · updated last year)
- Trax — Deep Learning with Clear Code and Speed (☆8,271 · updated 2 weeks ago)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (☆40,058 · updated this week)
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in PyTorch (☆5,626 · updated last year)
- Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts (☆3,407 · updated 2 years ago)
- Running large language models on a single GPU for throughput-oriented scenarios. (☆9,364 · updated 10 months ago)
- A collection of libraries to optimize AI model performance (☆8,369 · updated last year)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (☆21,713 · updated 2 months ago)
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (☆9,791 · updated last year)
- PyTorch package for the discrete VAE used for DALL·E. (☆10,872 · updated last year)
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production (☆10,060 · updated 2 weeks ago)
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… (☆6,079 · updated 2 months ago)
- Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique w… (☆4,349 · updated 3 years ago)
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) (☆7,682 · updated 2 years ago)
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (☆3,275 · updated 2 years ago)
- Code for the paper "Language Models are Unsupervised Multitask Learners" (☆24,160 · updated last year)
- Large-scale pretraining for dialogue (☆2,404 · updated 2 years ago)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (☆9,116 · updated this week)