labmlai / neox
Simple Annotated implementation of GPT-NeoX in PyTorch
☆110 · Updated 2 years ago
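Before the list of alternatives, a quick orientation on what the annotated repo covers: GPT-NeoX's most distinctive architectural piece is rotary position embeddings (RoPE), which rotate query/key features by position-dependent angles instead of adding learned position vectors. The following is a minimal generic PyTorch sketch written for illustration; it is not code from labmlai/neox, and the helper names (`rotary_cache`, `apply_rotary`) are invented here.

```python
# Minimal, self-contained sketch of rotary position embeddings (RoPE),
# the positional scheme GPT-NeoX uses. Illustration only, not repo code.
import torch

def rotary_cache(seq_len: int, dim: int, base: float = 10000.0):
    # One rotation frequency per pair of feature dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    t = torch.arange(seq_len).float()
    freqs = torch.outer(t, inv_freq)         # (seq_len, dim / 2)
    emb = torch.cat([freqs, freqs], dim=-1)  # (seq_len, dim)
    return emb.cos(), emb.sin()

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Swap and negate the two halves: (x1, x2) -> (-x2, x1).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat([-x2, x1], dim=-1)

def apply_rotary(x, cos, sin):
    # x: (batch, seq_len, head_dim); rotates queries/keys by position.
    return x * cos + rotate_half(x) * sin

q = torch.randn(2, 16, 64)                   # (batch, seq, head_dim)
cos, sin = rotary_cache(seq_len=16, dim=64)
q_rot = apply_rotary(q, cos, sin)
print(q_rot.shape)                           # torch.Size([2, 16, 64])
```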
Alternatives and similar repositories for neox:
Users interested in neox are comparing it to the libraries listed below.
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆306 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆164 · Updated last week
- [WIP] A 🔥 interface for running code in the cloud. ☆86 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K-token sequences from the Pile. ☆115 · Updated last year
- Experiments with generating open-source language model assistants. ☆97 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web. ☆175 · Updated last year
- Smol but mighty language model. ☆63 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- Babysit your preemptible TPUs. ☆86 · Updated 2 years ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs. ☆114 · Updated 2 years ago
- Fine-tuning 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression; see the sketch after this list. ☆66 · Updated 2 years ago
- One-stop shop for all things carp. ☆59 · Updated 2 years ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes. ☆236 · Updated last year
- Easily convert Common Crawl into a dataset of caption/document pairs: image/text, audio/text, video/text, and so on. ☆316 · Updated last year
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny, efficient models for AI at scale. ☆153 · Updated last year
- Reimplementation of the task-generation part of the Alpaca paper. ☆119 · Updated last year
- [Added T5 support to TRLX] A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF). ☆47 · Updated 2 years ago
- Guide: finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Huggingface Transformers using DeepSpeed. ☆437 · Updated last year
- Hidden Engrams: Long Term Memory for Transformer Model Inference. ☆35 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in JAX (Equinox framework). ☆187 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- Lite Inference Toolkit (LIT) for PyTorch. ☆161 · Updated 3 years ago
- Drop-in replacement for OpenAI, but with open models. ☆153 · Updated last year
- Tune MPTs. ☆84 · Updated last year
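As referenced in the GPT-J entry above, LoRA plus 8-bit weights is what makes single-GPU finetuning of a 6B-parameter model feasible: the frozen base weights are quantized, and only small low-rank adapter matrices are trained. A rough sketch using today's Hugging Face peft + bitsandbytes stack follows; this is an assumption-laden approximation (the listed repo ships its own implementation), and details such as `target_modules` and the 8-bit loading flags vary across library versions.

```python
# Hedged sketch of LoRA + 8-bit finetuning setup with peft/bitsandbytes;
# module names and flags are assumptions, not the listed repo's own API.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the frozen base model with 8-bit weights to fit a single GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections; only these train.
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```

With rank-8 adapters on two projection matrices per layer, the trainable parameter count comes to roughly a tenth of a percent of the full 6B weights, which is why the approach fits in consumer-GPU memory.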