zphang / minimal-gpt-neox-20b
☆128 · Updated 2 years ago
Alternatives and similar repositories for minimal-gpt-neox-20b:
Users who are interested in minimal-gpt-neox-20b are comparing it to the libraries listed below.
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Exploring fine-tuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆164 · Updated last month
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆306 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- One-stop shop for all things carp ☆59 · Updated 2 years ago
- Inference code for LLaMA models in JAX ☆115 · Updated 9 months ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web ☆175 · Updated last year
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in JAX (Equinox framework) ☆186 · Updated 2 years ago
- Guide: fine-tune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed ☆437 · Updated last year
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆114 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated 11 months ago
- ☆77 · Updated last year
- Tune MPTs ☆84 · Updated last year
- Techniques for running BLOOM inference in parallel ☆37 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone through a Hugging Face-like interface, while keeping it close to the R&D RWKV branch ☆64 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- ☆67 · Updated 2 years ago
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆93 · Updated 2 years ago
- Fine-tuning the 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression ☆66 · Updated 2 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆86 · Updated 2 years ago
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- ☆148 · Updated 3 years ago
- Babysit your preemptible TPUs ☆85 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆236 · Updated last year