zphang / minimal-gpt-neox-20b
☆130 · Updated 3 years ago
Alternatives and similar repositories for minimal-gpt-neox-20b
Users interested in minimal-gpt-neox-20b are comparing it to the libraries listed below.
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆309 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- One-stop shop for all things CARP ☆59 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆167 · Updated 3 weeks ago
- ☆78 · Updated last year
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in JAX (Equinox framework) ☆187 · Updated 3 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- ☆44 · Updated 7 months ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆114 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆95 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated last year
- Guide: Finetune GPT2-XL (1.5 billion parameters) and finetune GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed… ☆437 · Updated 2 years ago
- Adversarial Training and SFT for Bot Safety Models ☆40 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- Image Diffusion block-merging technique applied to transformer-based Language Models. ☆54 · Updated 2 years ago