zphang / minimal-gpt-neox-20b
☆130 · Updated 2 years ago
Alternatives and similar repositories for minimal-gpt-neox-20b
Users interested in minimal-gpt-neox-20b are comparing it to the libraries listed below:
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆167 · Updated last month
- ☆67 · Updated 2 years ago
- One-stop shop for all things CARP ☆59 · Updated 2 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆309 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in JAX (Equinox framework) ☆188 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- ☆67 · Updated 2 years ago
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- Tune MPTs ☆84 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated 11 months ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated 2 years ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Train very large language models in JAX. ☆204 · Updated last year
- Babysit your preemptible TPUs ☆85 · Updated 2 years ago
- A search engine for ParlAI's BlenderBot project (and probably other ones as well) ☆131 · Updated 3 years ago
- Techniques for running BLOOM inference in parallel ☆37 · Updated 2 years ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Landmark Attention (Random-Access Infinite Context Length for Transformers) with QLoRA ☆123 · Updated last year
- ☆78 · Updated last year
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year
- Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpe… ☆437 · Updated last year