VE-FORBRYDERNE / mesh-transformer-jax
Fork of kingoflolz/mesh-transformer-jax with memory-usage optimizations and support for GPT-Neo, GPT-NeoX, BLOOM, OPT, and fairseq dense LMs. Primarily used by KoboldAI and mtj-softtuner.
☆22 · Updated 3 years ago
Alternatives and similar repositories for mesh-transformer-jax
Users interested in mesh-transformer-jax are comparing it to the libraries listed below.
- ☆131 · Updated 3 years ago
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆65 · Updated 2 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 3 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 4 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- One-stop shop for all things carp ☆59 · Updated 3 years ago
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model load… ☆113 · Updated 3 years ago
- Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- A ready-to-deploy container implementing an easy-to-use REST API for accessing language models. ☆66 · Updated 2 years ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆49 · Updated 2 years ago
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- Auxiliary tasks for task-oriented dialogue systems. Published in ICNLSP'22 and indexed in the ACL Anthology. ☆17 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation. ☆71 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- OpenAI API webserver ☆190 · Updated 3 years ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆115 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated 2 months ago
- rwkv_chatbot ☆62 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated 2 years ago
- Patch for MPT-7B that allows using and training a LoRA ☆58 · Updated 2 years ago
- Babysit your preemptible TPUs ☆86 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆86 · Updated 2 years ago
- ☆92 · Updated 3 years ago