VE-FORBRYDERNE / mesh-transformer-jax
Fork of kingoflolz/mesh-transformer-jax with memory-usage optimizations and support for GPT-Neo, GPT-NeoX, BLOOM, OPT, and fairseq dense LMs. Primarily used by KoboldAI and mtj-softtuner.
☆22 · Updated 2 years ago
Alternatives and similar repositories for mesh-transformer-jax:
Users interested in mesh-transformer-jax are comparing it to the libraries listed below.
- Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance (a minimal soft-prompt sketch follows this list) ☆27 · Updated 2 years ago
- One-stop shop for all things carp ☆59 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation (a simplified pruning sketch follows this list) ☆71 · Updated last year
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 3 years ago
- A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum) ☆36 · Updated 3 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R&D RWKV branch ☆64 · Updated last year
- A ready-to-deploy container implementing an easy-to-use REST API for accessing language models. ☆64 · Updated 2 years ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- Patch for MPT-7B that allows using and training a LoRA (a minimal LoRA sketch follows this list) ☆58 · Updated last year
- Conversational language model toolkit for training against human preferences. ☆42 · Updated 11 months ago
- An open-source replication and extension of Meta AI's LLaMA dataset ☆24 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated last year
- A client library for LAION's effort to filter CommonCrawl with CLIP, building a large-scale image-text dataset. ☆32 · Updated 2 years ago
- URL downloader supporting checkpointing and continuous checksumming. ☆19 · Updated last year
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model load… ☆115 · Updated 3 years ago
- Training a model similar to OpenAI DALL-E with volunteers from all over the Internet using hivemind and dalle-pytorch (NeurIPS 2021 demo) ☆26 · Updated last year
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training context adapts the model's context limit (a RoPE scaling sketch follows this list) ☆63 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆102 · Updated 7 months ago
- [WIP] A 🔥 interface for running code in the cloud ☆86 · Updated 2 years ago
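
The soft-prompt Colab above (mtj-softtuner) trains only a short block of embedding vectors that is prepended to the frozen model's input. A minimal PyTorch sketch of that idea, with hypothetical names not taken from any repository above:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """n_tokens trainable embedding vectors prepended to the frozen model's inputs."""

    def __init__(self, n_tokens: int, d_model: int):
        super().__init__()
        # Small random init; these are the only parameters that receive gradients.
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model) from the frozen embedding layer
        prefix = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)
```

Because only `SoftPrompt.prompt` is optimized while the base model stays fixed, the technique fits in a free Colab TPU budget even for models as large as fairseq 13B dense.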
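
The SparseGPT entry refers to Hessian-based one-shot pruning with layer-wise weight reconstruction, which is too involved to reproduce here. For orientation only, plain one-shot magnitude pruning (a much weaker baseline, not the SparseGPT algorithm) looks like this:

```python
import torch

def magnitude_prune_(weight: torch.Tensor, sparsity: float = 0.5) -> None:
    """Zero the smallest-magnitude fraction of entries in-place (naive baseline)."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return
    # kthvalue on the flattened magnitudes gives the pruning threshold.
    threshold = weight.abs().flatten().kthvalue(k).values
    weight.mul_((weight.abs() > threshold).to(weight.dtype))
```

SparseGPT's contribution is that it updates the surviving weights to compensate for the pruned ones using second-order information, which is what lets it reach high sparsity on large models in one shot with little accuracy loss.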
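
The MPT-7B patch above enables LoRA: the pretrained weight is frozen and a trainable low-rank product is added on top, so only a tiny fraction of parameters is trained. A self-contained sketch (a hypothetical wrapper, not the patch's actual code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear plus a trainable low-rank update: y = Wx + (alpha/r) * B(Ax)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weight and bias stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)
```

Only `lora_a` and `lora_b` receive gradients, so optimizer state and checkpoints stay small relative to full finetuning, the same property the PEFT-based consumer-hardware finetuning entry relies on.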
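
Finally, the RoPE entry above demonstrates direct finetuning on sequences longer than the pre-training context. A related trick is position interpolation: computing rotary angles at scaled-down positions so that a longer sequence spans the same angle range the model saw in pre-training. A sketch of the angle computation, where the `scale` parameter is my own naming:

```python
import torch

def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0,
                scale: float = 1.0) -> torch.Tensor:
    """Rotary embedding angles per position and frequency.

    scale < 1.0 interpolates positions: e.g. scale=0.5 squeezes a 4096-token
    sequence into the angle range of a 2048-token pre-training context.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() * scale
    return torch.outer(positions, inv_freq)  # shape: (seq_len, head_dim // 2)
```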