VE-FORBRYDERNE / mtj-softtuner
Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance.
⭐28 · Updated 2 years ago
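To make the listing concrete: a soft prompt is a small matrix of trainable embedding vectors prepended to the token embeddings, trained while the model's own weights stay frozen. Below is a minimal NumPy sketch of that idea; the names, shapes, and toy sizes are illustrative assumptions, not mtj-softtuner's actual API.

```python
import numpy as np

# Minimal sketch of the soft-prompt idea (hypothetical shapes and names,
# not mtj-softtuner's real interface): trainable embedding vectors are
# prepended to the embedded input tokens of a frozen model.

rng = np.random.default_rng(0)

d_model = 16          # embedding width (toy value)
n_soft_tokens = 4     # number of trainable soft-prompt vectors
vocab_size = 100

# Frozen token-embedding table, standing in for the real model's.
embed_table = rng.normal(size=(vocab_size, d_model))

# The only trainable parameters: the soft prompt itself.
soft_prompt = rng.normal(size=(n_soft_tokens, d_model))

def embed_with_soft_prompt(token_ids):
    """Prepend the soft-prompt vectors to the embedded input tokens."""
    token_embeds = embed_table[token_ids]                 # (seq, d_model)
    return np.concatenate([soft_prompt, token_embeds], axis=0)

inputs = embed_with_soft_prompt(np.array([5, 17, 42]))
print(inputs.shape)  # (7, 16): 4 soft tokens + 3 real tokens
```

In actual tuning, gradients flow only into `soft_prompt`, which is why the technique fits in a free Colab TPU instance: the multi-billion-parameter model is never updated.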
Alternatives and similar repositories for mtj-softtuner:
Users interested in mtj-softtuner are comparing it to the libraries listed below.
- Hidden Engrams: Long Term Memory for Transformer Model Inference · ⭐35 · Updated 3 years ago
- Experimental sampler to make LLMs more creative · ⭐30 · Updated last year
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0 · ⭐56 · Updated 3 years ago
- Fine-tuning 6-billion-parameter GPT-J (& other models) with LoRA and 8-bit compression · ⭐66 · Updated 2 years ago
- One-stop shop for all things carp · ⭐59 · Updated 2 years ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo · ⭐88 · Updated 3 years ago
- Experiments with generating open-source language model assistants · ⭐97 · Updated last year
- ⭐9 · Updated 3 years ago
- Fast inference of instruct-tuned LLaMA on your personal devices · ⭐22 · Updated 2 years ago
- Conversational language model toolkit for training against human preferences · ⭐42 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… · ⭐43 · Updated last year
- BIG: Back In the Game of Creative AI · ⭐27 · Updated 2 years ago
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length adapts the model's context limit · ⭐63 · Updated last year
- Multi-Domain Expert Learning · ⭐67 · Updated last year
- Fork of kingoflolz/mesh-transformer-jax with memory usage optimizations and support for GPT-Neo, GPT-NeoX, BLOOM, OPT and fairseq dense L… · ⭐22 · Updated 2 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis · ⭐65 · Updated 2 years ago
- ⭐28 · Updated last year
- A library for squeakily cleaning and filtering language datasets · ⭐47 · Updated last year
- Training and implementation of chatbots leveraging a GPT-like architecture with the aitextgen package to enable dynamic conversations · ⭐49 · Updated 2 years ago
- ⭐44 · Updated 5 months ago
- ⭐130 · Updated 2 years ago
- Plug-and-play search interfaces with Pyserini and Hugging Face · ⭐31 · Updated last year
- ⭐32 · Updated last year
- Command-line script for inferencing from models such as LLaMA in a chat scenario, with LoRA adaptations · ⭐33 · Updated last year
- A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum) · ⭐36 · Updated 3 years ago
- [WIP] A 🔥 interface for running code in the cloud · ⭐85 · Updated 2 years ago
- Where we keep our notes about model training runs · ⭐16 · Updated 2 years ago
- ⭐40 · Updated 2 years ago
- Text-writing denoising diffusion (and much more) · ⭐30 · Updated last year
- Code for cleaning benchmark data out of your training data to help combat data snooping · ⭐25 · Updated last year