VE-FORBRYDERNE / mtj-softtuner
Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance
☆28 · Updated 2 years ago
Alternatives and similar repositories for mtj-softtuner
Users interested in mtj-softtuner are comparing it to the libraries listed below
- One stop shop for all things carp · ☆59 · Updated 3 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference · ☆35 · Updated 4 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0 · ☆56 · Updated 3 years ago
- A ready-to-deploy container for implementing an easy-to-use REST API to access Language Models · ☆66 · Updated 2 years ago
- Experiments with generating open-source language model assistants · ☆97 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Experimental sampler to make LLMs more creative · ☆31 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression · ☆67 · Updated 2 years ago
- Training and implementation of chatbots leveraging a GPT-like architecture with the aitextgen package to enable dynamic conversations · ☆48 · Updated 3 years ago
- Text-writing denoising diffusion (and much more) · ☆30 · Updated 2 years ago
- A client library for LAION's effort to filter CommonCrawl with CLIP, building a large scale image-text dataset · ☆32 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- Smol but mighty language model · ☆63 · Updated 2 years ago
- A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum) · ☆36 · Updated 4 years ago
- A library for squeakily cleaning and filtering language datasets · ☆47 · Updated 2 years ago
- Reimplementation of the task generation part from the Alpaca paper · ☆119 · Updated 2 years ago
- A TextTiling-based algorithm for text segmentation (aka topic segmentation) that uses neural sentence encoders, as well as extractive sum… · ☆49 · Updated 2 years ago
- [WIP] A 🔥 interface for running code in the cloud · ☆85 · Updated 2 years ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit · ☆63 · Updated 2 years ago
- Conversational language model toolkit for training against human preferences · ☆41 · Updated last year
- Multi-Domain Expert Learning · ☆67 · Updated last year
- Simple annotated implementation of GPT-NeoX in PyTorch · ☆110 · Updated 3 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. · ☆67 · Updated 3 years ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo · ☆88 · Updated 3 years ago
- An open-source replication and extension of Meta AI's LLaMA dataset · ☆24 · Updated 2 years ago
- ☆131 · Updated 3 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis · ☆66 · Updated 2 years ago
- ☆40 · Updated 2 years ago