finetunej / transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
☆56 · Updated 3 years ago
Alternatives and similar repositories for transformers:
Users interested in transformers are comparing it to the libraries listed below.
- Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆27 · Updated 2 years ago
- ☆28 · Updated last year
- A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum) ☆36 · Updated 3 years ago
- Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B ☆64 · Updated 3 years ago
- Tools with a GUI for GPT fine-tune data preparation ☆23 · Updated 3 years ago
- ☆128 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Conversational language model toolkit for training against human preferences. ☆41 · Updated 11 months ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 3 years ago
- Patch for MPT-7B that allows using and training a LoRA ☆58 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- ☆9 · Updated 3 years ago
- One-stop shop for all things carp ☆59 · Updated 2 years ago
- A notebook that runs GPT-Neo with low VRAM (6 GB) and CUDA acceleration by loading it into GPU memory in smaller parts. ☆14 · Updated 3 years ago
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for 2000-token context, 3.5 GB for 1000-token context). Model load… ☆115 · Updated 3 years ago
- Fine-tuning 6-billion-parameter GPT-J (& other models) with LoRA and 8-bit compression ☆66 · Updated 2 years ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo ☆88 · Updated 3 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models ☆81 · Updated last year
- ☆32 · Updated last year
- Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 · Updated 2 years ago
- A ready-to-deploy container implementing an easy-to-use REST API for accessing language models. ☆64 · Updated 2 years ago
- ☆33 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆164 · Updated this week
- Script for downloading GitHub repositories. ☆91 · Updated 8 months ago
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆306 · Updated 2 years ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Training and implementation of chatbots leveraging a GPT-like architecture with the aitextgen package to enable dynamic conversations. ☆49 · Updated 2 years ago