finetunej / transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
☆55 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for transformers
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance (see the soft-prompt sketch after this list) ☆27 · Updated last year
- A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum) ☆35 · Updated 3 years ago
- Conversational language model toolkit for training against human preferences. ☆41 · Updated 7 months ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆34 · Updated 3 years ago
- An open-source replication and extension of Meta AI's LLaMA dataset ☆24 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆164 · Updated 6 months ago
- One stop shop for all things carp ☆59 · Updated 2 years ago
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Fine-tuning 6-billion-parameter GPT-J (& other models) with LoRA and 8-bit compression (see the LoRA sketch after this list) ☆65 · Updated 2 years ago
- Tools with a GUI for GPT fine-tuning data preparation ☆23 · Updated 3 years ago
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B ☆64 · Updated 3 years ago
- A ready-to-deploy container implementing an easy-to-use REST API for accessing language models. ☆64 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆66 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- Exploring fine-tuning public checkpoints on filtered 8K-token sequences from the Pile ☆115 · Updated last year
- Simple Annotated implementation of GPT-NeoX in PyTorch ☆111 · Updated 2 years ago
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model load… ☆114 · Updated 2 years ago
- Where we keep our notes about model training runs. ☆15 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆45 · Updated last year
- Fork of kingoflolz/mesh-transformer-jax with memory usage optimizations and support for GPT-Neo, GPT-NeoX, BLOOM, OPT and fairseq dense L… ☆22 · Updated 2 years ago
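
Two entries above name techniques concretely enough to sketch. First, the soft-prompt project: prompt tuning freezes the base model and learns only a short sequence of continuous "virtual token" embeddings prepended to the input. Below is a minimal sketch using the Hugging Face peft library, not that repo's own TPU/Colab code; the model name, virtual-token count, and init text are illustrative assumptions.

```python
# Minimal prompt-tuning (soft prompt) sketch with Hugging Face peft.
# NOT the listed repo's code; model name and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "EleutherAI/gpt-neo-2.7B"  # assumption: any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,                     # length of the learned soft prompt
    prompt_tuning_init=PromptTuningInit.TEXT,  # init from real token embeddings
    prompt_tuning_init_text="Continue the story:",  # illustrative choice
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, config)  # base weights stay frozen
model.print_trainable_parameters()     # only the 20 virtual-token embeddings train
```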
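Second, the LoRA + 8-bit entry: LoRA keeps the (here 8-bit-quantized) base weights frozen and trains small low-rank adapter matrices injected into the attention projections. A minimal sketch with peft and bitsandbytes follows, assuming a recent peft version; it is not the listed repository's own code, and the hyperparameters are illustrative.

```python
# Minimal LoRA-over-8-bit fine-tuning setup with peft + bitsandbytes.
# NOT the listed repository's code; hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # quantize the frozen base weights to fit in low VRAM
    device_map="auto",   # requires the accelerate package
)
model = prepare_model_for_kbit_training(model)  # recent peft versions

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a tiny fraction of the 6B params
```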