finetunej / transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
☆ 56 · Updated 3 years ago
Alternatives and similar repositories for transformers:
Users interested in transformers are comparing it to the libraries listed below.
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free on a Google Colab TPU instance · ☆ 27 · Updated 2 years ago
- A basic UI for running GPT-Neo 2.7B on low-VRAM machines (3 GB VRAM minimum) · ☆ 36 · Updated 3 years ago
- ☆ 28 · Updated last year
- Fine-tuning the 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression · ☆ 66 · Updated 2 years ago
- One-stop shop for all things carp · ☆ 59 · Updated 2 years ago
- ☆ 129 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. · ☆ 164 · Updated last week
- Hidden Engrams: Long Term Memory for Transformer Model Inference · ☆ 35 · Updated 3 years ago
- Code for the paper "Mirostat: A Perplexity-Controlled Neural Text Decoding Algorithm" (https://arxiv.org/abs/2007.14966) · ☆ 58 · Updated 3 years ago
- ☆ 9 · Updated 3 years ago
- Tools with a GUI for preparing GPT fine-tuning data · ☆ 23 · Updated 3 years ago
- Experimental sampler to make LLMs more creative · ☆ 30 · Updated last year
- Conversational language model toolkit for training against human preferences · ☆ 42 · Updated 11 months ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. · ☆ 67 · Updated 2 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch · ☆ 110 · Updated 2 years ago
- Experiments with generating open-source language model assistants · ☆ 97 · Updated last year
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models · ☆ 81 · Updated last year
- ☆ 89 · Updated 2 years ago
- Patch for MPT-7B that allows using and training a LoRA · ☆ 58 · Updated last year
- Framework-agnostic Python runtime for RWKV models · ☆ 145 · Updated last year
- ☆ 49 · Updated 2 years ago
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for 2000-token context, 3.5 GB for 1000-token context). Model load… · ☆ 115 · Updated 3 years ago
- URL downloader supporting checkpointing and continuous checksumming · ☆ 19 · Updated last year
- [WIP] A 🔥 interface for running code in the cloud · ☆ 86 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model · ☆ 74 · Updated 2 years ago
- Techniques used to run BLOOM inference in parallel · ☆ 37 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone through a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… · ☆ 64 · Updated last year
- Demonstration that fine-tuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit · ☆ 63 · Updated last year
- Instruct-tuning LLaMA on consumer hardware · ☆ 66 · Updated 2 years ago
- Fork of kingoflolz/mesh-transformer-jax with memory usage optimizations and support for GPT-Neo, GPT-NeoX, BLOOM, OPT and fairseq dense L… · ☆ 22 · Updated 2 years ago