finetunej / transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
⭐ 56 · Updated 3 years ago
Alternatives and similar repositories for transformers
Users interested in transformers are comparing it to the libraries listed below.
- Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance (see the soft-prompt sketch after this list) ⭐ 28 · Updated 2 years ago
- A basic UI for running GPT-Neo-2.7B on low VRAM (3 GB VRAM minimum) ⭐ 36 · Updated 4 years ago
- A simple annotated implementation of GPT-NeoX in PyTorch ⭐ 110 · Updated 3 years ago
- ⭐ 50 · Updated 2 years ago
- Colab notebooks to run a basic AI Dungeon clone using GPT-Neo-2.7B ⭐ 61 · Updated 4 years ago
- ⭐ 131 · Updated 3 years ago
- Fine-tuning the 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression (see the LoRA sketch after this list) ⭐ 67 · Updated 2 years ago
- A conversational language model toolkit for training against human preferences ⭐ 41 · Updated last year
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model load… ⭐ 114 · Updated 3 years ago
- A framework-agnostic Python runtime for RWKV models ⭐ 146 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective ⭐ 168 · Updated last month
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ⭐ 67 · Updated 3 years ago
- ⭐ 27 · Updated 2 years ago
- ⭐ 90 · Updated 3 years ago
- A ready-to-deploy container implementing an easy-to-use REST API for accessing language models ⭐ 66 · Updated 2 years ago
- A GPT-J API for Python 3 to generate text, blogs, code, and more ⭐ 204 · Updated 2 years ago
- GUI tools for preparing GPT fine-tuning data ⭐ 22 · Updated 4 years ago
- A drop-in replacement for OpenAI, but with open models ⭐ 152 · Updated 2 years ago
- ⭐ 33 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone through a Hugging Face-like interface, while keeping it close to the R&D RWKV bra… ⭐ 65 · Updated 2 years ago
- Hidden Engrams: Long-Term Memory for Transformer Model Inference ⭐ 35 · Updated 4 years ago
- Experiments with generating open-source language model assistants ⭐ 97 · Updated 2 years ago
- A one-stop shop for all things CARP ⭐ 59 · Updated 3 years ago
- An experimental sampler to make LLMs more creative ⭐ 31 · Updated 2 years ago
- GPT2Explorer brings an OpenAI GPT-2 language model playground that runs locally on standard Windows computers ⭐ 28 · Updated 3 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation ⭐ 71 · Updated 2 years ago
- ⭐ 33 · Updated 2 years ago
- A search engine for ParlAI's BlenderBot project (and probably others as well) ⭐ 130 · Updated 3 years ago
- llama-4bit-colab ⭐ 63 · Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ⭐ 51 · Updated 2 years ago
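
The soft-prompt entry above refers to prompt tuning: a short sequence of virtual token embeddings is learned and prepended to the input while the base model stays frozen. Below is a minimal sketch of that technique using PyTorch and Hugging Face transformers; it is not the listed repository's own code, and the model name and `n_prompt_tokens` value are illustrative assumptions.

```python
# Minimal soft-prompt (prompt tuning) sketch: only the prompt embeddings are trained,
# the base language model stays frozen.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-2.7B"  # assumption: any causal LM with get_input_embeddings()
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # freeze all base-model weights

n_prompt_tokens = 20  # illustrative length of the learned prompt
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.01)

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-4)

def step(text: str) -> torch.Tensor:
    """One training step: prepend the soft prompt to the token embeddings."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    tok_embeds = model.get_input_embeddings()(ids)                     # (1, seq, dim)
    inputs = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], dim=1)  # (1, n+seq, dim)
    # Label the prompt positions -100 so they are ignored by the loss.
    labels = torch.cat(
        [torch.full((1, n_prompt_tokens), -100, dtype=torch.long), ids], dim=1
    )
    loss = model(inputs_embeds=inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.detach()
```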
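
The GPT-J LoRA/8-bit entry above combines low-rank adapters with int8-quantized weights so a 6B-parameter model can be fine-tuned on a single consumer GPU. The sketch below shows the same general technique with the peft and bitsandbytes libraries; it is an assumption that this stack matches what the listed repository does, and the rank and target module names are illustrative.

```python
# Sketch: LoRA adapters on top of an 8-bit-quantized GPT-J (peft + bitsandbytes),
# not the listed repository's own code.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # int8 weights via bitsandbytes (newer versions may prefer BitsAndBytesConfig)
    device_map="auto",   # spread layers across available devices
)

# LoRA adds small trainable low-rank matrices to selected projections;
# only these adapters are updated, so memory cost stays low.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumption: GPT-J attention projection names
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```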