paulcjh / gpt-j-6b
☆50 · Updated 2 years ago
Alternatives and similar repositories for gpt-j-6b
Users interested in gpt-j-6b are comparing it to the libraries listed below:
- Simple Annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 3 years ago
- ☆131 · Updated 3 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆170 · Updated last month
- 🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0. ☆55 · Updated 3 years ago
- One stop shop for all things carp ☆59 · Updated 3 years ago
- Used for adaptive human in the loop evaluation of language and embedding models. ☆307 · Updated 2 years ago
- ☆34 · Updated 2 years ago
- Simple Python client for the Hugging Face Inference API ☆75 · Updated 5 years ago
- Smol but mighty language model ☆62 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Experiments with generating opensource language model assistants ☆97 · Updated 2 years ago
- ☆92 · Updated 3 years ago
- ☆33 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆66 · Updated 3 years ago
- Evaluation suite for large-scale language models. ☆128 · Updated 4 years ago
- ☆95 · Updated 10 months ago
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 5 months ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆115 · Updated 2 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 4 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆156 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated 2 years ago
- Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA); see the sketch after this list ☆74 · Updated 3 years ago
- ☆127 · Updated 2 years ago
- Open source library for few shot NLP ☆78 · Updated 2 years ago
- Fine tune GPT-2 with your favourite authors ☆70 · Updated last year
- Source codes for the paper "Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints" ☆27 · Updated 2 years ago
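The 8-bit-weights-plus-LoRA approach mentioned in the fine-tuning entry above can be summarized in a short sketch. This is not code from the listed repository; it is a minimal setup assuming the `transformers`, `peft`, and `bitsandbytes` packages are installed, and the rank, alpha, and target-module choices are illustrative assumptions for GPT-J's attention projections.

```python
# Minimal sketch: load GPT-J-6B with 8-bit weights and attach LoRA adaptors.
# Assumes transformers + peft + bitsandbytes; hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the frozen base model in 8-bit so it fits on a single consumer / Colab GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # requires bitsandbytes
    device_map="auto",
)

# Low-rank adaptors: only these small matrices are trained, not the 6B base weights.
lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed value)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the total parameters
```

The resulting `model` can then be passed to a standard training loop or `transformers.Trainer`; only the adapter weights need to be saved, which keeps checkpoints small.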