practical-dreamer / vicuna_to_alpaca
Conversion script adapting vicuna dataset into alpaca format for use with oobabooga's trainer
☆12 · Updated 2 years ago
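For context, the conversion this script performs maps Vicuna/ShareGPT-style conversation records onto Alpaca's instruction/input/output records. Below is a minimal sketch of that transformation in Python; the JSON field names ("conversations", "from", "value") follow the common ShareGPT layout and the file names are placeholders, so the repo's actual handling (system prompts, multi-turn flattening) may differ.

```python
import json

def vicuna_to_alpaca(records):
    """Map Vicuna/ShareGPT conversation records to Alpaca records.

    Assumes each record holds a "conversations" list of
    {"from": "human" | "gpt", "value": "..."} turns; these field
    names are an assumption, not taken from the repo itself.
    """
    alpaca = []
    for record in records:
        turns = record.get("conversations", [])
        # Pair each human turn with the gpt reply that follows it.
        for prompt, reply in zip(turns, turns[1:]):
            if prompt.get("from") == "human" and reply.get("from") == "gpt":
                alpaca.append({
                    "instruction": prompt["value"],
                    "input": "",  # Alpaca's optional context field, left empty here
                    "output": reply["value"],
                })
    return alpaca

if __name__ == "__main__":
    # Placeholder file names; point these at your own dataset.
    with open("vicuna.json", encoding="utf-8") as f:
        data = json.load(f)
    with open("alpaca.json", "w", encoding="utf-8") as f:
        json.dump(vicuna_to_alpaca(data), f, indent=2, ensure_ascii=False)
```

The resulting Alpaca-format JSON can then be loaded as a training dataset in oobabooga's trainer.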
Alternatives and similar repositories for vicuna_to_alpaca
Users interested in vicuna_to_alpaca are comparing it to the libraries listed below.
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- Train Llama Loras Easily ☆31 · Updated 2 years ago
- Model REVOLVER, a human in the loop model mixing system. ☆33 · Updated 2 years ago
- 4 bits quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Image Diffusion block merging technique applied to transformers based Language Models. ☆56 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Comprehensive analysis of the difference in performance of QLoRA, LoRA, and full finetunes. ☆83 · Updated 2 years ago
- ☆96 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- ☆74 · Updated 2 years ago
- Merge Transformers language models by use of gradient parameters. ☆208 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆176 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆107 · Updated last year
- Command-line script for inferencing from models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆110 · Updated 2 years ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 3 months ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆413 · Updated 2 years ago
- 4 bits quantization of LLaMa using GPTQ ☆12 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago