KohakuBlueleaf / guanaco-lora
Instruct-tune LLaMA on consumer hardware
☆72 · Updated 2 years ago
Alternatives and similar repositories for guanaco-lora
Users interested in guanaco-lora are comparing it to the libraries listed below.
- Image Diffusion block merging technique applied to transformers-based Language Models. ☆56 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch ☆42 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- ☆81 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- 4-bit quantization of LLaMa using GPTQ ☆131 · Updated 2 years ago
- 📖 Notebooks related to RWKV ☆58 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length adapts the model's context limit ☆63 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- Train Llama LoRAs easily ☆31 · Updated 2 years ago
- Our data munging code. ☆34 · Updated 2 months ago
- BigKnow2022: Bringing Language Models Up to Speed ☆16 · Updated 2 years ago
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆73 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- Merge LLMs that are split into parts ☆27 · Updated 5 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆157 · Updated last year
- ☆82 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch in Windows 10 ☆68 · Updated 2 years ago
- A Gradio WebUI working with the Diffusers format of Stable Diffusion ☆82 · Updated 3 years ago
- ☆33 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it's combining the best… ☆412 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- ☆34 · Updated last year
- Instruct-tune LLaMA on consumer hardware with ShareGPT data ☆125 · Updated 2 years ago