s4rduk4r / alpaca_lora_4bit_readme
Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit
☆31 · Updated last year
Alternatives and similar repositories for alpaca_lora_4bit_readme:
Users interested in alpaca_lora_4bit_readme are comparing it to the libraries listed below.
- An Extension for oobabooga/text-generation-webui ☆36 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆101 · Updated 8 months ago
- A Qt GUI for large language models ☆40 · Updated last year
- A KoboldAI-like memory extension for oobabooga's text-generation-webui ☆107 · Updated 2 months ago
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated last year
- ☆27 · Updated last year
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- Text WebUI extension to add clever Notebooks to Chat mode ☆140 · Updated last year
- ☆74 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆112 · Updated last year
- Accepts a Hugging Face model URL, automatically downloads and quantizes it using Bits and Bytes. ☆38 · Updated 10 months ago
- A prompt/context management system ☆167 · Updated last year
- oobabooga extension - Experimental sampler to make LLMs more creative ☆23 · Updated last year
- Creates a LangChain agent which uses the WebUI's API and Wikipedia to work ☆72 · Updated last year
- A fork of textgen that kept some things like Exllama and old GPTQ. ☆22 · Updated 4 months ago
- 4-bit quantization of SantaCoder using GPTQ ☆53 · Updated last year
- GPT-2 small trained on phi-like data ☆65 · Updated 11 months ago
- CHAracter State Management - a generative text adventure (engine) ☆61 · Updated 2 months ago
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆165 · Updated 8 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated last year
- Dynamic parameter modulation for oobabooga's text-generation-webui that adjusts generation parameters to better mirror user affect. ☆34 · Updated last year
- Training PRO extension for oobabooga WebUI - recent dev version ☆47 · Updated last week
- Train Llama Loras Easily ☆30 · Updated last year
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated last year
- A simple batch file to make the oobabooga one-click installer compatible with llama 4bit models and able to run on cuda ☆21 · Updated last year
- Simple extension for text-generation-webui that injects recent conversation history into the negative prompt with the goal of minimizing … ☆33 · Updated last year