hunar4321 / reweight-gpt
Reweight GPT - a simple neural network using transformer architecture for next character prediction
☆48 · Updated last year
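To make the one-line description concrete, here is a minimal sketch of next-character prediction with a single causal self-attention layer. This is an illustrative toy in NumPy, not the repository's actual code: the vocabulary, dimensions, and randomly initialised weights (`E`, `Wq`, `Wk`, `Wv`, `Wo`) are all assumptions, and training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = sorted(set("hello world"))      # toy character vocabulary (8 chars)
stoi = {ch: i for i, ch in enumerate(vocab)}
V, d = len(vocab), 16                   # vocab size, embedding dimension

# Randomly initialised parameters; a real model would learn these.
E = rng.normal(0, 0.02, (V, d))         # character embedding table
Wq, Wk, Wv = (rng.normal(0, 0.02, (d, d)) for _ in range(3))
Wo = rng.normal(0, 0.02, (d, V))        # projection back to vocab logits

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def next_char_logits(text):
    """Single-head causal self-attention over the context; returns
    logits for the character following the last position."""
    x = E[[stoi[c] for c in text]]              # (T, d) token embeddings
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)               # (T, T) attention scores
    mask = np.triu(np.ones((len(text), len(text))), k=1)
    scores = np.where(mask == 1, -1e9, scores)  # causal mask: no lookahead
    attn = softmax(scores, axis=-1) @ v         # (T, d) mixed values
    return (attn @ Wo)[-1]                      # logits for the next char

probs = softmax(next_char_logits("hello"))
print(probs.shape)  # (8,) — a distribution over the character vocabulary
```

With untrained weights the distribution is near-uniform; training would adjust the parameters so that, e.g., `"hello"` assigns high probability to the space character.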
Related projects:
- GPT-2 small trained on phi-like data ☆65 · Updated 7 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆81 · Updated last year
- Let's create synthetic textbooks together :) ☆70 · Updated 7 months ago
- Experimental sampler to make LLMs more creative ☆29 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated 8 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆143 · Updated this week
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆154 · Updated 11 months ago
- Low-Rank adapter extraction for fine-tuned transformers model ☆154 · Updated 4 months ago
- Train your own small bitnet model ☆47 · Updated 3 months ago
- QLoRA with Enhanced Multi GPU Support ☆36 · Updated last year
- Python examples using the bigcode/tiny_starcoder_py 159M model to generate code ☆43 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models ☆96 · Updated 4 months ago
- Small finetuned LLMs for a diverse set of useful tasks ☆119 · Updated last year
- An all-new language model that processes ultra-long sequences of 100,000+ tokens, ultra-fast ☆131 · Updated 2 weeks ago
- Merge Transformers language models using gradient parameters ☆193 · Updated last month
- Multi-Domain Expert Learning ☆67 · Updated 7 months ago
- Train Large Language Models (LLM) using LoRA ☆21 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated 3 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆217 · Updated 6 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆37 · Updated 10 months ago
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆30 · Updated last year