hunar4321 / reweight-gpt
Reweight GPT - a simple neural network using the transformer architecture for next-character prediction
☆56 · Updated last year
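The repository name suggests the "reweight" idea: learning the position-mixing weights of a transformer block directly as parameters, instead of computing them from queries and keys. The sketch below is an illustrative assumption under that reading, not the repository's actual code; the class and parameter names (`ReweightBlock`, `CharModel`, `ctx_len`, `d_model`) are made up for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReweightBlock(nn.Module):
    """Transformer-style block where the position-mixing weights are a
    directly learned parameter rather than computed from queries/keys.
    (Assumed structure, for illustration only.)"""
    def __init__(self, ctx_len: int, d_model: int):
        super().__init__()
        # Learned position-to-position mixing matrix, in place of attention.
        self.mix = nn.Parameter(torch.zeros(ctx_len, ctx_len))
        self.register_buffer(
            "causal_mask", torch.tril(torch.ones(ctx_len, ctx_len, dtype=torch.bool))
        )
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        T = x.size(1)
        w = self.mix[:T, :T].masked_fill(~self.causal_mask[:T, :T], float("-inf"))
        w = F.softmax(w, dim=-1)     # each position mixes itself + earlier positions
        x = x + w @ self.ln1(x)      # learned reweighting instead of attention
        return x + self.ff(self.ln2(x))

class CharModel(nn.Module):
    """Tiny character-level model: embed characters, one block, project to logits."""
    def __init__(self, vocab_size: int, ctx_len: int = 64, d_model: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(ctx_len, d_model)
        self.block = ReweightBlock(ctx_len, d_model)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):  # idx: (batch, seq) of character ids
        pos = torch.arange(idx.size(1), device=idx.device)
        h = self.block(self.embed(idx) + self.pos(pos))
        return self.head(h)  # logits for the next character at each position
```

In this sketch, training would use the standard next-character setup: feed characters 0..T-1, compute cross-entropy against characters 1..T, and sample from the last position's logits at generation time.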
Alternatives and similar repositories for reweight-gpt
Users interested in reweight-gpt are comparing it to the libraries listed below.
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆105 · Updated last year
- ☆73 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆162 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated last month
- entropix-style sampling + GUI ☆26 · Updated 7 months ago
- Merge Transformers language models using gradient parameters. ☆206 · Updated 10 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Python bindings for llama.cpp ☆65 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 4 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- ☆47 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆110 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Simple Model Similarities Analysis ☆21 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- ☆38 · Updated last year
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆64 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- ☆66 · Updated last year
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed ☆35 · Updated last year
- Train Large Language Models (LLMs) using LoRA ☆25 · Updated 2 years ago
- An all-new language model that processes ultra-long sequences of 100,000+ tokens, ultra-fast ☆149 · Updated 9 months ago
- Merge LLMs that are split into parts ☆26 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- ☆130 · Updated 3 years ago