Silver267 / pytorch-to-safetensor-converter
A simple converter that converts PyTorch .bin files to safetensors, intended for LLM conversion.
☆71 · Updated last year
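For context, converting a PyTorch .bin checkpoint to safetensors generally amounts to loading the state dict with torch.load and re-saving it with safetensors' save_file. Below is a minimal sketch (file names are illustrative, and it assumes the .bin file holds a plain state dict rather than a nested training checkpoint); it is not the repository's exact script.

```python
# Minimal .bin -> .safetensors conversion sketch (illustrative file names).
import torch
from safetensors.torch import save_file

# Load the original PyTorch checkpoint on CPU.
state_dict = torch.load("pytorch_model.bin", map_location="cpu", weights_only=True)

# safetensors stores each tensor independently, so shared or non-contiguous
# tensors must be materialized as separate contiguous copies before saving.
state_dict = {name: tensor.contiguous().clone() for name, tensor in state_dict.items()}

# Write the converted checkpoint; the "format" metadata matches what
# Hugging Face tooling expects for PyTorch-origin weights.
save_file(state_dict, "model.safetensors", metadata={"format": "pt"})
```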
Alternatives and similar repositories for pytorch-to-safetensor-converter
Users interested in pytorch-to-safetensor-converter are comparing it to the libraries listed below.
- A pipeline-parallel training script for LLMs. ☆157 · Updated 5 months ago
- An unsupervised model-merging algorithm for Transformers-based language models. ☆106 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆158 · Updated last year
- Merge Transformers language models using gradient parameters. ☆208 · Updated last year
- Automatically quantize GGUF models. ☆210 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆76 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆178 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- Easily convert HuggingFace models to GGUF format for llama.cpp ☆23 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆52 · Updated 2 years ago
- ☆51 · Updated last year
- Synthetic Role-Play Conversation Dataset Generation ☆47 · Updated 2 years ago
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models. ☆186 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system. ☆32 · Updated 2 years ago
- Implementation of DoRA ☆301 · Updated last year
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆138 · Updated 2 years ago
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights. ☆63 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆130 · Updated 2 years ago
- FuseAI Project ☆87 · Updated 8 months ago
- Generate multi-round role-play conversation data based on self-instruct and evol-instruct. ☆134 · Updated 9 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆247 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆160 · Updated 2 years ago
- ☆80 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated 2 years ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated 2 months ago
- A benchmark for role-playing language models ☆105 · Updated 4 months ago
- Make abliterated models with transformers, easily and quickly ☆89 · Updated 5 months ago