Silver267 / pytorch-to-safetensor-converter
A simple converter that converts PyTorch `.bin` files to the safetensors format, intended for LLM conversion.
☆ 69 · Updated last year
Alternatives and similar repositories for pytorch-to-safetensor-converter
Users interested in pytorch-to-safetensor-converter are comparing it to the repositories listed below.
- 8-bit CUDA functions for PyTorch in Windows 10 ☆ 69 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆ 105 · Updated last year
- ☆ 81 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 131 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆ 206 · Updated 10 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆ 103 · Updated 2 years ago
- Low-rank adapter extraction for fine-tuned transformer models ☆ 173 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆ 155 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆ 64 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆ 129 · Updated 2 years ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆ 129 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆ 51 · Updated 2 years ago
- Train Llama LoRAs Easily ☆ 31 · Updated last year
- ☆ 27 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆ 74 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆ 78 · Updated last year
- A pipeline-parallel training script for LLMs. ☆ 150 · Updated last month
- Model REVOLVER, a human-in-the-loop model mixing system. ☆ 33 · Updated last year
- Implementation of DoRA ☆ 294 · Updated last year
- Efficient 3-bit/4-bit quantization of LLaMA models ☆ 19 · Updated 2 years ago
- Synthetic Role-Play Conversation Dataset Generation ☆ 43 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆ 147 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆ 51 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ☆ 41 · Updated 2 years ago
- ☆ 53 · Updated last year
- ☆ 82 · Updated last year
- A benchmark for role-playing language models ☆ 99 · Updated last month
- Evaluating LLMs with Dynamic Data ☆ 93 · Updated last month
- Finetune any model on HF in less than 30 seconds ☆ 57 · Updated 2 months ago
- A project for real-time training of the RWKV model. ☆ 49 · Updated last year