Silver267 / pytorch-to-safetensor-converter
A simple converter that converts PyTorch .bin files to the .safetensors format, intended for LLM conversion.
☆72 · Updated last year
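For reference, converting a PyTorch checkpoint to safetensors generally boils down to loading the state dict and re-saving it with the safetensors library. The sketch below is a minimal illustration of that approach, not this repository's exact script; the file names and the assumption that the checkpoint is a flat dict of tensors are hypothetical.

```python
# Minimal sketch of a .bin -> .safetensors conversion; not this repository's exact script.
# Assumes the checkpoint is (or contains) a flat dict of tensors.
import torch
from safetensors.torch import save_file

def convert(bin_path: str, out_path: str) -> None:
    # Load the PyTorch checkpoint on CPU so no GPU is required.
    checkpoint = torch.load(bin_path, map_location="cpu")
    # Some checkpoints nest the weights under a "state_dict" key.
    state_dict = checkpoint.get("state_dict", checkpoint)
    # safetensors rejects shared storage and non-contiguous tensors,
    # so give every tensor its own contiguous copy before saving.
    state_dict = {k: v.clone().contiguous() for k, v in state_dict.items()}
    save_file(state_dict, out_path, metadata={"format": "pt"})

if __name__ == "__main__":
    convert("pytorch_model.bin", "model.safetensors")
```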
Alternatives and similar repositories for pytorch-to-safetensor-converter
Users interested in pytorch-to-safetensor-converter are comparing it to the libraries listed below.
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆157 · Updated last year
- A pipeline parallel training script for LLMs. ☆164 · Updated 7 months ago
- Merge Transformers language models by use of gradient parameters. ☆209 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆101 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- 8-bit CUDA functions for PyTorch in Windows 10 ☆68 · Updated 2 years ago
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca. ☆52 · Updated 2 years ago
- A bagel, with everything. ☆326 · Updated last year
- Train Llama Loras Easily ☆31 · Updated 2 years ago
- Make abliterated models with transformers, easy and fast ☆110 · Updated 2 weeks ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Synthetic Role-Play Conversation Dataset Generation ☆48 · Updated 2 years ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆141 · Updated 2 years ago
- Implementation of DoRA ☆306 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated 2 years ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆258 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · Updated last year
- Automatically quantize GGUF models ☆219 · Updated 2 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- ☆82 · Updated 2 years ago
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆469 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆72 · Updated 2 years ago
- A benchmark for role-playing language models ☆112 · Updated 7 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago