Silver267 / pytorch-to-safetensor-converter
A simple converter that converts PyTorch .bin files to the safetensors format, intended for LLM conversion.
☆72 · Updated last year
Alternatives and similar repositories for pytorch-to-safetensor-converter
Users interested in pytorch-to-safetensor-converter are comparing it to the libraries listed below.
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆157 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- A pipeline parallel training script for LLMs. ☆166 · Updated 9 months ago
- Merge Transformers language models by use of gradient parameters. ☆213 · Updated last year
- 8-bit CUDA functions for PyTorch on Windows 10 ☆68 · Updated 2 years ago
- Make abliterated models with transformers, easy and fast ☆113 · Updated last month
- ☆81 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- Implementation of DoRA ☆306 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- Low-rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆52 · Updated 2 years ago
- Automatically quantize GGUF models ☆219 · Updated last month
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆143 · Updated 2 years ago
- A bagel, with everything. ☆326 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆101 · Updated 2 years ago
- Train Llama LoRAs easily ☆31 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware ☆72 · Updated 2 years ago
- Text WebUI extension to add clever Notebooks to Chat mode ☆145 · Updated 5 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- FuseAI Project ☆87 · Updated last year
- Inference code for Mistral and Mixtral hacked up into the original Llama implementation ☆371 · Updated 2 years ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆169 · Updated 5 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Synthetic role-play conversation dataset generation ☆48 · Updated 2 years ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- A benchmark for role-playing language models ☆115 · Updated 8 months ago