Silver267 / pytorch-to-safetensor-converter
A simple converter that converts PyTorch .bin files to safetensors, intended for LLM conversion.
☆72 · Updated last year
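For reference, a conversion like the one this repository performs typically looks like the sketch below. This is a minimal sketch, not the repository's actual script: it assumes the `torch` and `safetensors` packages are installed and that the checkpoint is a plain state dict stored in `pytorch_model.bin` (both the file names and the helper are illustrative).

```python
# Minimal sketch of a .bin -> .safetensors conversion (illustrative, not the
# repository's actual script). Assumes `torch` and `safetensors` are installed.
import torch
from safetensors.torch import save_file

def convert(bin_path: str, out_path: str) -> None:
    # Load the checkpoint on CPU; weights_only=True avoids executing pickled
    # code (available in recent PyTorch versions).
    state_dict = torch.load(bin_path, map_location="cpu", weights_only=True)
    # Some checkpoints nest the weights under a "state_dict" key.
    if "state_dict" in state_dict:
        state_dict = state_dict["state_dict"]
    # safetensors rejects tensors that share storage, so make every tensor
    # contiguous and independent before saving.
    tensors = {
        name: tensor.contiguous().clone()
        for name, tensor in state_dict.items()
        if isinstance(tensor, torch.Tensor)
    }
    save_file(tensors, out_path)

if __name__ == "__main__":
    convert("pytorch_model.bin", "model.safetensors")
```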
Alternatives and similar repositories for pytorch-to-safetensor-converter
Users interested in pytorch-to-safetensor-converter are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- A pipeline parallel training script for LLMs. ☆164 · Updated 7 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆157 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆209 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆179 · Updated last year
- 8-bit CUDA functions for PyTorch in Windows 10 ☆68 · Updated 2 years ago
- Synthetic Role-Play Conversation Dataset Generation ☆48 · Updated 2 years ago
- Automatically quantize GGUF models ☆218 · Updated last month
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- Implementation of DoRA ☆307 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- A bagel, with everything. ☆325 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- Make abliterated models with transformers, easy and fast ☆101 · Updated this week
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆52 · Updated 2 years ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · Updated last year
- Train Llama LoRAs Easily ☆31 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆101 · Updated 2 years ago
- ☆81 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆137 · Updated 11 months ago
- A benchmark for role-playing language models ☆112 · Updated 6 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- Micro Llama is a small Llama-based model with 300M parameters trained from scratch with a $500 budget ☆163 · Updated 4 months ago
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated 2 years ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆257 · Updated last year
- ☆157 · Updated 3 weeks ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year