AlekseyKorshuk / role-play-synthetic
Synthetic Role-Play Conversation Dataset Generation
☆43 · Updated last year
Alternatives and similar repositories for role-play-synthetic
Users interested in role-play-synthetic are comparing it to the repositories listed below.
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆156 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆147 · Updated 11 months ago
- Merge Transformers language models using gradient parameters. ☆206 · Updated 11 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆132 · Updated last year
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆199 · Updated last year
- Generate multi-round role-play conversation data based on self-instruct and evol-instruct. ☆130 · Updated 6 months ago
- An unsupervised model-merging algorithm for Transformers-based language models. ☆106 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆240 · Updated last year
- Low-rank adapter extraction for fine-tuned Transformers models ☆173 · Updated last year
- ☆310 · Updated last year
- A pipeline-parallel training script for LLMs. ☆153 · Updated 2 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- ☆76 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆200 · Updated last year
- A bagel, with everything. ☆322 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated last year
- ☆52 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆94 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆252 · Updated last year
- EvolKit is a framework for automatically enhancing the complexity of instructions used for fine-tuning Large Language M… ☆229 · Updated 8 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆145 · Updated 9 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆290 · Updated last year
- Unofficial implementation of AlpaGasus ☆92 · Updated last year
- A simple converter that converts PyTorch .bin files to safetensors, intended for LLM conversion. ☆69 · Updated last year
- A benchmark for role-playing language models ☆99 · Updated last month
- A pipeline for LLM knowledge distillation ☆105 · Updated 3 months ago
- Implementation of DoRA ☆297 · Updated last year
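Several of the repositories above center on weight-space model merging (the spherical-merge and gradient-merge projects in particular). As a rough illustration of the idea behind spherical (SLERP) merging, here is a minimal NumPy sketch; the `slerp` function name and the dict-of-arrays "model" format are illustrative assumptions, not the API of any listed repository:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    Interpolates along the great-circle arc between the directions of v0
    and v1, which tends to preserve weight norms better than plain lerp.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)           # angle between the two weight directions
    if theta < eps:                  # nearly colinear: fall back to lerp
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# Merge two toy "models" (dicts of parameter arrays) halfway between them.
model_a = {"w": np.array([1.0, 0.0])}
model_b = {"w": np.array([0.0, 1.0])}
merged = {name: slerp(0.5, model_a[name], model_b[name]) for name in model_a}
```

In practice, merging tools apply this per-tensor (or per-layer, with per-layer `t` schedules) across two checkpoints with identical architectures; the listed projects add details such as handling norm layers and minimizing feature loss.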