AlekseyKorshuk / role-play-synthetic
Synthetic Role-Play Conversation Dataset Generation
☆48 · Updated 2 years ago
Alternatives and similar repositories for role-play-synthetic
Users interested in role-play-synthetic are comparing it to the repositories listed below.
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆157 · Updated last year
- A benchmark for role-playing language models ☆113 · Updated 7 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆136 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆212 · Updated last year
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆209 · Updated last year
- Spherical Merge PyTorch/HF format language models with minimal feature loss. ☆142 · Updated 2 years ago
- Low-rank adapter extraction for fine-tuned Transformers models ☆180 · Updated last year
- Codebase for LLM story generation; updated version of https://github.com/yangkevin2/doc-story-generation ☆87 · Updated 2 years ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆151 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- ☆78 · Updated 2 years ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated 2 years ago
- A bagel, with everything. ☆325 · Updated last year
- A benchmark for emotional intelligence in large language models ☆396 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆101 · Updated 2 years ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆260 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆228 · Updated 2 years ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆96 · Updated 2 years ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- A pipeline-parallel training script for LLMs. ☆165 · Updated 8 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆244 · Updated last year
- NexusRaven-13B, a new SOTA open-source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… ☆318 · Updated 2 years ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆96 · Updated 8 months ago
- Our own implementation of "Layer-Selective Rank Reduction" ☆240 · Updated last year
- Let's build better datasets, together! ☆267 · Updated last year
- ☆51 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated 2 years ago
- ☆313 · Updated last year
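Several entries above are model-merging tools; the "Spherical Merge" one refers to SLERP-style weight merging. As a rough illustration only (not that repository's actual code), per-tensor spherical linear interpolation between two checkpoints' weights might be sketched like this, assuming NumPy arrays of matching shape:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors of the same shape.

    t=0 returns `a`, t=1 returns `b`; intermediate t follows the arc between
    the two (flattened, normalized) weight vectors rather than a straight line.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two weight vectors (directions only).
    a_dir = a_flat / (np.linalg.norm(a_flat) + eps)
    b_dir = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    merged = (np.sin((1.0 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape)

# Hypothetical usage: merge one layer's weights from two fine-tuned models.
rng = np.random.default_rng(0)
layer_a = rng.normal(size=(4, 4))
layer_b = rng.normal(size=(4, 4))
merged_layer = slerp(layer_a, layer_b, t=0.5)
```

A real merge would apply this per parameter tensor across two full state dicts (and handle embeddings, norms, etc. per the tool's own rules); this sketch only shows the interpolation itself.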