KyujinHan / Sakura-SOLAR-DPO
Sakura-SOLAR-DPO: Merge, SFT, and DPO
☆116 · Updated 2 years ago
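The project's recipe chains three stages: model merging, supervised fine-tuning (SFT), and direct preference optimization (DPO). As a rough illustration of the final stage only, here is a minimal sketch of a DPO run with Hugging Face's trl library; everything concrete in it (model name, preference data, hyperparameters) is an assumption for illustration, not this repository's actual configuration, and the DPOTrainer signature varies across trl versions.

```python
# Minimal DPO sketch with trl. Model name, data, and hyperparameters
# are placeholder assumptions, not Sakura-SOLAR-DPO's actual setup.
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "upstage/SOLAR-10.7B-Instruct-v1.0"  # assumed base model; check the repo for the real one
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO trains on preference triples: a prompt plus chosen/rejected responses.
pairs = Dataset.from_dict({
    "prompt":   ["What does DPO optimize?"],
    "chosen":   ["It directly optimizes a preference objective over response pairs."],
    "rejected": ["It trains a separate reward model first."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,  # frozen reference policy used by the DPO loss
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1),
    beta=0.1,             # strength of the implicit KL penalty
    train_dataset=pairs,
    tokenizer=tokenizer,
)
trainer.train()
```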
Alternatives and similar repositories for Sakura-SOLAR-DPO
Users interested in Sakura-SOLAR-DPO are comparing it to the repositories listed below
- Evolve LLM training instructions, from English instructions to any language. ☆119 · Updated 2 years ago
- Manage histories of LLM-based applications ☆91 · Updated 2 years ago
- 1-Click is all you need. ☆63 · Updated last year
- ☆37 · Updated 2 years ago
- Hugging Face implementation of Alpaca-LoRA using DeepSpeed and FullyShardedDataParallel ☆24 · Updated 2 years ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss (see the SLERP sketch after this list). ☆141 · Updated 2 years ago
- Efficient fine-tuning for Korean LLMs (ko-llm) ☆184 · Updated last year
- ☆32 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Official code for the ACL 2023 (short, Findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models" ☆45 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 7 months ago
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in multinode environments ☆35 · Updated 4 months ago
- [NAACL 2024] Official repository for "KTRL+F: Knowledge-Augmented In-Document Search" ☆23 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year
- ☆37 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 6 months ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated last year
- The Universe of Evaluation. All about evaluation for LLMs. ☆230 · Updated last year
- ☆16 · Updated last year
- This is the official repository for Inheritune. ☆118 · Updated 10 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆261 · Updated last year
- ☆12 · Updated last year
- Implementation of a stop sequencer for Hugging Face Transformers ☆16 · Updated 2 years ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆52 · Updated 4 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆96 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
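The "Spherical Merge" entry above refers to SLERP-style weight merging, which interpolates along the arc between two weight tensors rather than along the straight line between them, preserving their norms better than plain averaging. A minimal sketch under that reading (the function name, fallback threshold, and parameter-wise merge loop are illustrative assumptions, not that repository's exact code):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Falls back to plain linear interpolation when the tensors are
    nearly colinear, where the slerp formula is numerically unstable.
    """
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    # Angle between the two tensors, treated as high-dimensional vectors.
    cos_omega = torch.dot(v0f, v1f) / (v0f.norm() * v1f.norm() + 1e-8)
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < 1e-4:  # nearly colinear: lerp is fine here
        return (1 - t) * v0 + t * v1
    sin_omega = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / sin_omega) * v0 \
         + (torch.sin(t * omega) / sin_omega) * v1

# Hypothetical usage: merge two checkpoints parameter by parameter.
# sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
# merged = {name: slerp(0.5, sd_a[name], sd_b[name]) for name in sd_a}
```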