l294265421 / alpaca-rlhf
Finetuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat
☆113 · Updated last year
Alternatives and similar repositories for alpaca-rlhf:
Users interested in alpaca-rlhf are comparing it to the repositories listed below
- ☆84 · Updated last year
- ☆133 · Updated 10 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆243 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆137 · Updated 8 months ago
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆215 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆75 · Updated 4 months ago
- How to train an LLM tokenizer ☆142 · Updated last year
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING ☆87 · Updated 11 months ago
- Naive Bayes-based Context Extension ☆320 · Updated 3 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- ☆160 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆342 · Updated 6 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆97 · Updated 3 months ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆204 · Updated last year
- ☆95 · Updated last year
- Chinese Large Language Model Evaluation, Phase 1 ☆107 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆120 · Updated 9 months ago
- ☆164 · Updated last year
- ☆97 · Updated 11 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆144 · Updated 6 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆94 · Updated last year
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆123 · Updated last year
- The complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆65 · Updated last year
- ☆141 · Updated 8 months ago
- NTK-scaled version of ALiBi position encoding in Transformer ☆66 · Updated last year
- A full pipeline to finetune the ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Huma… ☆134 · Updated last year
- ☆278 · Updated 10 months ago
- Instruction tuning tool for large language models (supports FlashAttention) ☆171 · Updated last year
- Chinese instruction tuning datasets ☆128 · Updated 11 months ago