git-cloner / llama-lora-fine-tuning
LLaMA fine-tuning with LoRA
★140 · Updated 9 months ago
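The common thread in this repository and the alternatives below is LoRA (Low-Rank Adaptation). The core idea can be sketched in plain NumPy: the pretrained weight matrix stays frozen, and only two small low-rank matrices are trained. The dimensions below are illustrative placeholders, not real LLaMA shapes.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight W (d_out x d_in),
# train two small matrices B (d_out x r) and A (r x d_in), r << d_out, d_in.
# Shapes are hypothetical, chosen only for illustration.
d_out, d_in, r, alpha = 64, 64, 4, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# With B initialized to zero, the adapted layer starts identical to the base.
x = rng.standard_normal((2, d_in))
assert np.allclose(lora_forward(x), x @ W.T)

# Trainable parameters drop from d_out*d_in to r*(d_out + d_in).
full = d_out * d_in          # 4096
lora = r * (d_out + d_in)    # 512
print(full, lora)
```

The zero initialization of `B` is why LoRA training starts from exactly the pretrained model's behavior; in practice, libraries such as Hugging Face PEFT apply this pattern to the attention projection matrices of the base model.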
Alternatives and similar repositories for llama-lora-fine-tuning:
Users interested in llama-lora-fine-tuning are comparing it to the libraries listed below.
- LLaMA-2 fine-tuning with DeepSpeed and LoRA ★172 · Updated last year
- An unofficial implementation of Self-Alignment with Instruction Backtranslation. ★136 · Updated 7 months ago
- Naive Bayes-based Context Extension ★320 · Updated 2 months ago
- Large Language Models Are Reasoning Teachers (ACL 2023) ★320 · Updated last year
- ★94 · Updated last year
- Measuring Massive Multitask Chinese Understanding ★88 · Updated 10 months ago
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more ★96 · Updated 9 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ★534 · Updated 2 months ago
- A full pipeline to fine-tune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback). ★211 · Updated 9 months ago
- ★130 · Updated 10 months ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat ★112 · Updated last year
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ★245 · Updated 5 months ago
- Collaborative Training of Large Language Models in an Efficient Way ★411 · Updated 5 months ago
- An open-source chatbot built with ExpertPrompting that achieves 96% of ChatGPT's capability. ★298 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ★265 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ★329 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ★145 · Updated 7 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ★541 · Updated 11 months ago
- ★139 · Updated 7 months ago
- [ACL 2024] An easy-to-use instruction processing framework for LLMs. ★394 · Updated last month
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ★251 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ★233 · Updated 3 months ago
- Evaluating LLMs' multi-round chatting capability by assessing conversations generated by two LLM instances. ★144 · Updated last year
- A fine-tuned LLaMA that is good at arithmetic tasks ★177 · Updated last year
- All available datasets for instruction tuning of large language models ★242 · Updated last year
- Generative Judge for Evaluating Alignment ★228 · Updated last year
- Data and code for Program of Thoughts (TMLR 2023) ★259 · Updated 9 months ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ★203 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ★118 · Updated 8 months ago
- ★271 · Updated last year