lxe / llama-tune
LLaMA Tuning with Stanford Alpaca Dataset using DeepSpeed and Transformers
☆51 · Updated 2 years ago
Alternatives and similar repositories for llama-tune
Users interested in llama-tune are comparing it to the libraries listed below.
- ☆98 · Updated 2 years ago
- ☆105 · Updated 2 years ago
- [ICLR 2023] Codebase for Copy-Generator model, including an implementation of kNN-LM ☆190 · Updated 9 months ago
- A dataset for training/evaluating Question Answering Retrieval models on ChatGPT responses, with the possibility of training/evaluating on… ☆141 · Updated last year
- The aim of this repository is to utilize LLaMA to reproduce and enhance Stanford Alpaca ☆98 · Updated 2 years ago
- A Multi-Turn Dialogue Corpus based on Alpaca Instructions ☆175 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆109 · Updated 2 years ago
- Unofficial implementation of AlpaGasus ☆93 · Updated 2 years ago
- ☆179 · Updated 2 years ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- ☆123 · Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆73 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model ☆75 · Updated 2 years ago
- ☆162 · Updated 2 years ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆94 · Updated 2 years ago
- Open Source WizardCoder Dataset ☆161 · Updated 2 years ago
- Datasets for Instruction Tuning of Large Language Models ☆258 · Updated last year
- ☆68 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated last year
- Reinforcement learning training for LLMs such as GPT-2, LLaMA, BLOOM, and others ☆26 · Updated 2 years ago
- Measuring Massive Multitask Chinese Understanding ☆89 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated 2 years ago
- Source code and datasets for How well do Large Language Models perform in Arithmetic tasks? ☆56 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆225 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning ☆49 · Updated 2 years ago
- Source code for the ACL 2023 paper Decoder Tuning: Efficient Language Understanding as Decoding ☆51 · Updated 2 years ago
- YuLan-IR: Information Retrieval Boosted LMs ☆221 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation