lxe / llama-tune
LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers
☆51 · Updated last year
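Below is a minimal sketch of the kind of training script this repo describes: fine-tuning LLaMA on the Alpaca instruction data with Hugging Face Transformers and DeepSpeed. It is not llama-tune's actual code; the checkpoint name, the `alpaca_data.json` path, the `ds_config.json` contents, and all hyperparameters are assumptions.

```python
# Minimal sketch (not llama-tune's actual script): fine-tune LLaMA on the
# Stanford Alpaca instruction data with Hugging Face Transformers + DeepSpeed.
# Checkpoint name, file paths, and hyperparameters below are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "huggyllama/llama-7b"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Alpaca records carry "instruction", "input", and "output" fields.
dataset = load_dataset("json", data_files="alpaca_data.json")["train"]

def format_and_tokenize(example):
    # Render one record into an Alpaca-style prompt and tokenize it.
    prompt = (f"### Instruction:\n{example['instruction']}\n\n"
              f"### Input:\n{example['input']}\n\n"
              f"### Response:\n{example['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

tokenized = dataset.map(format_and_tokenize,
                        remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama-alpaca",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    bf16=True,
    deepspeed="ds_config.json",  # assumed DeepSpeed ZeRO config file
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False yields standard causal-LM labels (inputs shifted by one)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A script like this would be launched with the DeepSpeed launcher (e.g. `deepspeed train.py`) so the ZeRO config takes effect across GPUs.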
Alternatives and similar repositories for llama-tune:
Users interested in llama-tune are comparing it to the repositories listed below
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding" ☆48 · Updated last year
- MultilingualShareGPT, a free multilingual corpus for LLM training ☆72 · Updated last year
- Reinforcement learning training for LLMs such as GPT-2, LLaMA, and BLOOM ☆26 · Updated last year
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated last year
- Uses LLaMA to reproduce and enhance Stanford Alpaca ☆97 · Updated last year
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- ☆105 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- An experimental implementation of a retrieval-enhanced language model ☆74 · Updated 2 years ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning ☆47 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated last year
- ☆96 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- ☆68 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆88 · Updated 11 months ago
- A Chinese instruction dataset for fine-tuning LLMs ☆27 · Updated last year
- ☆36 · Updated 10 months ago
- A dataset for training/evaluating question-answering retrieval models on ChatGPT responses, with the possibility of training/evaluating on… ☆140 · Updated last year
- NTK-scaled version of the ALiBi position encoding in Transformers ☆66 · Updated last year
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆113 · Updated last year
- Complete training code for an open-source, high-performance LLaMA model, covering the full pipeline from pre-training to RLHF ☆65 · Updated last year
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆215 · Updated last year
- Source code and datasets for "How well do Large Language Models perform in Arithmetic tasks?" ☆56 · Updated last year
- ☆124 · Updated last year
- ☆93 · Updated last year
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆82 · Updated 11 months ago
- Prompt fine-tuning on GLM, BART, and Flan-T5 ☆20 · Updated 2 years ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago