lxe / llama-tune
LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers
☆50Updated last year
Alternatives and similar repositories for llama-tune:
Users interested in llama-tune are comparing it to the libraries listed below
- Unofficial implementation of AlpaGasus☆90Updated last year
- MultilingualShareGPT, a free multilingual corpus for LLM training☆72Updated last year
- ☆96Updated last year
- ☆105Updated last year
- An experiment on Dynamic NTK Scaling RoPE (a minimal sketch of the scaling rule follows this list)☆62Updated last year
- Uses LLaMA to reproduce and enhance Stanford Alpaca☆96Updated last year
- Inference script for Meta's LLaMA models using Hugging Face wrapper☆111Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples'☆76Updated last year
- Source code and datasets for "How well do Large Language Models perform in Arithmetic tasks?"☆56Updated last year
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding"☆48Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation"☆70Updated last year
- Retrieves Parquet files from Hugging Face, then identifies and quantifies junk data, duplication, contamination, and biased content in datasets☆51Updated last year
- ☆69Updated last year
- Open Source WizardCoder Dataset☆156Updated last year
- Reinforcement learning training for LLMs such as GPT-2, LLaMA, and BLOOM☆26Updated last year
- Spherical (SLERP) merging of PyTorch/HF-format language models with minimal feature loss (a merge sketch follows this list).☆115Updated last year
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism☆215Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: longer-context inference without fine-tuning☆47Updated last year
- Pre-training code for CrystalCoder 7B LLM☆55Updated 9 months ago
- [ICLR 2023] Codebase for Copy-Generator model, including an implementation of kNN-LM☆185Updated 3 weeks ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks☆208Updated last year
- Complete training code for an open-source, high-performance LLaMA model, covering the full pipeline from pre-training to RLHF.☆64Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024)☆204Updated 9 months ago
- Finetuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat☆112Updated last year
- A self-alignment method and benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters…"☆183Updated 8 months ago
- LMTuner: Make the LLM Better for Everyone☆33Updated last year
- An experimental implementation of the retrieval-enhanced language model☆74Updated 2 years ago
- OPD: Chinese Open-Domain Pre-trained Dialogue Model☆74Updated last year
- Measuring Massive Multitask Chinese Understanding☆88Updated 10 months ago
- ☆178Updated last year
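
Several of the entries above revolve around Dynamic NTK scaling for RoPE. The sketch below is a minimal, self-contained illustration of that idea (enlarging the rotary base when the sequence exceeds the pretrained context window); all function names and default values here are illustrative assumptions, not code from any listed repository.

```python
# Minimal sketch of Dynamic NTK scaling for RoPE (illustrative, not from any repo above).
import numpy as np

def dynamic_ntk_inv_freq(dim, seq_len, max_pos=2048, base=10000.0, scaling_factor=1.0):
    """Return RoPE inverse frequencies, growing the rotary base when seq_len
    exceeds the pretrained context length (max_pos)."""
    if seq_len > max_pos:
        # Dynamic NTK rule: rescale the base as the sequence grows past max_pos.
        base = base * (
            (scaling_factor * seq_len / max_pos) - (scaling_factor - 1)
        ) ** (dim / (dim - 2))
    return 1.0 / (base ** (np.arange(0, dim, 2, dtype=np.float64) / dim))

def rope_cos_sin(dim, seq_len, **kwargs):
    """Build the cos/sin caches used to rotate query/key vectors."""
    inv_freq = dynamic_ntk_inv_freq(dim, seq_len, **kwargs)
    angles = np.outer(np.arange(seq_len), inv_freq)   # shape: (seq_len, dim // 2)
    return np.cos(angles), np.sin(angles)

cos, sin = rope_cos_sin(dim=128, seq_len=8192)  # well beyond the 2048-token pretrained window
print(cos.shape)                                 # (8192, 64)
```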
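
The spherical-merge entry above is based on SLERP interpolation of model weights. The following is a rough sketch of that interpolation on a single flattened weight tensor, under the assumption that a real merge would apply it tensor by tensor over a model's state dict; the helper name and shapes are hypothetical.

```python
# Minimal SLERP weight-merging sketch (hypothetical helper, not the repo's implementation).
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherically interpolate between two flattened weight tensors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)              # angle between the two weight vectors
    if theta < eps:                     # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

w1 = np.random.randn(4096).astype(np.float32)  # stand-ins for one layer's weights from model A
w2 = np.random.randn(4096).astype(np.float32)  # ... and the corresponding layer from model B
merged = slerp(0.5, w1, w2)
print(merged.shape)                             # (4096,)
```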