Longyichen / Alpaca-family-library
Summarizes all open-source Large Language Models and low-cost replication methods for ChatGPT.
☆135 · Updated last year
Alternatives and similar repositories for Alpaca-family-library:
Users that are interested in Alpaca-family-library are comparing it to the libraries listed below
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING☆87 · Updated 11 months ago
- Lightweight local website for displaying the performance of different chat models.☆85 · Updated last year
- Silk Road will be the dataset zoo for Luotuo (骆驼). Luotuo is an open-source Chinese LLM project founded by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子…☆38 · Updated last year
- Chinese large language model evaluation, round 2☆70 · Updated last year
- MD5 links for a Chinese book corpus☆217 · Updated last year
- ☆172 · Updated last year
- Complete training code for an open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF.☆65 · Updated last year
- ☆128 · Updated last year
- Chinese large language model evaluation, round 1☆107 · Updated last year
- MultilingualShareGPT, a free multilingual corpus for LLM training☆72 · Updated last year
- MOSS chat fine-tuning☆50 · Updated 10 months ago
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding"☆48 · Updated last year
- ☆97 · Updated last year
- Make LLMs easier to use☆59 · Updated last year
- Text deduplication☆69 · Updated 9 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models☆40 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model☆75 · Updated last year
- Deep learning☆150 · Updated last week
- Instruction-tuning toolkit for large language models (supports FlashAttention)☆171 · Updated last year
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE☆85 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"☆123 · Updated 9 months ago
- chatglm_rlhf_finetuning☆28 · Updated last year
- ☆95 · Updated last year
- Implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning☆47 · Updated last year
- 1.4B sLLM for Chinese and English - HammerLLM🔨☆44 · Updated 11 months ago
- An open-source conversational language model developed by the Knowledge Works Research Laboratory at Fudan University.☆64 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation"☆70 · Updated last year
- Chinese instruction-tuning datasets☆129 · Updated 11 months ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat☆114 · Updated last year
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer☆123 · Updated last year