seanzhang-zhichen / Qwen-WisdomVast
Qwen-WisdomVast is a large model built on Qwen1.5-7B and trained with the DoRA and LoRA+ methods on 1 million high-quality Chinese multi-turn SFT examples, 200,000 English multi-turn SFT examples, and 2,000 single-turn self-cognition examples. Compared to Qwen1.5-7B-Chat, it improves mathematical ability by 5.16%, and by 12.8% on the Hu…
☆18 · Updated 6 months ago
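The description pairs DoRA with LoRA+ on a Qwen1.5-7B base. As a rough illustration of that recipe, here is a minimal sketch using Hugging Face PEFT; the rank, alpha, target modules, and LoRA+ learning-rate ratio are assumptions for illustration, not hyperparameters published by this project.

```python
# Minimal sketch, not this repository's released training code.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B")

config = LoraConfig(
    r=16,                    # assumed rank; not published on this page
    lora_alpha=32,           # assumed scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_dora=True,           # DoRA: weight-decomposed low-rank adaptation
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

# LoRA+ gives the adapter's B matrices a larger learning rate than the
# A matrices; the ratio below is illustrative.
base_lr, lr_ratio = 2e-4, 16
groups = {"a": [], "b": [], "rest": []}
for name, param in model.named_parameters():
    if not param.requires_grad:
        continue
    if "lora_A" in name:
        groups["a"].append(param)
    elif "lora_B" in name:
        groups["b"].append(param)
    else:
        groups["rest"].append(param)  # e.g. DoRA magnitude vectors

optimizer = torch.optim.AdamW([
    {"params": groups["a"], "lr": base_lr},
    {"params": groups["b"], "lr": base_lr * lr_ratio},
    {"params": groups["rest"], "lr": base_lr},
])
```

A standard SFT loop (for example, the transformers Trainer over the multi-turn data) would then run on top of this optimizer.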
Related projects
Alternatives and complementary repositories for Qwen-WisdomVast
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated 6 months ago
- Code implementation of Baichuan's Dynamic NTK-ALiBi: inference over longer texts without fine-tuning ☆46 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆52 · Updated 6 months ago
- NBCE (Naive Bayes-based Context Extension) on ChatGLM-6B ☆14 · Updated last year
- An introduction to using Docker and Docker Compose. ☆20 · Updated 2 months ago
- Text deduplication ☆67 · Updated 5 months ago
- NTK-scaled version of the ALiBi position encoding in the Transformer ☆66 · Updated last year
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆43 · Updated 9 months ago
- ☆37 · Updated 4 months ago
- Imitate OpenAI with local models ☆85 · Updated 2 months ago
- ☆90 · Updated 5 months ago
- ☆129 · Updated 4 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆38 · Updated 8 months ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- PICA: a multi-turn empathetic dialogue model ☆85 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- ☆22 · Updated last week
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆60 · Updated last year
- ☆24 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆65 · Updated last month
- chatglm_rlhf_finetuning ☆27 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated last year
- ☆93 · Updated 7 months ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆106 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆54 · Updated last year
- Train an LLM from scratch on a single 24 GB GPU ☆49 · Updated 2 weeks ago
- qwen-7b and qwen-14b fine-tuning ☆82 · Updated 6 months ago
- A hands-on guide to large language models: application practice and real-world deployment ☆35 · Updated last month
- ☆95 · Updated last year