genggui001 / Megatron-DeepSpeed-Llama
☆82 · Updated last year
Related projects
Alternatives and complementary repositories for Megatron-DeepSpeed-Llama
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆90 · Updated 9 months ago
- NTK-scaled version of the ALiBi position encoding in Transformers ☆66 · Updated last year
- How to train an LLM tokenizer ☆129 · Updated last year
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆106 · Updated last year
- Code implementation of Baichuan Dynamic NTK-ALiBi: inference over longer texts without fine-tuning ☆46 · Updated last year
- Chinese instruction-tuning datasets ☆118 · Updated 7 months ago
- Complete training code for an open-source, high-performance LLaMA model, covering the full pipeline from pre-training to RLHF ☆60 · Updated last year
- Text deduplication ☆67 · Updated 5 months ago
- Model Compression for Big Models ☆151 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆38 · Updated 8 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆65 · Updated last month
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆207 · Updated 11 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆109 · Updated 5 months ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆43 · Updated 9 months ago
- Text embedding ☆138 · Updated last year
- Efficient, low-resource, distributed transformer implementation based on BMTrain ☆238 · Updated 11 months ago
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated 7 months ago
- A more efficient GLM implementation! ☆55 · Updated last year
- Train a Chinese BPE vocabulary with SentencePiece and use it in transformers ☆109 · Updated last year
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆116 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- "WuDao" dataset ☆39 · Updated 3 years ago
- Train an LLM from scratch on a single 24GB GPU ☆49 · Updated 2 weeks ago