DreamerGPT / DreamerGPT
🌱 梦想家 (DreamerGPT): Instruction fine-tuning for Chinese large language models
☆51 · Updated 2 years ago
Alternatives and similar repositories for DreamerGPT
Users interested in DreamerGPT are comparing it to the repositories listed below.
- A Mixture-of-Experts (MoE) implementation for PyTorch, [ATC'23] SmartMoE ☆64 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline parallelism; faster than ZeRO/ZeRO++/FSDP ☆97 · Updated last year
- ☆83 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts with no fine-tuning (see the sketch after this list) ☆47 · Updated last year
- TencentLLMEval is a comprehensive and extensive benchmark for human evaluation of large models that includes task trees, standards, … ☆38 · Updated 4 months ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆209 · Updated last year
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more ☆97 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆108 · Updated 3 years ago
- This is a personal reimplementation of Google's Infini-Transformer, using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated 2 years ago
- Model Compression for Big Models ☆163 · Updated 2 years ago
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- ☆14 · Updated last year
- ☆52 · Updated 2 years ago
- How to train an LLM tokenizer ☆151 · Updated 2 years ago
- ☆79 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆126 · Updated 2 years ago
- Scripts for LLM pre-training and fine-tuning (with/without LoRA and DeepSpeed) ☆82 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆66 · Updated 2 years ago
- GoGPT: Chinese-English enhanced large models trained on Llama/Llama 2 | Chinese-Llama2 ☆78 · Updated last year
- A roundup of ChatGPT-related resources ☆55 · Updated 2 years ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Naive Bayes-based Context Extension ☆326 · Updated 7 months ago
- An NTK-scaled version of the ALiBi position encoding for Transformers ☆68 · Updated last year
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆223 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- Models and examples built with OneFlow ☆97 · Updated 9 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆416 · Updated 10 months ago
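
Two entries above share one idea: the Baichuan Dynamic NTK-ALiBi repo and the NTK-scaled ALiBi repo both extend context length by rescaling ALiBi's linear attention biases. The sketch below is a minimal illustration, not either repo's implementation: it builds standard ALiBi biases and, when the sequence exceeds an assumed training length `train_len`, shrinks all slopes uniformly by `train_len / seq_len`. That uniform shrinking is a simple linear-scaling stand-in; the NTK variants instead adjust the slope base so each head is rescaled differently. Function names and the `train_len` default are illustrative assumptions.

```python
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # Standard ALiBi slopes (Press et al., 2022): the geometric sequence
    # 2^(-8/n), 2^(-16/n), ..., 2^(-8); assumes n_heads is a power of two.
    start = 2.0 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int, train_len: int = 2048) -> torch.Tensor:
    # Additive causal attention bias of shape (n_heads, seq_len, seq_len).
    slopes = alibi_slopes(n_heads)
    if seq_len > train_len:
        # ASSUMPTION: uniform linear shrinking of all slopes, a simple
        # stand-in for the per-head NTK rescaling used by those repos.
        slopes = slopes * (train_len / seq_len)
    pos = torch.arange(seq_len)
    # Entry (i, j) holds j - i, which is <= 0 for keys at or before the
    # query position; tril() zeroes the (masked) upper triangle.
    dist = torch.tril(pos[None, :] - pos[:, None])
    return slopes[:, None, None] * dist  # more negative for distant keys

# Usage: scores = q @ k.transpose(-1, -2) / d**0.5 + alibi_bias(h, L)
```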