DreamerGPT / DreamerGPT
🌱 DreamerGPT (梦想家, "Dreamer"): instruction fine-tuning for Chinese large language models
☆51 · Updated 2 years ago
Alternatives and similar repositories for DreamerGPT
Users interested in DreamerGPT are comparing it to the libraries listed below:
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆71 · Updated 2 years ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated last year
- A curated collection of ChatGPT-related resources ☆56 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆98 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- Model Compression for Big Models ☆167 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- TencentLLMEval is a comprehensive and extensive benchmark for human evaluation of large models that includes task trees, standards, … ☆41 · Updated 9 months ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning ☆49 · Updated 2 years ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF ☆67 · Updated 2 years ago
- Scripts for LLM pre-training and fine-tuning (with/without LoRA, DeepSpeed) ☆87 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation (see the NTK base-scaling sketch after this list) ☆73 · Updated 2 years ago
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆60 · Updated last year
- Naive Bayes-based Context Extension ☆326 · Updated last year
- NTK-scaled version of ALiBi position encoding in Transformer ☆69 · Updated 2 years ago
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- A more efficient GLM implementation! ☆54 · Updated 2 years ago
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more ☆99 · Updated last year
- ☆51 · Updated 2 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… ☆58 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆113 · Updated 3 years ago
- A paper list of pre-trained language models (PLMs) ☆81 · Updated 4 years ago
- ☆59 · Updated 2 years ago
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆130 · Updated 2 years ago
- A LLaMA1/LLaMA2 Megatron implementation ☆28 · Updated 2 years ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆116 · Updated 2 years ago
- ☆43 · Updated 2 years ago
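
Several entries above (the Dynamic NTK-ALiBi code, the RoPE extrapolation scaling-laws code, and the NTK-scaled ALiBi encoding) revolve around the same trick: rescaling the position-encoding base so a model can attend over contexts longer than it was trained on, without fine-tuning. Below is a minimal sketch of NTK-aware base scaling for RoPE; the function name and the `scale` parameter are illustrative and not taken from any of the repositories listed here.

```python
import torch

def ntk_rope_cos_sin(dim: int, max_pos: int, base: float = 10000.0, scale: float = 4.0):
    """Precompute RoPE cos/sin tables with NTK-aware base scaling.

    Enlarging the base by scale ** (dim / (dim - 2)) interpolates the
    low-frequency dimensions while leaving the high-frequency ones
    nearly intact, which is why longer-context inference can work
    without any fine-tuning.
    """
    ntk_base = base * scale ** (dim / (dim - 2))
    inv_freq = 1.0 / (ntk_base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    positions = torch.arange(max_pos, dtype=torch.float32)
    freqs = torch.outer(positions, inv_freq)   # (max_pos, dim // 2)
    emb = torch.cat((freqs, freqs), dim=-1)    # (max_pos, dim)
    return emb.cos(), emb.sin()

# Example: a head dimension of 128, extended 4x from 2k to 8k positions.
cos, sin = ntk_rope_cos_sin(dim=128, max_pos=8192, scale=4.0)
```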