cauyxy / bilivideos
☆52 · Updated 2 years ago
Alternatives and similar repositories for bilivideos
Users interested in bilivideos are comparing it to the libraries listed below.
- How to train an LLM tokenizer ☆151 · Updated 2 years ago
- ☆83 · Updated last year
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to large models. ☆328 · Updated 11 months ago
- Inference code for LLaMA models ☆122 · Updated last year
- An instruction-tuning toolkit for large language models (with FlashAttention support) ☆174 · Updated last year
- Train an LLM from scratch on a single 24 GB GPU ☆56 · Updated last week
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆155 · Updated 9 months ago
- ☆111 · Updated 8 months ago
- Simple and efficient multi-GPU fine-tuning of large models using DeepSpeed + Trainer ☆126 · Updated 2 years ago
- Train a Chinese vocabulary with BPE in sentencepiece and use it in transformers (a minimal sketch follows this list). ☆118 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction and domain/task adaptation of open source ChatGPT alternatives/implementations. PiXi… ☆260 · Updated 7 months ago
- ☆307 · Updated 2 years ago
- A line-by-line walkthrough of the Baichuan2 code, suitable for beginners ☆214 · Updated last year
- Focused on Chinese-domain large language models: grounding them in a specific industry or field to build an industry-level or company-level domain model. ☆119 · Updated 4 months ago
- Official repository for the SIGIR 2024 demo paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆81 · Updated 10 months ago
- ☆90 · Updated 2 years ago
- 《ChatGPT原理与实战:大型语言模型的算法、技术和私有化》 (ChatGPT in Principle and Practice: Algorithms, Techniques, and Private Deployment of Large Language Models) ☆362 · Updated last year
- ☆36 · Updated 6 months ago
- A HuggingFace-based toolkit for training and evaluating large language models. Supports a web UI and terminal inference for each model, low-parameter and full-parameter training (pretraining, SFT, RM, PPO, DPO), as well as model merging and quantization. ☆217 · Updated last year
- ☆230 · Updated last year
- A purer tokenizer with a higher compression ratio ☆480 · Updated 7 months ago
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism (a minimal sketch follows this list) ☆223 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- The Roadmap for LLMs ☆85 · Updated last year
- NTK-scaled version of the ALiBi position encoding in Transformers ☆68 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆416 · Updated 10 months ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆114 · Updated 2 years ago
- Chinese instruction-tuning datasets ☆132 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆97 · Updated last year
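
Several of the repos above cover training a Chinese BPE vocabulary with sentencepiece and using it in transformers. The sketch below shows the general workflow only; the file names, vocab size, and coverage setting are illustrative assumptions, not taken from any listed repository.

```python
# Minimal sketch: train a Chinese BPE vocabulary with sentencepiece and wrap it
# for use with Hugging Face transformers. corpus.txt, zh_bpe, and the
# hyperparameters below are assumed placeholders.
import sentencepiece as spm
from transformers import LlamaTokenizer

# Train a BPE model on a plain-text corpus (one sentence per line).
spm.SentencePieceTrainer.train(
    input="corpus.txt",          # assumed path to the training corpus
    model_prefix="zh_bpe",       # writes zh_bpe.model and zh_bpe.vocab
    vocab_size=32000,
    model_type="bpe",
    character_coverage=0.9995,   # common setting for CJK text
)

# LlamaTokenizer accepts a sentencepiece model file directly.
tokenizer = LlamaTokenizer(vocab_file="zh_bpe.model")
print(tokenizer.tokenize("今天天气很好"))
```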
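Likewise, for the DeepSpeed pipeline-parallel training repos, here is a minimal sketch of the mechanism they build on. The toy layers, stage count, and ds_config.json are assumptions; a real run would partition transformer blocks instead of Linear layers and launch via `deepspeed train.py --deepspeed ds_config.json`.

```python
# Minimal sketch of DeepSpeed pipeline parallelism, not any specific repo's code.
import torch
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

# A toy stack of layers standing in for transformer blocks.
layers = [nn.Linear(512, 512) for _ in range(8)]

# PipelineModule partitions the layer list across pipeline stages.
model = PipelineModule(
    layers=layers,
    num_stages=2,            # number of pipeline stages (one per GPU group)
    loss_fn=nn.MSELoss(),
)

engine, _, _, _ = deepspeed.initialize(
    model=model,
    config="ds_config.json",  # assumed config with train_batch_size, optimizer, etc.
)

def data_gen():
    # Toy (input, target) pairs; the pipeline engine pulls micro-batches from it.
    while True:
        x = torch.randn(4, 512)
        yield x, x

# train_batch() runs the full forward/backward/step schedule across stages.
loss = engine.train_batch(data_iter=data_gen())
```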