cauyxy / bilivideos
☆53 · Updated 2 years ago
Alternatives and similar repositories for bilivideos
Users interested in bilivideos are comparing it to the repositories listed below
- ☆84 · Updated 2 years ago
- How to train an LLM tokenizer ☆152 · Updated 2 years ago
- Inference code for LLaMA models ☆123 · Updated 2 years ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀, dedicated to in-depth understanding, discussion, and implementation of techniques, principles, and applications related to large models. ☆339 · Updated last year
- Train an LLM from scratch on a single 24 GB GPU ☆55 · Updated 2 months ago
- Instruction-tuning toolkit for large language models (supports FlashAttention) ☆178 · Updated last year
- ☆114 · Updated 10 months ago
- an implementation of transformer, bert, gpt, and diffusion models for learning purposes ☆157 · Updated 11 months ago
- The Roadmap for LLMs ☆86 · Updated 2 years ago
- ☆174 · Updated this week
- Model Compression for Big Models ☆165 · Updated 2 years ago
- A line-by-line walkthrough of the Baichuan2 code, suitable for beginners ☆214 · Updated 2 years ago
- ☆308 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction and domain/task adaptation of open source chatgpt alternatives/implementations. PiXi… ☆262 · Updated 9 months ago
- Official Repository for SIGIR2024 Demo Paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆82 · Updated last year
- Train a Chinese vocabulary with BPE in sentencepiece and use it in transformers. ☆119 · Updated 2 years ago
- train llama on a single A100 80G node using 🤗 transformers and 🚀 Deepspeed Pipeline Parallelism ☆224 · Updated last year
- Focused on Chinese-domain large language models, grounding them in a specific industry or field to become an industry-level, company-level, or domain-specific large model. ☆122 · Updated 6 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated 2 years ago
- NTK-scaled version of ALiBi position encoding in Transformer. ☆69 · Updated 2 years ago
- ☆79 · Updated last year
- ☆175 · Updated last year
- Chinese instruction-tuning datasets ☆135 · Updated last year
- ☆36 · Updated 9 months ago
- A purer tokenizer with a higher compression ratio ☆482 · Updated 9 months ago
- ☆90 · Updated 2 years ago
- The GPU RAM Estimator provides a simple tool for estimating GPU memory usage during training and inference. ☆34 · Updated last year
- Train llm (bloom, llama, baichuan2-7b, chatglm3-6b) with deepspeed pipeline mode. Faster than zero/zero++/fsdp. ☆98 · Updated last year
- MD5 links for a Chinese book corpus ☆217 · Updated last year