ArtificialZeng / transformers-Explained
Source-code walkthrough of the official transformers library. In the era of large AI models, pytorch and transformer are the new operating system; everything else is software running on top of them.
☆17Updated 2 years ago
Alternatives and similar repositories for transformers-Explained
Users that are interested in transformers-Explained are comparing it to the libraries listed below
- The simplest reproduction of R1-style results on small models, illustrating the most important essence shared by O1-like models and DeepSeek R1: think is all you need. Experiments support that, for strong reasoning ability, the content of the thinking process is the core of AGI/ASI.☆44Updated 9 months ago
- The newest version of llama3, source code explained line by line in Chinese☆22Updated last year
- A demo of a PDF parser (including OCR and object detection tools)☆36Updated last year
- GLM Series Edge Models☆154Updated 5 months ago
- ☆106Updated 2 years ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆68Updated 2 years ago
- Fast instruction tuning with Llama2☆11Updated last year
- Baidu QA dataset with 1 million entries☆47Updated last year
- Instruction-tune large language models with a single codebase☆38Updated 2 years ago
- A toolkit for knowledge distillation of large language models☆209Updated 3 weeks ago
- A Chinese financial LLM evaluation benchmark: twenty-five tasks in six categories with graded scoring; domestic models received an A grade☆10Updated last year
- Mixture-of-Experts (MoE) Language Model☆192Updated last year
- Unleashing the Power of Cognitive Dynamics on Large Language Models☆63Updated last year
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler]☆41Updated last year
- ☆95Updated 11 months ago
- A minimalist benchmarking tool designed to test the routine-generation capabilities of LLMs.☆27Updated 11 months ago
- XVERSE-7B: A multilingual large language model developed by XVERSE Technology Inc.☆53Updated last year
- YiZhao: A 2TB Open Financial Corpus. Data and tools for generating and inspecting YiZhao, a safe, high-quality, open-source bilingual fin…☆33Updated 4 months ago
- An open-source LLM based on an MoE (Mixture-of-Experts) structure.☆58Updated last year
- CodeLLaMA Chinese edition - a code generation assistant with 20k+ cumulative downloads on huggingface☆45Updated 2 years ago
- A framework for training large language models; supports LoRA, full-parameter fine-tuning, etc.; define a YAML file to start training/fine-tuning of y…☆30Updated last year
- A native Chinese retrieval-augmented generation (RAG) evaluation benchmark☆123Updated last year
- unify-easy-llm (ULM) aims to be a simple, one-click training tool for large models, supporting hardware such as Nvidia GPUs and Ascend NPUs as well as commonly used large models.☆58Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper☆33Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆137Updated last year
- A survey of large language model training and serving☆36Updated 2 years ago
- AIGC evals☆10Updated last year
- Shared data: prompt data and pretraining data☆36Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆67Updated 2 years ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆39Updated last year