ArtificialZeng / transformers-Explained
Analysis of the official transformers source code. In the era of large AI models, PyTorch and transformers are the new operating system; everything else is software running on top of them.
☆17 · Updated 2 years ago
Alternatives and similar repositories for transformers-Explained
Users interested in transformers-Explained are comparing it to the libraries listed below
- The simplest reproduction of R1 results on small models, illustrating the most important essence shared by O1-like models and DeepSeek R1: "Think is all you need." Experiments support the claim that, for strong reasoning capability, the content of the thinking process is the core of AGI/ASI. ☆44 · Updated 8 months ago
- AGM (阿格姆): an AI gene-atlas model that explores the inner workings of AI models, such as GPT and other large language models, from the fine-grained perspective of token weights. ☆29 · Updated 2 years ago
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- A Chinese financial LLM evaluation benchmark: twenty-five tasks across six categories with tiered grading; domestic models achieved an A grade ☆10 · Updated last year
- A demo PDF parser (including OCR and object-detection tools) ☆36 · Updated 11 months ago
- A toolkit for knowledge distillation of large language models ☆162 · Updated 3 weeks ago
- Repo for the paper "AgentRE: An Agent-Based Framework for Navigating Complex Information Landscapes in Relation Extraction". ☆71 · Updated last year
- A native-Chinese retrieval-augmented generation evaluation benchmark ☆123 · Updated last year
- ☆95 · Updated 10 months ago
- GLM Series Edge Models ☆149 · Updated 3 months ago
- 🔥 AgentScale: A Scalable Microservices-based Agent Orchestration Framework ☆26 · Updated last year
- unify-easy-llm (ULM) aims to be a simple one-click training tool for large models, supporting hardware such as Nvidia GPUs and Ascend NPUs as well as commonly used large models. ☆57 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF ☆68 · Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- ☆106 · Updated 2 years ago
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆63 · Updated last year
- CodeLLaMA Chinese edition: a code-generation assistant with 20k+ cumulative downloads on Hugging Face ☆45 · Updated 2 years ago
- YiZhao: A 2TB Open Financial Corpus. Data and tools for generating and inspecting YiZhao, a safe, high-quality, open-source bilingual financial corpus ☆30 · Updated 2 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- AgileGen: Empowering Agile-Based Generative Software Development through Human-AI Teamwork (accepted by ACM TOSEM) ☆23 · Updated 11 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (accepted at COLM 2024) ☆33 · Updated last year
- Baidu QA dataset with 1 million entries ☆48 · Updated last year
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- Fast instruction tuning with Llama2 ☆11 · Updated last year
- Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory ☆29 · Updated last year
- ☆29 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year
- Imitate OpenAI with Local Models ☆88 · Updated last year