ironartisan / awesome-compression
A beginner-friendly introductory tutorial on model compression
☆22 · Updated 11 months ago
Alternatives and similar repositories for awesome-compression
Users interested in awesome-compression are comparing it to the libraries listed below.
- The newest version of Llama 3, with its source code explained line by line in Chinese ☆22 · Updated last year
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile and reaches 10,000+ tokens/s on a single GPU ☆45 · Updated last year
- ☆27 · Updated 8 months ago
- Fast LLM training codebase with dynamic strategy choosing [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆39 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- ☆22 · Updated 4 months ago
- A personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both model and train… ☆57 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition) ☆131 · Updated 10 months ago
- A walkthrough of the official transformers source code. In the era of large AI models, PyTorch and transformers are the new operating system; everything else is software running on top of them ☆17 · Updated last year
- A large music model based on InternLM2-chat ☆22 · Updated 6 months ago
- ☆120 · Updated 2 years ago
- Tianchi NVIDIA TensorRT Hackathon 2023, generative AI model optimization track: third-place solution in the preliminary round ☆49 · Updated last year
- ☢️ TensorRT Hackathon 2023 final round: inference acceleration and optimization of the Llama model based on TensorRT-LLM ☆48 · Updated last year
- ☆20 · Updated last year
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆19 · Updated last year
- ☆105 · Updated last year
- A tiny, didactical implementation of LLAMA 3 ☆41 · Updated 6 months ago
- The complete training code for the open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆69 · Updated 2 years ago
- A tutorial demonstrating Chinese instruction fine-tuning of Gemma ☆46 · Updated last year
- A MoE implementation for PyTorch: SmartMoE [ATC'23] ☆64 · Updated last year
- ☆16 · Updated last year
- As the name suggests: a hand-rolled RAG implementation ☆125 · Updated last year
- unify-easy-llm (ULM) aims to be a simple one-click LLM training tool, supporting hardware such as NVIDIA GPUs and Ascend NPUs as well as commonly used large models ☆55 · Updated 11 months ago
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- A light proxy solution for HuggingFace hub ☆47 · Updated last year
- Official repository for the SIGIR 2024 demo paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆82 · Updated 10 months ago
- A Baidu QA dataset with 1 million entries ☆47 · Updated last year
- Uses LangChain for task planning and builds conversational scene resources for each subtask; an MCTS task executor lets every subtask draw on in-context resources and self-reflective exploration to reach its best answer to the question. This approach relies on the model's alignment preferences; for each preference, an engineering framework is designed to carry out a sampling strategy over the rewards of different answers ☆29 · Updated last month
- ☆14 · Updated last year