ironartisan / awesome-compression1
A beginner's tutorial on model compression
☆22 · Updated last year
Alternatives and similar repositories for awesome-compression1
Users interested in awesome-compression1 are comparing it to the libraries listed below.
- Tutorial for Ray ☆36 · Updated last year
- ☢️ TensorRT 2023 Hackathon finals: inference acceleration and optimization of Llama models based on TensorRT-LLM ☆51 · Updated 2 years ago
- ☆136 · Updated 10 months ago
- ☆120 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition) ☆137 · Updated last year
- Datawhale paper sharing: reading cutting-edge papers and sharing technical innovations ☆51 · Updated this week
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- The simplest online-softmax notebook for explaining FlashAttention ☆13 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks ☆132 · Updated 2 years ago
- ☆30 · Updated 5 months ago
- As the name suggests: a hand-rolled RAG ☆130 · Updated last year
- The newest version of Llama 3, with the source code explained line by line in Chinese ☆22 · Updated last year
- Run ChatGLM2-6B on the BM1684X ☆49 · Updated last year
- A music large model based on InternLM2-chat ☆22 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 · Updated 2 years ago
- Efficient, flexible, and highly fault-tolerant model service management based on SGLang ☆61 · Updated last year
- Complete training code for an open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆68 · Updated 2 years ago
- Learning large models through diagrams ☆321 · Updated last year
- Pipeline-parallel lecture: simplest DualPipe implementation ☆30 · Updated 3 months ago
- GLM series edge models ☆156 · Updated 6 months ago
- Tianchi NVIDIA TensorRT Hackathon 2023, generative AI model optimization track: third-place solution in the preliminary round ☆50 · Updated 2 years ago
- Implementation of FlashAttention in PyTorch ☆178 · Updated 11 months ago
- A Llama model inference framework implemented in CUDA C++ ☆63 · Updated last year
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, Llama, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile, reaching 10000+ tokens/s on a single GPU ☆45 · Updated 2 years ago
- Decoding Attention, optimized for MHA, MQA, GQA, and MLA using CUDA cores, for the decoding stage of LLM inference ☆46 · Updated 6 months ago
- 青稞Talk ☆180 · Updated 3 weeks ago
- unify-easy-llm (ULM) aims to be a simple one-click LLM training tool, supporting hardware such as NVIDIA GPUs and Ascend NPUs as well as common large models ☆59 · Updated last year
- Deep learning hardware and software setup (for beginners) ☆35 · Updated last month
- A walkthrough of the official transformers source code. In the era of large AI models, PyTorch and Transformers are the new operating system; everything else is software running on top of them ☆17 · Updated 2 years ago
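Two of the repositories above (the online-softmax notebook and the FlashAttention implementation) revolve around the same core trick: computing softmax in a single streaming pass by maintaining a running maximum and a rescaled running sum, so attention rows never need to be fully materialized. A minimal standalone sketch of that online-softmax update (not code from any listed repository):

```python
import math

def online_softmax(xs):
    """Numerically stable softmax via a single streaming pass:
    keep a running maximum m and a running sum s of exp(x - m),
    rescaling s whenever the maximum grows. This is the update rule
    FlashAttention applies blockwise to avoid storing full attention rows."""
    m = float("-inf")  # running maximum seen so far
    s = 0.0            # running sum of exp(x - m)
    for x in xs:
        m_new = max(m, x)
        # rescale the accumulated sum to the new maximum, then add the new term
        s = s * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # final normalization pass using the converged (m, s)
    return [math.exp(x - m) / s for x in xs]
```

Because `s` is rescaled incrementally, the result matches an ordinary two-pass softmax while only ever exponentiating non-positive numbers, which is what makes the blockwise FlashAttention recurrence stable.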