ironartisan / awesome-compression1
A beginner's introductory tutorial on model compression
☆22 · Updated 11 months ago
Alternatives and similar repositories for awesome-compression1
Users interested in awesome-compression1 are comparing it to the libraries listed below:
- Tianchi NVIDIA TensorRT Hackathon 2023 Generative AI Model Optimization Competition: third-place solution in the preliminary round ☆49 · Updated last year
- Hands-on large model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- ☢️ TensorRT 2023 final round: inference acceleration and optimization of Llama models based on TensorRT-LLM ☆48 · Updated last year
- Transformer related optimization, including BERT, GPT ☆17 · Updated last year
- LLM101n: Let's build a Storyteller, Chinese edition ☆131 · Updated 9 months ago
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆37 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference ☆36 · Updated 2 months ago
- A llama model inference framework implemented in CUDA C++ ☆57 · Updated 6 months ago
- ☆120 · Updated 2 years ago
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- Comparison of large model API performance metrics: in-depth analysis of key metrics such as TTFT and TPS ☆17 · Updated 8 months ago
- The newest version of llama3, with the source code explained line by line in Chinese ☆22 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated last year
- Music large model based on InternLM2-chat ☆22 · Updated 5 months ago
- The simplest online-softmax notebook for explaining Flash Attention ☆10 · Updated 5 months ago
- ☆22 · Updated 3 months ago
- ☆132 · Updated 3 months ago
- BERT TensorRT model acceleration and deployment ☆9 · Updated 3 years ago
- Manages vllm-nccl dependency ☆17 · Updated last year
- A pure C++ cross-platform LLM acceleration library, callable from Python, supporting baichuan, glm, llama, and moss base models; runs chatglm-6B-class models smoothly on mobile and reaches 10000+ tokens/s on a single GPU ☆45 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- From MHA, MQA, GQA to MLA, by Su Jianlin (苏剑林), with code ☆19 · Updated 3 months ago
- ☆16 · Updated last year
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced ☆60 · Updated 9 months ago
- unify-easy-llm (ULM) aims to be a simple one-click large model training tool, supporting different hardware such as Nvidia GPUs and Ascend NPUs as well as commonly used large models ☆55 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated last year
- Run ChatGLM2-6B on BM1684X ☆49 · Updated last year
- As the name suggests: a hand-rolled RAG ☆123 · Updated last year
- Implemented a script that automatically adjusts Qwen3's inference and non-inference capabilities, based on an OpenAI-like API. The infere… ☆20 · Updated 3 weeks ago
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆18 · Updated last week