ironartisan / awesome-compression1
A beginner's tutorial on model compression
☆22 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for awesome-compression1
- Decoding Attention is specially optimized for multi-head attention (MHA) using CUDA cores for the decoding stage of LLM inference. ☆23 · Updated last week
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler]. ☆34 · Updated 10 months ago
- Tianchi NVIDIA TensorRT Hackathon 2023: third-place preliminary-round solution in the generative AI model optimization competition. ☆47 · Updated last year
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM. ☆26 · Updated 8 months ago
- Music large model based on InternLM2-chat. ☆21 · Updated 3 months ago
- ☆117 · Updated last year
- Transformer-related optimizations, including BERT and GPT. ☆17 · Updated last year
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM. ☆40 · Updated last year
- ☢️ TensorRT 2023 final round: Llama model inference acceleration based on TensorRT-LLM. ☆44 · Updated last year
- A lightweight proxy solution for the Hugging Face Hub. ☆44 · Updated last year
- Run ChatGLM2-6B on the BM1684X. ☆48 · Updated 8 months ago
- LLM101n: Let's build a Storyteller (Chinese edition). ☆116 · Updated 2 months ago
- A survey of large language model training and serving. ☆34 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese. ☆22 · Updated 6 months ago
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile and reaches 10,000+ tokens/s on a single GPU. ☆45 · Updated last year
- Learning large models through diagrams. ☆175 · Updated 3 months ago
- unify-easy-llm (ULM) aims to be a simple one-click large-model training tool, supporting different hardware such as NVIDIA GPUs and Ascend NPUs, as well as commonly used large models. ☆36 · Updated 3 months ago
- Accelerated BERT model deployment with TensorRT. ☆9 · Updated 2 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆52 · Updated 6 months ago
- ☆22 · Updated last year
- As the name suggests: a hand-rolled RAG. ☆110 · Updated 8 months ago
- A MoE implementation for PyTorch; SmartMoE [ATC '23]. ☆57 · Updated last year
- A simple MLLM that surpasses Qwen-VL-Max using only open-source data on a 14B LLM. ☆36 · Updated 2 months ago
- SUS-Chat: Instruction tuning done right. ☆47 · Updated 9 months ago
- Simplify ONNX models larger than 2 GB. ☆43 · Updated 8 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆122 · Updated last year
- ☆13 · Updated 11 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆15 · Updated 5 months ago
- Baidu QA dataset with one million entries. ☆49 · Updated 11 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper. ☆26 · Updated 5 months ago