ironartisan / awesome-compression1
A beginner's introduction to model compression
☆22 Updated last year
Alternatives and similar repositories for awesome-compression1
Users interested in awesome-compression1 are comparing it to the libraries listed below.
- Fast LLM training codebase with dynamic strategy choosing [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 Updated last year
- LLM101n: Let's build a Storyteller (Chinese version) ☆132 Updated last year
- ☢️ TensorRT Hackathon 2023 finals: inference acceleration and optimization of the Llama model based on TensorRT-LLM ☆50 Updated last year
- ☆120 Updated 2 years ago
- ☆135 Updated 7 months ago
- A pure C++ cross-platform LLM acceleration library with Python bindings; supports baichuan, glm, llama, and moss base models; runs chatglm-6B-class models smoothly on mobile and reaches 10000+ tokens/s on a single GPU ☆45 Updated 2 years ago
- The newest version of llama3, with source code explained line by line in Chinese ☆22 Updated last year
- Hands-on large model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 Updated last year
- Comparison of LLM API performance metrics: an in-depth analysis of key metrics such as TTFT and TPS ☆19 Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 Updated 2 years ago
- Tianchi NVIDIA TensorRT Hackathon 2023 generative AI model optimization competition: third-place solution in the preliminary round ☆50 Updated 2 years ago
- A lightweight proxy solution for HuggingFace hub. ☆47 Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆68 Updated 2 years ago
- As the name suggests: a hand-rolled RAG ☆127 Updated last year
- Run ChatGLM2-6B on the BM1684X ☆49 Updated last year
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 Updated last week
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency. ☆122 Updated last week
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 Updated last year
- Baidu QA dataset with one million entries ☆48 Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 Updated last year
- AGM 阿格姆: an AI gene-map model that explores the inner workings of AI models and GPT/LLM large models from the perspective of token-weight granules. ☆29 Updated 2 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆131 Updated 2 years ago
- GLM Series Edge Models ☆149 Updated 3 months ago
- A survey of large language model training and serving ☆36 Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 Updated last year
- Another ChatGLM2 implementation for GPTQ quantization ☆54 Updated last year
- A music large model based on InternLM2-chat. ☆22 Updated 8 months ago
- The simplest online-softmax notebook for explaining Flash Attention (see the sketch after this list) ☆13 Updated 8 months ago
- ☆26 Updated 2 months ago
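
The online-softmax entry above points at a concrete recurrence worth spelling out: softmax can be computed in a single pass by keeping a running maximum and a running normalizer, rescaling the normalizer whenever the maximum grows. This is the trick Flash Attention relies on to accumulate attention block by block without materializing the full score matrix. Below is a minimal NumPy sketch of that recurrence; the function name and the sanity check are illustrative and not taken from any of the repositories listed here.

```python
import numpy as np

def online_softmax(scores):
    """One-pass (online) softmax.

    Keeps a running maximum `m` and a running normalizer `d`; whenever a
    larger maximum appears, the accumulated normalizer is rescaled by
    exp(old_max - new_max) so earlier terms stay consistent.
    """
    m = -np.inf   # running maximum of the scores seen so far
    d = 0.0       # running sum of exp(score - m)
    for x in scores:
        m_new = max(m, x)
        d = d * np.exp(m - m_new) + np.exp(x - m_new)  # rescale old sum, add new term
        m = m_new
    # Flash Attention folds `m` and `d` into its blockwise output accumulation;
    # here we just apply the final normalization explicitly.
    return np.exp(np.asarray(scores) - m) / d

# Sanity check against the usual two-pass softmax.
x = np.random.randn(16)
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), ref)
```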