PKUFlyingPig / MIT6.5940_TinyML
Course materials for MIT6.5940: TinyML and Efficient Deep Learning Computing
☆68 · Updated last year
Alternatives and similar repositories for MIT6.5940_TinyML
Users who are interested in MIT6.5940_TinyML are comparing it to the repositories listed below.
- A curated collection of noteworthy MLSys bloggers (algorithms/systems) ☆321 · Updated last year
- Flash Attention from Scratch on CUDA Ampere ☆122 · Updated 5 months ago
- Summary of some awesome work for optimizing LLM inference ☆172 · Updated 2 months ago
- Curated collection of papers in MoE model inference ☆339 · Updated 3 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆278 · Updated last month
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆155 · Updated 5 months ago
- Learning material for CMU 10-714: Deep Learning Systems ☆300 · Updated last year
- ☆152 · Updated 7 months ago
- All Homeworks for TinyML and Efficient Deep Learning Computing 6.5940 • Fall • 2023 • https://efficientml.ai ☆190 · Updated 2 years ago
- A PyTorch-like deep learning framework. Just for fun. ☆157 · Updated 2 years ago
- Sharing AI Infra knowledge & coding exercises: getting started with the PyTorch/vLLM/SGLang frameworks ⚡️, performance acceleration 🚀, LLM fundamentals 🧠, AI hardware and software 🔧, and more ☆215 · Updated this week
- My CS notes ☆57 · Updated last year
- LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis ☆115 · Updated 6 months ago
- My solutions to the assignments of CMU 10-714 Deep Learning Systems 2022 ☆45 · Updated last year
- Code release for the book "Efficient Training in PyTorch" ☆125 · Updated 9 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆147 · Updated last month
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes ☆411 · Updated 11 months ago
- Codes & examples for "CUDA - From Correctness to Performance" ☆121 · Updated last year
- LLM Inference with Deep Learning Accelerator ☆58 · Updated last year
- Implementations of several LLM KV cache sparsity methods ☆41 · Updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆312 · Updated 7 months ago
- Code release for AdapMoE, accepted at ICCAD 2024 ☆35 · Updated 9 months ago
- ☆47 · Updated last year
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆283 · Updated 10 months ago
- Learning TileLang with 10 puzzles! ☆56 · Updated this week
- Lab 5 project of MIT-6.5940, deploying LLaMA2-7B-chat on one's laptop with TinyChatEngine ☆18 · Updated 2 years ago
- Systems for GenAI ☆157 · Updated this week
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLM ☆73 · Updated 5 months ago
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆82 · Updated 2 months ago
- Solutions to Programming Massively Parallel Processors ☆49 · Updated 2 years ago