PKUFlyingPig / MIT6.5940_TinyML
Course materials for MIT6.5940: TinyML and Efficient Deep Learning Computing
☆37 · Updated 3 months ago
Alternatives and similar repositories for MIT6.5940_TinyML:
Users interested in MIT6.5940_TinyML are comparing it to the repositories listed below
- ☆112 · Updated this week
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆102 · Updated last month
- A simple calculation for LLM MFU (a minimal sketch of the formula follows this list). ☆34 · Updated last month
- Implements Flash Attention using CuTe. ☆74 · Updated 3 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆35 · Updated last week
- ☆32 · Updated 8 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆46 · Updated 5 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆156 · Updated 9 months ago
- NEO is an LLM inference engine built to ease the GPU memory crisis via CPU offloading ☆21 · Updated last month
- Implements several methods for LLM KV cache sparsity ☆30 · Updated 10 months ago
- Code & examples for "CUDA - From Correctness to Performance" ☆90 · Updated 5 months ago
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis. ☆81 · Updated 3 months ago
- ☆62 · Updated 5 months ago
- A collection of noteworthy MLSys bloggers (algorithms/systems) ☆213 · Updated 3 months ago
- A PyTorch-like deep learning framework. Just for fun. ☆149 · Updated last year
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆34 · Updated last month
- My solutions to the assignments of CMU 10-714 Deep Learning Systems 2022 ☆36 · Updated last year
- A summary of noteworthy work on optimizing LLM inference ☆67 · Updated this week
- Systems for GenAI ☆130 · Updated last month
- A practical way of learning Swizzle ☆17 · Updated 2 months ago
- Estimate MFU for DeepSeekV3 ☆21 · Updated 3 months ago
- ☆50 · Updated 2 months ago
- HPC-Lab for the High Performance Computing course, Spring 2023, Tsinghua University. Introduction to High Performance Computing (高性能计算导论) @ THU. ☆22 · Updated last year
- Summary of the specs of commonly used GPUs for training and inference of LLMs ☆35 · Updated 3 weeks ago
- My CS notes ☆40 · Updated 5 months ago
- 📚FFPA (Split-D): Yet another faster Flash Attention with O(1) GPU SRAM complexity for large headdim, 1.8x~3x↑🎉 faster than SDPA EA. ☆163 · Updated this week
- Compares different hardware platforms via the roofline model for LLM inference tasks. ☆93 · Updated last year
- Curated collection of papers on MoE model inference ☆130 · Updated last month
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆39 · Updated 2 weeks ago
- High-performance Transformer implementation in C++. ☆115 · Updated 2 months ago
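
The MFU-related entries above ("A simple calculation for LLM MFU", "Estimate MFU for DeepSeekV3") all reduce to the same ratio of achieved to peak throughput. Below is a minimal sketch of that calculation, assuming the standard 6·N·tokens/s approximation for training FLOPs; the function name, parameters, and numbers are illustrative, not taken from any listed repository.

```python
# Minimal sketch of MFU (Model FLOPs Utilization); names and numbers are
# illustrative assumptions, not code from any repository listed above.

def mfu(tokens_per_second: float,
        num_params: float,
        peak_flops_per_second: float) -> float:
    """MFU = achieved FLOPs/s divided by hardware peak FLOPs/s.

    For dense decoder-only LLM training, achieved FLOPs/s is commonly
    approximated as 6 * num_params * tokens_per_second
    (forward + backward pass, ignoring attention FLOPs).
    """
    achieved_flops_per_second = 6 * num_params * tokens_per_second
    return achieved_flops_per_second / peak_flops_per_second


# Illustrative numbers: a 7B-parameter model training at 3,000 tokens/s per GPU
# on a GPU with ~312 TFLOPS of peak BF16 compute.
print(f"MFU ≈ {mfu(3_000, 7e9, 312e12):.1%}")  # ≈ 40%
```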