JiwenJ / mit6.5940-2023
TinyML and Efficient Deep Learning Computing
☆13 · Updated last year
Alternatives and similar repositories for mit6.5940-2023
Users interested in mit6.5940-2023 are comparing it to the libraries listed below.
- All homeworks for TinyML and Efficient Deep Learning Computing 6.5940 • Fall • 2023 • https://efficientml.ai ☆174 · Updated last year
- Lab 5 project of MIT 6.5940: deploying LLaMA2-7B-chat on one's laptop with TinyChatEngine. ☆17 · Updated last year
- 📚 200+ Tensor/CUDA Cores kernels, ⚡️ flash-attn-mma, ⚡️ hgemm with WMMA, MMA, and CuTe (98%~100% TFLOPS of cuBLAS/FA2 🎉🎉). ☆26 · Updated 2 months ago
- Puzzles for learning Triton; play with minimal environment configuration! ☆367 · Updated 6 months ago
- An easy-to-understand TensorOp matmul tutorial ☆365 · Updated 9 months ago
- Examples of CUDA implementations with CUTLASS CuTe ☆197 · Updated 4 months ago
- ☆170 · Updated last year
- Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and hardware roofline mod… ☆487 · Updated 9 months ago
- Curated collection of papers in machine learning systems ☆368 · Updated 2 weeks ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆459 · Updated this week
- ☆110 · Updated 3 weeks ago
- Hands-on model tuning with TVM, profiled on a Mac M1, an x86 CPU, and a GTX 1080 GPU. ☆48 · Updated 2 years ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆251 · Updated last week
- Learning how CUDA works ☆271 · Updated 3 months ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆255 · Updated 3 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆377 · Updated last month
- ☆21 · Updated last year
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆281 · Updated 2 months ago
- ☆160 · Updated 11 months ago
- Curated collection of papers on MoE model inference ☆200 · Updated 4 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) ☆138 · Updated 11 months ago
- List of papers on neural network quantization in recent AI conferences and journals. ☆658 · Updated 3 months ago
- Summary of notable work on optimizing LLM inference ☆77 · Updated 3 weeks ago
- ☆123 · Updated 6 months ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA '25) ☆40 · Updated 2 months ago
- Dynamic memory management for serving LLMs without PagedAttention ☆397 · Updated 3 weeks ago
- ☆65 · Updated 5 months ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆84 · Updated last year
- Yinghan's code samples ☆335 · Updated 2 years ago
- List of papers on Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆91 · Updated last year