Unakar / Efficient_AI
This project contains my personal solutions, study notes, and takeaways for the MIT 6.5940 course assignments.
☆14 · Updated last year
Alternatives and similar repositories for Efficient_AI
Users who are interested in Efficient_AI are comparing it to the libraries listed below.
- Course materials for MIT 6.5940: TinyML and Efficient Deep Learning Computing ☆60 · Updated 9 months ago
- A collection of noteworthy MLSys bloggers (Algorithms/Systems) ☆292 · Updated 9 months ago
- A comprehensive guide for beginners in the field of data management and artificial intelligence. ☆460 · Updated 6 months ago
- ☆148 · Updated 3 months ago
- ☆91 · Updated this week
- Codes & examples for "CUDA - From Correctness to Performance" ☆114 · Updated last year
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆81 · Updated 2 months ago
- Training camp for the PaddlePaddle Escort Program (飞桨护航计划) ☆21 · Updated 2 months ago
- The dataset and baseline code for the ASC23 LLM inference optimization challenge. ☆32 · Updated last year
- Puzzles for learning Triton; play with minimal environment configuration! ☆549 · Updated last month
- Summer Training 2023, SAST 9. ☆43 · Updated 2 years ago
- Learning material for CMU 10-714: Deep Learning Systems ☆279 · Updated last year
- Summary of some awesome work for optimizing LLM inference ☆120 · Updated 4 months ago
- My solutions to the assignments of CMU 10-714 Deep Learning Systems 2022 ☆41 · Updated last year
- A repository sharing the literature on large language models ☆103 · Updated 3 months ago
- My CS notes ☆56 · Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆566 · Updated 3 weeks ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆221 · Updated 2 months ago
- 🏆🏆 LLMs: All in one & all from scratch. 🌍🌍 Collect and clean data, train a tokenizer, then run pretraining, SFT, and GRPO! ☆39 · Updated 2 months ago
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. ☆376 · Updated 7 months ago
- ☆83 · Updated last month
- Curated collection of papers on MoE model inference ☆285 · Updated last month
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆581 · Updated last week
- Sharing my research toolchain ☆85 · Updated last year
- Implement custom operators in PyTorch with CUDA/C++ ☆71 · Updated 2 years ago
- All Homeworks for TinyML and Efficient Deep Learning Computing 6.5940 • Fall • 2023 • https://efficientml.ai ☆181 · Updated last year
- This repository organizes materials, recordings, and schedules related to AI-infra learning meetings. ☆200 · Updated last month
- Efficient Mixture of Experts for LLM Paper List ☆140 · Updated 3 weeks ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention ☆200 · Updated last month
- Past exam papers for selected courses of the School of Computer Science, USTC ☆78 · Updated 3 months ago