shen-shanshan / cs-self-learning
This repo archives my notes, code, and materials from my CS self-learning.
☆74 · Updated this week
Alternatives and similar repositories for cs-self-learning
Users interested in cs-self-learning are comparing it to the repositories listed below
- This repository organizes materials, recordings, and schedules related to AI-infra learning meetings. ☆312 · Updated 3 weeks ago
- A lightweight llama-like LLM inference framework based on Triton kernels. ☆169 · Updated 3 weeks ago
- An annotated nano_vllm repository, with MiniCPM4 adapted and support added for registering new models. ☆147 · Updated 5 months ago
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis. ☆114 · Updated 6 months ago
- How to learn PyTorch and OneFlow ☆479 · Updated last year
- A good project for campus recruiting (autumn/spring recruitment) and internships: build from scratch an LLM inference framework supporting LLama2/3 and Qwen2.5. ☆486 · Updated 3 months ago
- Chinese edition of the UltraScale Playbook ☆125 · Updated 10 months ago
- Learning how CUDA works ☆369 · Updated 10 months ago
- A self-learning tutorial for CUDA high-performance programming. ☆854 · Updated 2 weeks ago
- High Performance LLM Inference Operator Library ☆222 · Updated last week
- A tutorial for CUDA & PyTorch ☆208 · Updated last week
- ☆522 · Updated last week
- An inference framework for the llama model, implemented in CUDA C++ ☆64 · Updated last year
- LLM notes, including notes on model inference, Transformer model structure, and LLM framework code analysis. ☆859 · Updated last month
- ☆39 · Updated 8 months ago
- A CUDA tutorial for learning CUDA programming from scratch ☆266 · Updated last year
- Optimized softmax in Triton for many cases ☆22 · Updated last year
- Sharing AI Infra knowledge & code exercises: getting started with the PyTorch/vLLM/SGLang frameworks ⚡️, performance acceleration 🚀, LLM fundamentals 🧠, AI hardware and software 🔧, and more ☆65 · Updated last week
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. ☆669 · Updated 2 months ago
- Some common CUDA kernel implementations (not the fastest). ☆29 · Updated last month
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆887 · Updated this week
- An HPC tutorial covering collective communication (MPI, NCCL), CUDA programming, vectorized SIMD, RDMA communication, and more ☆70 · Updated last week
- A simplified flash-attention implemented with cutlass, intended for teaching ☆54 · Updated last year
- ☆141 · Updated last year
- 📚 200+ Tensor/CUDA Cores Kernels, ⚡️flash-attn-mma, ⚡️hgemm with WMMA, MMA and CuTe (98%~100% TFLOPS of cuBLAS/FA2 🎉🎉). ☆62 · Updated 9 months ago
- Materials for learning SGLang ☆728 · Updated 3 weeks ago
- ☆130 · Updated last year
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆246 · Updated last week
- A collection of noteworthy MLSys bloggers (algorithms/systems) ☆317 · Updated last year
- A lightweight LLM inference framework ☆21 · Updated 8 months ago