ForceInjection / AI-fundermentals
AI fundamentals - GPU architecture, CUDA programming, large model basics, and AI Agent related knowledge
☆742 Updated last week
Alternatives and similar repositories for AI-fundermentals
Users that are interested in AI-fundermentals are comparing it to the libraries listed below
- This repository organizes materials, recordings, and schedules related to AI-infra learning meetings. ☆312 Updated 3 weeks ago
- LLM notes covering model inference, transformer model structure, and LLM framework code analysis. ☆859 Updated last month
- A self-learning tutorial for CUDA High Performance Programming. ☆854 Updated 2 weeks ago
- Learning Machine Learning, The Chinese Taoist Way ☆444 Updated 5 years ago
- Open Source Landscapes and Insights Produced by AntOSS ☆372 Updated 2 months ago
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute ☆731 Updated 2 years ago
- ☆324 Updated 6 months ago
- This repo archives my notes, code, and materials for CS learning. ☆74 Updated this week
- ☆538 Updated last year
- Persist and reuse KV Cache to speed up your LLM. ☆244 Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆497 Updated 10 months ago
- DLRover: An Automatic Distributed Deep Learning System ☆1,626 Updated last week
- vLLM Kunlun (vllm-kunlun) is a community-maintained hardware plugin designed to seamlessly run vLLM on the Kunlun XPU. ☆239 Updated this week
- HAMi-core compiles libvgpu.so, which enforces hard limits on GPU usage in containers ☆269 Updated last week
- A workload for deploying LLM inference services on Kubernetes ☆160 Updated last week
- Community maintained hardware plugin for vLLM on Ascend ☆1,597 Updated this week
- An open-source kit for agent development, integrating the powerful capabilities of Volcengine. ☆256 Updated this week
- ☆123 Updated 11 months ago
- ☆522 Updated last week
- A curated collection of high-quality full-stack LLM resources ☆673 Updated 6 months ago
- ☆91 Updated 9 months ago
- Chinese translation project for "How to Scale Your Model" - an intelligent technical-document translation tool. Built for this book on scaling large language models, it works around long-document translation bottlenecks while fully preserving math formulas and code-block formatting. Uses a placeholder mechanism plus a layered translation strategy, with high-quality translation via the Gemini API (a minimal sketch of the placeholder idea appears after this list). Python + crawl4ai tech… ☆104 Updated 5 months ago
- A Kubernetes plugin that enables dynamically adding or removing GPU resources for a running Pod ☆127 Updated 3 years ago
- Rapid and cost-effective operator and best practices for agent sandbox lifecycle management. ☆88 Updated this week
- LLM/MLOps/LLMOps ☆133 Updated 8 months ago
- Theory and practice of large model/LLM inference and deployment ☆370 Updated 6 months ago
- ☆86 Updated last week
- Using CRDs to manage GPU resources in Kubernetes. ☆210 Updated 3 years ago
- Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature se… ☆101 Updated this week
- ☆222 Updated 2 years ago
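
The "How to Scale Your Model" translation entry above describes a placeholder mechanism for keeping math formulas and code blocks intact during machine translation. The sketch below is only a minimal illustration of that general idea, not the project's actual code: the regexes, the `protect`/`restore`/`translate_document` names, and the `translate_fn` hook are all hypothetical, and the real tool additionally layers chunked translation on top via the Gemini API.

```python
import re

# Minimal sketch of placeholder-based protection for document translation.
# All names here (protect, restore, translate_document, translate_fn) are
# hypothetical illustrations, not the translation project's actual API.

CODE_PATTERN = re.compile(r"`{3}.*?`{3}", re.DOTALL)              # fenced code blocks
MATH_PATTERN = re.compile(r"\$\$.*?\$\$|\$[^$\n]+\$", re.DOTALL)  # display/inline math

def protect(text: str):
    """Swap code blocks and math spans for stable placeholder tokens."""
    stash = {}

    def _hide(match: re.Match) -> str:
        key = f"__PROTECTED_{len(stash)}__"
        stash[key] = match.group(0)
        return key

    text = CODE_PATTERN.sub(_hide, text)   # hide code first so math inside code is untouched
    text = MATH_PATTERN.sub(_hide, text)
    return text, stash

def restore(text: str, stash: dict) -> str:
    """Put the original code/math spans back after translation."""
    for key, original in stash.items():
        text = text.replace(key, original)
    return text

def translate_document(text: str, translate_fn) -> str:
    """translate_fn is any chunk translator, e.g. a wrapper around an LLM API (assumed)."""
    safe_text, stash = protect(text)
    translated = translate_fn(safe_text)   # the model never sees the protected spans
    return restore(translated, stash)
```

In practice the placeholder tokens also have to survive the translator unchanged; presumably the project's layered strategy handles such edge cases, but that is beyond this sketch.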