suneeta-mall / deep_learning_at_scale
Contains hands-on example code for the [O'Reilly book "Deep Learning at Scale"](https://www.oreilly.com/library/view/deep-learning-at/9781098145279/).
☆31 · Updated last year
Alternatives and similar repositories for deep_learning_at_scale
Users interested in deep_learning_at_scale are comparing it to the libraries listed below.
- Slides, notes, and materials for the workshop ☆339 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆115 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving ☆117 · Updated 8 months ago
- Accelerate Model Training with PyTorch 2.X, published by Packt ☆51 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 8 months ago
- ☆178 · Updated last year
- ☆976 · Updated last week
- How to install CUDA & cuDNN for machine learning ☆20 · Updated last year
- Best practices & guides on how to write distributed PyTorch training code ☆571 · Updated 3 months ago
- ☆230 · Updated 2 months ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch ☆122 · Updated 2 years ago
- Distributed training (multi-node) of a Transformer model ☆92 · Updated last year
- RAGs: simple implementations of Retrieval-Augmented Generation (RAG) systems ☆141 · Updated last year
- GPU Kernels ☆218 · Updated 9 months ago
- ☆235 · Updated last year
- Complete implementation of Llama 2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆334 · Updated 2 months ago
- 100 days of building GPU kernels! ☆567 · Updated 9 months ago
- Tutorial materials for "The Fundamentals of Modern Deep Learning with PyTorch" workshop at PyCon 2024 ☆247 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆231 · Updated 10 months ago
- Deep Learning Fundamentals — code material and exercises ☆398 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆457 · Updated 10 months ago
- ☆77 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL, and Python, with multi-GPU support and automatic differentiation!) ☆162 · Updated 2 months ago
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆196 · Updated last year
- Notes on quantization in neural networks ☆117 · Updated 2 years ago
- ☆46 · Updated 8 months ago
- LoRA and DoRA from-scratch implementations ☆215 · Updated last year
- Some CUDA example code with READMEs ☆179 · Updated 2 months ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆129 · Updated 2 years ago