stanford-cs336 / assignment2-systems
Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch
☆95 · Updated 2 months ago
Alternatives and similar repositories for assignment2-systems
Users interested in assignment2-systems are comparing it to the libraries listed below.
- Making the official Triton tutorials actually comprehensible (☆57, updated last month)
- LLM KV cache compression made easy (☆660, updated last week)
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference (☆296, updated 2 months ago)
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation (☆238, updated 10 months ago)
- An extension of the nanoGPT repository for training small MoE models (☆202, updated 7 months ago)
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand (☆193, updated 4 months ago)
- Ring-attention experiments (☆154, updated last year)
- Memory-optimized Mixture of Experts (☆68, updated 2 months ago)
- Cataloging released Triton kernels (☆263, updated last month)
- PTX-Tutorial, written purely by AIs (OpenAI Deep Research and Claude 3.7) (☆66, updated 6 months ago)
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS (☆233, updated 5 months ago)
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code (☆421, updated 7 months ago)
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch -> CUDA problems (☆612, updated last week)
- Student version of Assignment 1 for Stanford CS336 - Language Modeling From Scratch (☆816, updated last month)
- Dion optimizer algorithm (☆369, updated 3 weeks ago)
- Efficient Triton implementation of Native Sparse Attention (☆238, updated 5 months ago)
- Survey: a collection of AWESOME papers and resources on the latest research in Mixture of Experts (☆135, updated last year)
- Simple & Scalable Pretraining for Neural Architecture Research (☆296, updated 2 months ago)
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models (☆220, updated last month)
- The evaluation framework for training-free sparse attention in LLMs (☆101, updated last week)
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule (☆331, updated last month)