stanford-cs336 / assignment2-systems
Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch
☆73 · Updated last month
Alternatives and similar repositories for assignment2-systems
Users interested in assignment2-systems are comparing it to the repositories listed below
- ☆39 · Updated 6 months ago
- Making the official Triton tutorials actually comprehensible (see the Triton kernel sketch after this list) ☆54 · Updated 3 weeks ago
- ☆217 · Updated 7 months ago
- ☆199 · Updated 8 months ago
- ☆171 · Updated last year
- ☆428 · Updated 3 weeks ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆274 · Updated last month
- LLM KV cache compression made easy ☆609 · Updated last week
- Dion optimizer algorithm ☆343 · Updated 2 weeks ago
- ☆47 · Updated 2 months ago
- An extension of the nanoGPT repository for training small MoE models ☆187 · Updated 6 months ago
- Ring-attention experiments ☆152 · Updated 11 months ago
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 5 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆185 · Updated 3 months ago
- Memory-optimized Mixture of Experts ☆65 · Updated last month
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆236 · Updated 9 months ago
- ☆243 · Updated 3 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆408 · Updated 6 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch -> CUDA problems ☆566 · Updated 3 weeks ago
- ☆638 · Updated last week
- Normalized Transformer (nGPT) ☆188 · Updated 10 months ago
- ☆35 · Updated last month
- Efficient LLM Inference over Long Sequences ☆391 · Updated 2 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆221 · Updated 4 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆293 · Updated last month
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆220 · Updated last week
- ☆20 · Updated 2 months ago
- Physics of Language Models, Part 4 ☆241 · Updated last month
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated this week
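Several of the entries above are Triton learning resources. For orientation, here is a minimal sketch of what a Triton kernel looks like, following the vector-add pattern from the official tutorials that those repositories aim to make comprehensible. The kernel name and launch helper below are illustrative, and running it assumes a CUDA-capable GPU with torch and triton installed.

```python
# Minimal Triton vector-add sketch (official tutorial pattern).
# Illustrative only; assumes a CUDA GPU with torch and triton installed.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the final, possibly partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    # 1D launch grid: one program per BLOCK_SIZE elements.
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out


if __name__ == "__main__":
    a = torch.rand(98432, device="cuda")
    b = torch.rand(98432, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```

The `BLOCK_SIZE` argument is a `tl.constexpr`, so it is fixed at compile time and the grid size is derived from it at launch; the mask keeps the last block from reading or writing past the end of the tensors.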