stanford-cs336 / assignment2-systems
Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch
☆151 · Updated 5 months ago
Alternatives and similar repositories for assignment2-systems
Users interested in assignment2-systems are comparing it to the repositories listed below.
- ☆45 · Updated 10 months ago
- ☆94 · Updated 5 months ago
- Based on Nano-vLLM, a simple replication of vLLM with self-contained paged attention and flash attention implementation ☆184 · Updated this week
- Miles is an enterprise-facing reinforcement learning framework for large-scale MoE post-training and production workloads, forked from an… ☆714 · Updated this week
- ☆233 · Updated last year
- ☆405 · Updated last year
- making the official triton tutorials actually comprehensible ☆93 · Updated 4 months ago
- mHC kernels implemented in CUDA ☆217 · Updated this week
- JAX backend for SGL ☆218 · Updated this week
- ☆465 · Updated 4 months ago
- Student version of Assignment 1 for Stanford CS336 - Language Modeling From Scratch ☆1,109 · Updated 4 months ago
- ☆224 · Updated last month
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆389 · Updated last week
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆328 · Updated 2 months ago
- An early research stage expert-parallel load balancer for MoE models based on linear programming. ☆485 · Updated last month
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆245 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆225 · Updated 10 months ago
- LLM KV cache compression made easy ☆749 · Updated last month
- ☆949 · Updated 2 months ago
- Cataloging released Triton kernels. ☆282 · Updated 4 months ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆385 · Updated 2 months ago
- ring-attention experiments ☆161 · Updated last year
- Accelerating MoE with IO and Tile-aware Optimizations ☆542 · Updated this week
- ☆39 · Updated 5 months ago
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 tr… ☆312 · Updated 2 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆283 · Updated 2 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆748 · Updated last week
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆635 · Updated 3 months ago
- Memory optimized Mixture of Experts ☆72 · Updated 5 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆773 · Updated last month
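Several entries in this list (the Nano-vLLM replication, the KV-cache compression projects) revolve around paged attention: storing the KV cache in fixed-size physical blocks and mapping each sequence's logical token positions to blocks through a per-sequence block table. A minimal toy sketch of that bookkeeping follows — all names and sizes here are illustrative, not taken from any listed repository:

```python
# Toy sketch of paged-attention KV-cache bookkeeping: a pool of fixed-size
# physical blocks plus a per-sequence block table. Illustrative only; not the
# actual API of vLLM, Nano-vLLM, or any repository listed above.

class PagedKVCache:
    def __init__(self, num_blocks, block_size=4):
        self.block_size = block_size
        self.free = list(range(num_blocks))  # pool of free physical block ids
        self.tables = {}                     # seq_id -> list of block ids
        self.lengths = {}                    # seq_id -> tokens written so far

    def append_token(self, seq_id):
        """Reserve cache space for one new token, allocating a block if the
        sequence's current block is full (or on its first token)."""
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:
            if not self.free:
                raise MemoryError("KV cache out of blocks")
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def slot(self, seq_id, pos):
        """Translate a logical token position into (physical block, offset)."""
        block = self.tables[seq_id][pos // self.block_size]
        return block, pos % self.block_size

    def free_seq(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)
```

For example, with `block_size=4`, appending 6 tokens to one sequence allocates two physical blocks, and logical position 5 resolves to offset 1 inside the second block. Because blocks are fixed-size and freed as whole units, memory fragments far less than with per-sequence contiguous buffers — the core motivation behind paged attention.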