deepreinforce-ai / CUDA-L1
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning
☆193 · Updated last month
Alternatives and similar repositories for CUDA-L1
Users interested in CUDA-L1 are comparing it to the repositories listed below.
- Load compute kernels from the Hub ☆293 · Updated last week
- Efficient LLM Inference over Long Sequences ☆390 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆296 · Updated last month
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated this week
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆290 · Updated 2 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆125 · Updated last month
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆99 · Updated 2 months ago
- 👷 Build compute kernels ☆155 · Updated this week
- ☆251 · Updated 4 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆247 · Updated 8 months ago
- DeMo: Decoupled Momentum Optimization ☆193 · Updated 10 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆93 · Updated 5 months ago
- A collection of tricks and tools to speed up transformer models ☆182 · Updated this week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆96 · Updated this week
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 8 months ago
- ☆98 · Updated last month
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆164 · Updated last week
- ☆218 · Updated 8 months ago
- ☆60 · Updated 3 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆100 · Updated 3 months ago
- Work in progress. ☆74 · Updated 3 months ago
- Samples of good AI-generated CUDA kernels ☆91 · Updated 4 months ago
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆133 · Updated last month
- Train, tune, and infer the Bamba model ☆133 · Updated 4 months ago
- Quantized LLM training in pure CUDA/C++. ☆180 · Updated this week
- ☆152 · Updated 3 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆201 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆189 · Updated 3 months ago