deepreinforce-ai / CUDA-L1
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning
☆178 Updated 2 weeks ago
Alternatives and similar repositories for CUDA-L1
Users interested in CUDA-L1 are comparing it to the repositories listed below.
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆97 Updated 3 weeks ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆145 Updated this week
- A collection of tricks and tools to speed up transformer models ☆170 Updated 2 months ago
- ☆403 Updated this week
- Work in progress. ☆72 Updated last month
- Efficient LLM Inference over Long Sequences ☆389 Updated 2 months ago
- ☆237 Updated 2 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆289 Updated this week
- GRadient-INformed MoE ☆265 Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series. ☆184 Updated 7 months ago
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training". ☆132 Updated 2 weeks ago
- Load compute kernels from the Hub ☆244 Updated last week
- DeMo: Decoupled Momentum Optimization ☆190 Updated 8 months ago
- 👷 Build compute kernels ☆106 Updated last week
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆59 Updated 9 months ago
- Samples of good AI generated CUDA kernels ☆89 Updated 2 months ago
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆261 Updated 3 weeks ago
- open source alpha evolve ☆67 Updated 3 months ago
- Train, tune, and infer Bamba model ☆131 Updated 2 months ago
- Focused on fast experimentation and simplicity ☆76 Updated 8 months ago
- Just another reasonably minimal repo for class-conditional training of pixel-space diffusion transformers. ☆122 Updated 2 months ago
- LLM Inference on consumer devices ☆124 Updated 5 months ago
- RWKV-7: Surpassing GPT ☆94 Updated 9 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 Updated 5 months ago
- Making Flux go brrr on GPUs. ☆131 Updated last month
- ☆128 Updated last month
- [WIP] Better (FP8) attention for Hopper ☆33 Updated 6 months ago
- Lightweight toolkit package to train and fine-tune 1.58bit Language models ☆83 Updated 3 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆245 Updated 6 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆129 Updated 8 months ago