deepreinforce-ai / CUDA-L1
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning
☆131 · Updated this week
Alternatives and similar repositories for CUDA-L1
Users interested in CUDA-L1 are comparing it to the libraries listed below.
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training". ☆131 · Updated last month
- Simple & Scalable Pretraining for Neural Architecture Research ☆283 · Updated this week
- Load compute kernels from the Hub ☆220 · Updated this week
- Just another reasonably minimal repo for class-conditional training of pixel-space diffusion transformers. ☆120 · Updated 2 months ago
- A collection of tricks and tools to speed up transformer models ☆169 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆184 · Updated 6 months ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 8 months ago
- GRadient-INformed MoE ☆264 · Updated 10 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆94 · Updated last week
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆59 · Updated 9 months ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆81 · Updated this week
- Focused on fast experimentation and simplicity ☆76 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆127 · Updated 8 months ago
- Open-source AlphaEvolve ☆66 · Updated 2 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆68 · Updated 3 months ago
- Esoteric Language Models ☆89 · Updated last week
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆94 · Updated 2 weeks ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere); see the hypersphere-normalization sketch after this list ☆103 · Updated 5 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated last year
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆142 · Updated this week
- Work in progress. ☆70 · Updated last month
- Train, tune, and run inference with the Bamba model ☆130 · Updated 2 months ago
- Exploring Applications of GRPO; see the advantage-normalization sketch after this list ☆245 · Updated 3 weeks ago
- Efficient LLM Inference over Long Sequences ☆385 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆86 · Updated last month
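Two of the entries above name techniques concrete enough to sketch. First, the nGPT reproduction: its central idea is that after every optimizer step, all weight vectors are renormalized onto the unit hypersphere. Below is a minimal, illustrative PyTorch sketch of that projection step (the function name and the `1e-8` floor are assumptions for illustration, not code from the repository):

```python
import torch

def project_to_hypersphere(model: torch.nn.Module) -> None:
    """Illustrative sketch: renormalize every 2-D weight matrix row-wise
    to unit L2 norm, keeping each weight vector on the unit hypersphere."""
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() == 2:  # linear and embedding weight matrices
                p.div_(p.norm(dim=-1, keepdim=True).clamp_min(1e-8))

# Typical placement in a training loop (sketch):
#   loss.backward()
#   optimizer.step()
#   project_to_hypersphere(model)  # re-project weights after each update
```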
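Second, the GRPO repository: the core of Group Relative Policy Optimization is group-relative advantage estimation, where each sampled completion's reward is standardized against the other completions drawn for the same prompt, so no learned value model is needed. A minimal sketch under that reading (function name and `eps` are illustrative):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: standardize each completion's reward
    against the mean/std of its own group of sampled completions.

    rewards: (num_prompts, group_size) scalar rewards per completion.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each.
rewards = torch.tensor([[1.0, 0.0, 0.5, 0.5],
                        [0.2, 0.8, 0.2, 0.8]])
print(grpo_advantages(rewards))
```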