PhialsBasement / AlphaEvolve-MatrixMul-Verification
Verification of Google DeepMind's AlphaEvolve algorithm that multiplies 4×4 matrices using 48 scalar multiplications, the first improvement over Strassen's 49-multiplication bound in 56 years.
☆ 124 · Updated 4 months ago
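For context on the headline number: the classical baseline for a 4×4 product is one level of Strassen's 2×2 scheme applied recursively, costing 7 × 7 = 49 scalar multiplications (versus 64 naively); AlphaEvolve's algorithm needs 48. A minimal counting sketch of that 49-multiplication baseline, assuming a simple list-of-lists representation (this is illustrative only, not the repo's actual verification code):

```python
# Count scalar multiplications when multiplying 4x4 matrices via
# recursive Strassen (1x1 base case). Illustrative sketch only.

mul_count = 0

def smul(a, b):
    """Scalar multiply, counting each call."""
    global mul_count
    mul_count += 1
    return a * b

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[smul(A[0][0], B[0][0])]]
    h = n // 2
    def quad(M, i, j):  # extract the (i, j) half-size block
        return [row[j*h:(j+1)*h] for row in M[i*h:(i+1)*h]]
    A11, A12, A21, A22 = quad(A,0,0), quad(A,0,1), quad(A,1,0), quad(A,1,1)
    B11, B12, B21, B22 = quad(B,0,0), quad(B,0,1), quad(B,1,0), quad(B,1,1)
    # Strassen's 7 block products
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[17, 18, 19, 20], [21, 22, 23, 24], [25, 26, 27, 28], [29, 30, 31, 32]]
C = strassen(A, B)
print(mul_count)  # 49 for recursive Strassen; AlphaEvolve's scheme uses 48
```

Replacing these 49 products with AlphaEvolve's 48 bilinear products (over complex coefficients) is exactly the kind of claim this repository's verification scripts check.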
Alternatives and similar repositories for AlphaEvolve-MatrixMul-Verification
Users interested in AlphaEvolve-MatrixMul-Verification are comparing it to the repositories listed below.
- Samples of good AI-generated CUDA kernels ☆ 91 · Updated 5 months ago
- ☆ 93 · Updated 4 months ago
- ☆ 147 · Updated 11 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆ 111 · Updated last month
- Open-source AlphaEvolve ☆ 66 · Updated 5 months ago
- RWKV-7: Surpassing GPT ☆ 98 · Updated 11 months ago
- Code repository of the paper "Competition and Attraction Improve Model Fusion" ☆ 163 · Updated 2 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆ 97 · Updated 5 months ago
- Train your own SOTA deductive reasoning model ☆ 109 · Updated 8 months ago
- Lightweight Llama 3 8B inference engine in CUDA C ☆ 48 · Updated 7 months ago
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆ 57 · Updated 5 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆ 61 · Updated last week
- noise_step: Training in 1.58b with no gradient memory ☆ 221 · Updated 10 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆ 203 · Updated last week
- Simple & scalable pretraining for neural architecture research ☆ 298 · Updated last week
- SoTA approach for ARC-AGI 2 ☆ 128 · Updated last month
- PyTorch implementation of models from the Zamba2 series ☆ 185 · Updated 9 months ago
- GRadient-INformed MoE ☆ 264 · Updated last year
- Source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆ 239 · Updated last week
- ☆ 154 · Updated 2 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆ 326 · Updated last year
- Tree Attention: Topology-aware decoding for long-context attention on GPU clusters ☆ 130 · Updated 11 months ago
- A collection of tricks and tools to speed up transformer models ☆ 185 · Updated last week
- Train, tune, and infer the Bamba model ☆ 135 · Updated 5 months ago
- Clue-inspired puzzles for testing LLM deduction abilities ☆ 44 · Updated 7 months ago
- GRPO training code that scales to 32× H100s for long-horizon terminal/coding tasks. The base agent is now the top Qwen3 agent on Stanford's T… ☆ 291 · Updated 2 months ago
- Accompanying material for the sleep-time compute paper ☆ 117 · Updated 6 months ago
- Code and data for the paper "Why think step by step? Reasoning emerges from the locality of experience" ☆ 62 · Updated 7 months ago
- Pivotal Token Search ☆ 131 · Updated 3 months ago
- Alice in Wonderland code base for experiments and raw experiment data ☆ 131 · Updated last month