SakanaAI / robust-kbench
☆81 · Updated 2 months ago
Alternatives and similar repositories for robust-kbench
Users interested in robust-kbench are comparing it to the repositories listed below.
- The evaluation framework for training-free sparse attention in LLMs ☆110 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality ☆314 · Updated 3 weeks ago
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆115 · Updated 2 months ago
- Multi-Turn RL Training System with AgentTrainer for Language Model Game Reinforcement Learning ☆58 · Updated last month
- [COLM 2025] Code for Paper: Learning Adaptive Parallel Reasoning with Language Models ☆138 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated 6 months ago
- Defeating the Training-Inference Mismatch via FP16 ☆180 · Updated 2 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆357 · Updated this week
- ☆62 · Updated 6 months ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆288 · Updated 2 months ago
- ☆133 · Updated 8 months ago
- Code for studying the super weight in LLM ☆120 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆85 · Updated 6 months ago
- ☆104 · Updated 11 months ago
- A Gym for Agentic LLMs ☆437 · Updated last week
- ThetaEvolve: Test-time Learning on Open Problems, enabling RL training on AlphaEvolve/OpenEvolve and emphasizing scaling test-time comput… ☆107 · Updated last month
- Fast and memory-efficient exact attention ☆75 · Updated 10 months ago
- Spectral Sphere Optimizer ☆90 · Updated 2 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆186 · Updated last week
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆68 · Updated 9 months ago
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- ☆71 · Updated 2 weeks ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- ☆112 · Updated last year
- Memory optimized Mixture of Experts ☆72 · Updated 6 months ago
- Esoteric Language Models ☆109 · Updated 2 months ago
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning. ☆125 · Updated 2 months ago
- ☆39 · Updated last year