brantondemoss / GrokkingComplexityLinks
Code for
☆27 · Updated 7 months ago
Alternatives and similar repositories for GrokkingComplexity
Users interested in GrokkingComplexity are comparing it to the repositories listed below.
- Simple repository for training small reasoning models ☆32 · Updated 6 months ago
- A JAX-like function transformation engine, but micro: microjax ☆33 · Updated 9 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆94 · Updated last week
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 7 months ago
- ☆81 · Updated last year
- ☆31 · Updated last year
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning ☆104 · Updated 3 weeks ago
- ☆53 · Updated last year
- ☆33 · Updated last year
- ☆100 · Updated 2 weeks ago
- Simple GRPO scripts and configurations ☆59 · Updated 6 months ago
- Evaluation of neuro-symbolic engines ☆38 · Updated last year
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆85 · Updated last year
- ☆83 · Updated 11 months ago
- ☆172 · Updated 3 months ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Official repo for InSTA: Towards Internet-Scale Training For Agents ☆52 · Updated 3 weeks ago
- ☆29 · Updated 3 months ago
- ☆56 · Updated 2 months ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 8 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆68 · Updated 3 months ago
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆56 · Updated 2 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆67 · Updated 11 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year
- ☆27 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 10 months ago
- ☆56 · Updated 8 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆190 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆149 · Updated 6 months ago