jfpuget / ARC-AGI-Challenge-2024
☆ 56 · Updated 10 months ago
Alternatives and similar repositories for ARC-AGI-Challenge-2024
Users interested in ARC-AGI-Challenge-2024 are comparing it to the libraries listed below.
- ☆ 81 · Updated last year
- Simple repository for training small reasoning models · ☆ 40 · Updated 8 months ago
- Implementation of Infini-Transformer in PyTorch · ☆ 113 · Updated 9 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" · ☆ 102 · Updated 9 months ago
- Implementation of the Llama architecture with RLHF + Q-learning · ☆ 168 · Updated 8 months ago
- Open-source implementation of AlphaEvolve · ☆ 67 · Updated 4 months ago
- Triton Implementation of HyperAttention Algorithm · ☆ 48 · Updated last year
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources · ☆ 147 · Updated last week
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind · ☆ 57 · Updated 4 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" · ☆ 85 · Updated 3 weeks ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆ 84 · Updated 11 months ago
- ☆ 85 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation · ☆ 42 · Updated 11 months ago
- ☆ 48 · Updated last year
- ☆ 57 · Updated last week
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster · ☆ 71 · Updated 4 months ago
- Collection of autoregressive model implementations · ☆ 86 · Updated 5 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers · ☆ 72 · Updated 5 months ago
- ☆ 58 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆ 164 · Updated 3 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training · ☆ 132 · Updated last year
- We study toy models of skill learning. · ☆ 31 · Updated 8 months ago
- JAX-like function transformation engine, but micro: microjax · ☆ 32 · Updated 11 months ago
- Explorations into the recently proposed Taylor Series Linear Attention · ☆ 100 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… · ☆ 99 · Updated 2 months ago
- σ-GPT: A New Approach to Autoregressive Models · ☆ 68 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. · ☆ 32 · Updated 4 months ago
- ☆ 114 · Updated last month
- ☆ 53 · Updated last year
- ☆ 91 · Updated last year