jfpuget / ARC-AGI-Challenge-2024
☆56 · Updated 11 months ago
Alternatives and similar repositories for ARC-AGI-Challenge-2024
Users interested in ARC-AGI-Challenge-2024 are comparing it to the repositories listed below.
- ☆81 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆167 · Updated 8 months ago
- Simple repository for training small reasoning models ☆44 · Updated 8 months ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆102 · Updated 10 months ago
- ☆53 · Updated last year
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆57 · Updated 5 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated last year
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆147 · Updated 3 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆131 · Updated 10 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆68 · Updated last year
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆71 · Updated 5 months ago
- ☆57 · Updated 3 weeks ago
- ☆102 · Updated 3 months ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆132 · Updated last year
- One Initialization to Rule Them All: Fine-tuning via Explained Variance Adaptation ☆44 · Updated last week
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, for single-machine microbatches, in PyTorch ☆25 · Updated 9 months ago
- Implementation of Infini-Transformer in PyTorch ☆113 · Updated 9 months ago
- LLM training in simple, raw C/CUDA ☆15 · Updated 10 months ago
- ☆91 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆168 · Updated 4 months ago
- ☆58 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆194 · Updated 10 months ago
- A Jax-like function transformation engine, but micro: microjax ☆33 · Updated last year
- Open-source AlphaEvolve ☆66 · Updated 5 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model, with or without FIM ☆59 · Updated last year
- ☆61 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated 2 weeks ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆193 · Updated last year