epang080516 / arc_agi
SoTA Approach for ARC-AGI 2
★157 · Updated 4 months ago
Alternatives and similar repositories for arc_agi
Users interested in arc_agi are comparing it to the repositories listed below.
- ★67 · Updated 6 months ago
- 🧬 The Huxley-Gödel Machine · ★319 · Updated last month
- The State Of The Art, intelligence · ★157 · Updated 5 months ago
- Implementation of SOAR · ★48 · Updated 4 months ago
- ★133 · Updated last year
- Codebase from our first release. · ★41 · Updated 2 weeks ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" · ★285 · Updated last month
- smol models are fun too · ★93 · Updated last year
- Plotting (entropy, varentropy) for small LMs · ★99 · Updated 8 months ago
- ★59 · Updated 11 months ago
- ★106 · Updated 6 months ago
- look how they massacred my boy · ★63 · Updated last year
- ★136 · Updated 10 months ago
- ★313 · Updated last month
- Official CLI and Python SDK for Prime Intellect - access GPU compute, remote sandboxes, RL environments, and distributed training infrast… · ★138 · Updated this week
- ★40 · Updated last year
- ★189 · Updated last year
- rl from zero pretrain, can it be done? yes. · ★286 · Updated 3 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. · ★435 · Updated 2 weeks ago
- Automated Capability Discovery via Foundation Model Self-Exploration · ★66 · Updated 11 months ago
- smolLM with Entropix sampler on pytorch · ★149 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. · ★101 · Updated 6 months ago
- Exploration into the proposed architecture from Sapient Intelligence of Singapore 🇸🇬 · ★73 · Updated 5 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. · ★175 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. · ★345 · Updated last year
- Storing long contexts in tiny caches with self-study · ★231 · Updated last month
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… · ★127 · Updated 3 months ago
- ★114 · Updated 3 months ago
- Testing baseline LLMs performance across various models · ★335 · Updated last week
- accompanying material for sleep-time compute paper · ★118 · Updated 8 months ago