alxndrTL / ARC_LLMs
Evaluating major LLMs on the Abstraction and Reasoning Corpus
☆17 · Updated last year
Alternatives and similar repositories for ARC_LLMs
Users interested in ARC_LLMs are comparing it to the libraries listed below.
- ☆29 · Updated last year
- ☆57 · Updated last week
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- ☆28 · Updated 2 years ago
- ☆81 · Updated last year
- Create an AI capable of solving reasoning tasks it has never seen before. ☆95 · Updated 10 months ago
- Collection of autoregressive model implementations. ☆86 · Updated 5 months ago
- ☆114 · Updated last month
- Materials for the ConceptARC paper. ☆103 · Updated 11 months ago
- Code repository for the c-BTM paper. ☆107 · Updated 2 years ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆180 · Updated 2 months ago
- ARC gym: a data generation framework for the Abstraction & Reasoning Corpus. ☆24 · Updated last month
- Our solution for the ARC challenge 2024. ☆179 · Updated 3 months ago
- A Python library for automatically solving Abstraction and Reasoning Corpus (ARC) challenges using Claude and object-centric modeling. ☆23 · Updated 9 months ago
- ☆102 · Updated 2 months ago
- ☆30 · Updated 7 months ago
- Draw more samples. ☆194 · Updated last year
- ARLC, a probabilistic abductive reasoner for solving Raven's Progressive Matrices. ☆20 · Updated 3 weeks ago
- Multi-Domain Expert Learning. ☆66 · Updated last year
- LLM training in simple, raw C/CUDA. ☆15 · Updated 10 months ago
- My solution for the Abstraction and Reasoning Challenge on Kaggle. ☆10 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale. ☆109 · Updated 5 months ago
- Alice in Wonderland code base for experiments and raw experiment data. ☆131 · Updated 3 weeks ago
- Latent Program Network (from the "Searching Latent Program Spaces" paper). ☆98 · Updated last week
- A Python framework to streamline your ARC challenge solutions, from graphical displays to optimized Kaggle submissions. ☆13 · Updated 11 months ago
- ☆56 · Updated 10 months ago
- FastFeedForward Networks. ☆19 · Updated last year
- Large-scale 4D-parallel pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)*. ☆87 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated 2 years ago
- ☆53 · Updated last year