NousResearch / Open-Reasoning-Tasks
A comprehensive repository of reasoning tasks for LLMs (and beyond)
☆452 · Updated last year
Alternatives and similar repositories for Open-Reasoning-Tasks
Users interested in Open-Reasoning-Tasks are comparing it to the repositories listed below.
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- ⚖️ Awesome LLM Judges ⚖️ ☆134 · Updated 7 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆760 · Updated this week
- ☆415 · Updated last year
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆315 · Updated 5 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆314 · Updated last month
- Fast parallel LLM inference for MLX ☆234 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- ☆179 · Updated 11 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Tutorial for building an LLM router ☆236 · Updated last year
- ☆164 · Updated 3 months ago
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 6 months ago
- ☆136 · Updated 8 months ago
- ☆128 · Updated 11 months ago
- smol models are fun too ☆92 · Updated last year
- Long context evaluation for large language models ☆224 · Updated 9 months ago
- Synthetic Data for LLM Fine-Tuning ☆119 · Updated 2 years ago
- Training-Ready RL Environments + Evals ☆185 · Updated this week
- System 2 Reasoning Link Collection ☆861 · Updated 8 months ago
- smolLM with Entropix sampler in PyTorch ☆149 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆250 · Updated 9 months ago
- function calling-based LLM agents ☆289 · Updated last year
- This repository allows reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 and ARC-AGI-2 benchmarks. ☆217 · Updated last week
- rl from zero pretrain, can it be done? yes. ☆281 · Updated 2 months ago
- A compact LLM pretrained in 9 days by using high-quality data ☆334 · Updated 7 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆499 · Updated last year
- A library for making RepE control vectors ☆668 · Updated 2 months ago
- ☆159 · Updated last year