Blkalkin / Optimal-TestTime
☆11 · Updated last week
Alternatives and similar repositories for Optimal-TestTime: users interested in Optimal-TestTime are comparing it to the repositories listed below.
- ☆48 · Updated 4 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆76 · Updated 6 months ago
- ☆27 · Updated 4 months ago
- Testing paligemma2 finetuning on a reasoning dataset ☆18 · Updated 3 months ago
- ☆76 · Updated 9 months ago
- ☆38 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆55 · Updated 7 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆39 · Updated last month
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆107 · Updated last month
- Codebase for the paper "The Remarkable Robustness of LLMs: Stages of Inference?" ☆17 · Updated 9 months ago
- ☆47 · Updated 7 months ago
- The first dense retrieval model that can be prompted like an LM ☆68 · Updated 6 months ago
- Functional Benchmarks and the Reasoning Gap ☆84 · Updated 5 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆85 · Updated last week
- ☆111 · Updated last month
- Train your own SOTA deductive reasoning model ☆81 · Updated 3 weeks ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆32 · Updated this week
- ☆67 · Updated 7 months ago
- ☆60 · Updated 11 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆56 · Updated last week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆166 · Updated 3 weeks ago
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆111 · Updated 9 months ago
- Kura is a simple reproduction of the CLIO paper, which uses language models to label user behaviour before clustering based on embeddings. ☆93 · Updated 2 months ago
- ☆74 · Updated 7 months ago
- Code and data for the paper "Why think step by step? Reasoning emerges from the locality of experience" ☆59 · Updated last year
- Combining Base and Instruction-Tuned Language Models for Better Synthetic Data Generation ☆26 · Updated last month
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆82 · Updated this week
- ☆15 · Updated 6 months ago
- Lean implementation of various multi-agent LLM methods, including Iteration of Thought (IoT) ☆107 · Updated last month