microsoft / ConstrainedReasoner
☆13 · Updated last year
Alternatives and similar repositories for ConstrainedReasoner
Users interested in ConstrainedReasoner are comparing it to the libraries listed below.
- ☆13 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆95 · Updated 7 months ago
- Source code for our paper: "Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction A…" ☆47 · Updated last year
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- [ICLR 2025] DSBench: How Far are Data Science Agents from Becoming Data Science Experts? ☆98 · Updated 4 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆113 · Updated 5 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆114Updated 2 months ago
- ☆50 · Updated 11 months ago
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆55 · Updated 5 months ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆60 · Updated 2 years ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆173 · Updated last week
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆123 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models https://arxiv.org/pdf/2411.02433 ☆116 · Updated last year
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- [ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆86 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 11 months ago
- ☆97 · Updated last week
- A benchmark for evaluating learning agents based on just language feedback ☆93 · Updated 7 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆90 · Updated last year
- ☆108 · Updated last year
- ☆107 · Updated last month
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated 11 months ago
- ☆37 · Updated 4 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆62 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- ☆86 · Updated 11 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆162 · Updated last month
- ☆117 · Updated 11 months ago