declare-lab / LLM-PuzzleTest
This repository is maintained to release datasets and models for multimodal puzzle reasoning.
☆50 · Updated last month
Alternatives and similar repositories for LLM-PuzzleTest:
Users interested in LLM-PuzzleTest are comparing it to the libraries listed below.
- [NeurIPS 2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) · ☆75 · Updated 3 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" · ☆88 · Updated 3 months ago
- This is the official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" · ☆90 · Updated last month
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" · ☆47 · Updated 3 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models · ☆106 · Updated 8 months ago
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs · ☆42 · Updated 6 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] · ☆130 · Updated 3 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ☆147 · Updated last month
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners · ☆66 · Updated 2 weeks ago
- DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models · ☆71 · Updated last month
- Self-Alignment with Principle-Following Reward Models · ☆151 · Updated 10 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆46 · Updated last year
- Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis · ☆97 · Updated 2 weeks ago
- LL3M: Large Language and Multi-Modal Model in JAX · ☆68 · Updated 8 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning · ☆61 · Updated last month
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning · ☆33 · Updated 5 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision · ☆111 · Updated 4 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 · ☆26 · Updated 6 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models · ☆76 · Updated 6 months ago
- This is the repo for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" · ☆44 · Updated 2 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting · ☆24 · Updated 10 months ago
- A repository for research on medium-sized language models · ☆76 · Updated 7 months ago
- Code and data for the benchmark "Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models" · ☆36 · Updated 6 months ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective · ☆52 · Updated 2 months ago