SciMT / SciMT-benchmark
☆11 · Updated 2 years ago
Alternatives and similar repositories for SciMT-benchmark
Users interested in SciMT-benchmark are comparing it to the repositories listed below.
- PyTorch codes for the paper "An Empirical Study of Multimodal Model Merging" ☆37 · Updated 2 years ago
- MathFusion: Enhancing Mathematical Problem-solving of LLM through Instruction Fusion (ACL 2025) ☆35 · Updated 6 months ago
- MMSci: A Multimodal Multi-Discipline Dataset for PhD-Level Scientific Comprehension ☆51 · Updated last year
- Structured Chemistry Reasoning with Large Language Models ☆39 · Updated last year
- [NeurIPS'24 LanGame workshop] On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆41 · Updated 6 months ago
- Official Implementation of UA^{2}-Agent and other baseline algorithms of "Towards Unified Alignment Between Agents, Humans, and Environme… ☆19 · Updated last year
- ☆13 · Updated 8 months ago
- Pre-trained Language Model for Scientific Text ☆45 · Updated last year
- SciKnowEval: Evaluating Multi-level Scientific Knowledge of Large Language Models ☆25 · Updated 6 months ago
- [AAAI26] LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs ☆51 · Updated last month
- MetaLadder: Ascending Mathematical Solution Quality via Analogical-Problem Reasoning Transfer (EMNLP 2025) ☆11 · Updated 9 months ago
- ☆17 · Updated last year
- ☆23 · Updated last year
- Applies ROME and MEMIT on Mamba-S4 models ☆14 · Updated last year
- Exploring whether LLMs perform case-based or rule-based reasoning ☆30 · Updated last year
- The source code for running LLMs on the AAAR-1.0 benchmark ☆18 · Updated 9 months ago
- SCoRe: Training Language Models to Self-Correct via Reinforcement Learning ☆15 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- Code release for "CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning", ICLR 2025 ☆29 · Updated 9 months ago
- ☆19 · Updated 10 months ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated 2 years ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- Experiments for "A Closer Look at In-Context Learning under Distribution Shifts" ☆19 · Updated 2 years ago
- ☆17 · Updated 5 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o…