alexzhang13 / videogamebench
Benchmark environment for evaluating vision-language models (VLMs) on popular video games!
☆322 · Updated 7 months ago
Alternatives and similar repositories for videogamebench
Users interested in videogamebench are comparing it to the repositories listed below.
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- ☆86 · Updated 6 months ago
- ☆185 · Updated last month
- ☆98 · Updated 3 weeks ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆355 · Updated 6 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆125 · Updated 3 months ago
- RLP: Reinforcement as a Pretraining Objective ☆222 · Updated 3 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning". ☆284 · Updated last month
- Benchmarking Agentic LLM and VLM Reasoning On Games ☆221 · Updated last month
- ☆149 · Updated 5 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆137 · Updated 4 months ago
- Build your own visual reasoning model ☆416 · Updated last month
- OpenTinker is an RL-as-a-Service infrastructure for foundation models. ☆547 · Updated this week
- ☆113 · Updated 3 months ago
- LLM/VLM gaming agents and model evaluation through games. ☆844 · Updated last month
- ☆226 · Updated 10 months ago
- General multi-task deep RL agent ☆185 · Updated last year
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆334 · Updated 2 months ago
- Accompanying material for the sleep-time compute paper ☆118 · Updated 8 months ago
- Scaling RL on advanced reasoning models ☆656 · Updated 2 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a 62% absolute increase in evaluation accuracy. ☆65 · Updated 8 months ago
- Hypernetworks that adapt LLMs to specific benchmark tasks using only a textual task description as input ☆934 · Updated 7 months ago
- ☆118 · Updated 9 months ago
- Testing baseline LLM performance across various models ☆332 · Updated last week
- The official code implementation for "Cache-to-Cache: Direct Semantic Communication Between Large Language Models" ☆312 · Updated this week
- Code for the paper "Learning to Reason without External Rewards" ☆385 · Updated 6 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆366 · Updated last year
- ☆158 · Updated 8 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆583 · Updated 5 months ago
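One entry above describes memory layers: a trainable key-value lookup that adds parameters without adding FLOPs. A minimal NumPy sketch of the sparse-lookup idea follows; the shapes, top-k size, and function name are illustrative assumptions and are not taken from any listed repository.

```python
import numpy as np

def memory_layer(query, keys, values, k=4):
    """Sparse key-value memory lookup (illustrative sketch).

    Scores the query against every key, keeps only the top-k matches,
    and returns a softmax-weighted sum of their values. Only k value
    rows participate in the output, so compute per query stays small
    even when the memory table (the extra parameters) is large.
    """
    scores = keys @ query                         # (num_keys,)
    topk = np.argsort(scores)[-k:]                # indices of the k best keys
    w = np.exp(scores[topk] - scores[topk].max()) # numerically stable softmax
    w /= w.sum()                                  # weights over the top-k only
    return w @ values[topk]                       # (value_dim,)

rng = np.random.default_rng(0)
keys = rng.normal(size=(1024, 16))    # large memory: 1024 slots, key dim 16
values = rng.normal(size=(1024, 32))  # value dim 32
out = memory_layer(rng.normal(size=16), keys, values, k=4)
print(out.shape)  # (32,)
```

In a real model the keys and values would be trained parameters and the lookup would sit inside a transformer block; growing `keys`/`values` adds capacity while the per-query cost is dominated by the scoring step and the k selected rows.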