upiterbarg / diff_history
[ICML 2024] Official code release accompanying the paper "diff History for Neural Language Agents" (Piterbarg, Pinto, Fergus)
☆20 · Updated last year
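For context, the core idea of the paper is to replace full text observations in an agent's context window with Unix-style diffs between consecutive observations, so that more interaction turns fit into the same token budget. Below is a minimal, illustrative sketch of that idea using Python's difflib; the function name `diff_history_prompt`, the `max_history` parameter, and the choice of which observation stays verbatim are assumptions for illustration, not the repo's actual API.

```python
import difflib

def diff_history_prompt(observations, max_history=4):
    """Build a compact history string: earlier observations are represented
    as unified diffs against their predecessor, and only the latest
    observation is kept in full (an illustrative choice, not necessarily
    the paper's exact formulation)."""
    recent = observations[-max_history:]
    chunks = []
    for prev, curr in zip(recent, recent[1:]):
        diff = "\n".join(
            difflib.unified_diff(prev.splitlines(), curr.splitlines(), lineterm="")
        )
        chunks.append(diff if diff else "<no change>")
    chunks.append(recent[-1])  # most recent observation stays verbatim
    return "\n\n".join(chunks)
```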
Alternatives and similar repositories for diff_history
Users interested in diff_history are comparing it to the libraries listed below.
- Intrinsic Motivation from Artificial Intelligence Feedback ☆135 · Updated 2 years ago
- Learn online intrinsic rewards from LLM feedback ☆45 · Updated last year
- OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code (ICLR 2025). ☆73 · Updated last year
- Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models ☆66 · Updated 11 months ago
- Efficient baselines for autocurricula in JAX. ☆206 · Updated last year
- ☆110 · Updated last year
- Efficient World Models with Context-Aware Tokenization. ICML 2024 ☆115 · Updated last year
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL ☆122 · Updated last year
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆210 · Updated 2 years ago
- ☆128 · Updated last year
- Repo to reproduce the First-Explore paper results ☆39 · Updated last year
- An OpenAI gym environment to evaluate the ability of LLMs (e.g., GPT-4, Claude) in long-horizon reasoning and task planning in dynamic mult… ☆73 · Updated 2 years ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 11 months ago
- Benchmarking Agentic LLM and VLM Reasoning On Games ☆227 · Updated last month
- ☆16 · Updated last year
- SmartPlay is a benchmark for Large Language Models (LLMs). Uses a variety of games to test various important LLM capabilities as agents. … ☆145 · Updated last year
- Scaling scaling laws with board games. ☆53 · Updated 2 years ago
- ☆144 · Updated last year
- Lamorel is a Python library designed for RL practitioners eager to use Large Language Models (LLMs). ☆243 · Updated last month
- Code and configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆68 · Updated 9 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆184 · Updated 8 months ago
- Skill Design From AI Feedback ☆33 · Updated 11 months ago
- Dataset and benchmark for assessing LLMs in translating natural language descriptions of planning problems into PDDL ☆64 · Updated last year
- ☆57 · Updated last year
- ☆91 · Updated last week
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Code for Discovered Policy Optimisation (NeurIPS 2022) ☆12 · Updated 2 years ago
- Drop-in environment replacements that make your RL algorithm train faster. ☆21 · Updated last year
- Interpreting how transformers simulate agents performing RL tasks ☆90 · Updated 2 years ago
- Official implementation of the DECKARD Agent from the paper "Do Embodied Agents Dream of Pixelated Sheep?" ☆94 · Updated 2 years ago