GoodAI / goodai-ltm-benchmark
A library for benchmarking the long-term memory and continual-learning capabilities of LLM-based agents, with all the tests and code you need to evaluate your own agents. See more in the accompanying blog post.
☆82 · Updated last year
Alternatives and similar repositories for goodai-ltm-benchmark
Users interested in goodai-ltm-benchmark are comparing it to the libraries listed below.
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 11 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆111 · Updated last year
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆95 · Updated 2 months ago
- ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive multi-agent systems. (EMNLP 2024 Demo) ☆89 · Updated 2 weeks ago
- ☆105 · Updated 11 months ago
- Accompanying material for the sleep-time compute paper ☆118 · Updated 7 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆190 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents ☆133 · Updated last year
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆66 · Updated last year
- ☆63 · Updated 6 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy** ☆64 · Updated 7 months ago
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 4 months ago
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- Source code of "How to Correctly do Semantic Backpropagation on Language-based Agentic Systems" 🤖 ☆76 · Updated last year
- Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation ☆49 · Updated last year
- An LLM reads a paper and produces a working prototype ☆60 · Updated 8 months ago
- ☆68 · Updated last year
- Official repo for "Learning to Reason for Long-Form Story Generation" ☆73 · Updated 8 months ago
- ☆40 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- Track the progress of LLM context utilisation ☆55 · Updated 8 months ago
- ☆55 · Updated last year
- EcoAssistant: using an LLM assistant more affordably and accurately ☆133 · Updated last year
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆84 · Updated 9 months ago
- ☆41 · Updated last year
- ☆125 · Updated 10 months ago