cosmoquester / memoria
Memoria is a human-inspired memory architecture for neural networks.
☆59 · Updated 3 months ago
Alternatives and similar repositories for memoria:
Users interested in memoria are comparing it to the libraries listed below:
- ☆81 · Updated last year
- Code repository for the c-BTM paper ☆105 · Updated last year
- ☆25 · Updated 4 months ago
- ☆94 · Updated last year
- Official code for ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L…" ☆42 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆70 · Updated 4 months ago
- ☆98 · Updated last week
- entropix style sampling + GUI ☆25 · Updated 3 months ago
- A repository for research on medium sized language models. ☆76 · Updated 8 months ago
- [NeurIPS '23 Spotlight] Thought Cloning: Learning to Think while Acting by Imitating Human Thinking ☆260 · Updated 7 months ago
- OMNI: Open-endedness via Models of human Notions of Interestingness ☆39 · Updated this week
- Repository for the paper Stream of Search: Learning to Search in Language ☆125 · Updated 5 months ago
- Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models ☆47 · Updated 7 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated 7 months ago
- ☆48 · Updated 2 months ago
- ☆140 · Updated 8 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆163 · Updated this week
- Official implementation of the DECKARD Agent from the paper "Do Embodied Agents Dream of Pixelated Sheep?" ☆90 · Updated last year
- Dataset and benchmark for assessing LLMs in translating natural language descriptions of planning problems into PDDL ☆48 · Updated 3 months ago
- ☆41 · Updated last year
- Efficient World Models with Context-Aware Tokenization. ICML 2024 ☆89 · Updated 4 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆38 · Updated 8 months ago
- ☆74 · Updated last year
- ☆76 · Updated 6 months ago
- ☆27 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆100 · Updated last month
- Demonstration that finetuning a RoPE model on sequences longer than the pre-trained model's adapts the model's context limit ☆63 · Updated last year
- ☆37 · Updated 6 months ago
- ☆62 · Updated 4 months ago