Victorwz / LongMem
Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
☆805 · Updated last year
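For orientation, here is a minimal sketch of the decoupled-memory idea the paper describes: a frozen backbone caches past key-value pairs in a memory bank, and later queries retrieve the top-k cached pairs and attend over them jointly with the local context. This is an illustrative toy, not the official implementation; the `MemoryBank` class, the FIFO eviction, the shapes, and the fusion rule below are all assumptions made for exposition.

```python
# Toy sketch of cached-memory retrieval fused with attention (NOT the
# official LongMem code; names and fusion rule are assumptions).
import torch
import torch.nn.functional as F

class MemoryBank:
    """Stores past (key, value) pairs evicted from the local context."""
    def __init__(self, capacity: int, dim: int):
        self.capacity = capacity
        self.keys = torch.empty(0, dim)
        self.values = torch.empty(0, dim)

    def add(self, k: torch.Tensor, v: torch.Tensor):
        # Append new pairs, dropping the oldest beyond capacity (FIFO).
        self.keys = torch.cat([self.keys, k])[-self.capacity:]
        self.values = torch.cat([self.values, v])[-self.capacity:]

    def retrieve(self, q: torch.Tensor, top_k: int):
        # k-NN lookup: score cached keys against each query and return
        # the top-k (key, value) pairs per query position.
        if self.keys.shape[0] == 0:
            return None, None
        scores = q @ self.keys.T                     # (q_len, mem_len)
        top_k = min(top_k, self.keys.shape[0])
        idx = scores.topk(top_k, dim=-1).indices     # (q_len, top_k)
        return self.keys[idx], self.values[idx]      # (q_len, top_k, dim)

def memory_augmented_attention(q, local_k, local_v, bank, top_k=4):
    """Attend jointly over the local context and retrieved memory."""
    k = local_k.unsqueeze(0).expand(q.shape[0], -1, -1)
    v = local_v.unsqueeze(0).expand(q.shape[0], -1, -1)
    mem_k, mem_v = bank.retrieve(q, top_k)
    if mem_k is not None:
        # Concatenate retrieved tokens after the local ones, per query.
        k = torch.cat([k, mem_k], dim=1)
        v = torch.cat([v, mem_v], dim=1)
    attn = F.softmax(q.unsqueeze(1) @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
    return (attn @ v).squeeze(1)                     # (q_len, dim)

# Toy usage: 8-dim vectors, 16 cached memory tokens, 6 local tokens.
bank = MemoryBank(capacity=1024, dim=8)
bank.add(torch.randn(16, 8), torch.randn(16, 8))
out = memory_augmented_attention(torch.randn(4, 8), torch.randn(6, 8), torch.randn(6, 8), bank)
```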
Alternatives and similar repositories for LongMem
Users interested in LongMem are comparing it to the repositories listed below.
- Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,136 · Updated last year
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs. ☆958 · Updated 10 months ago
- ☆1,039 · Updated 2 years ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆821 · Updated 2 years ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆628 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆555 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and benchmarks. ☆593 · Updated last year
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method. ☆1,463 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting with tools. ☆1,084 · Updated last year
- Dromedary: towards helpful, ethical, and reliable LLMs. ☆1,148 · Updated 3 months ago
- ☆367 · Updated 2 years ago
- PaL: Program-Aided Language Models (ICML 2023) ☆504 · Updated 2 years ago
- LOMO: LOw-Memory Optimization ☆989 · Updated last year
- Official repository for LongChat and LongEval ☆528 · Updated last year
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the user to interact with images during a conversation. ☆774 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆229 · Updated 2 years ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆726 · Updated 10 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆423 · Updated last year
- Decoupling Reasoning from Observations for Efficient Augmented Language Models ☆916 · Updated 2 years ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆646 · Updated last year
- Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆709 · Updated last year
- ☆444 · Updated 2 years ago
- ☆760 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,461 · Updated last year
- Reflexion: an autonomous agent with dynamic memory and self-reflection ☆388 · Updated last year
- Repo for the paper "Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration" ☆344 · Updated last year
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,041 · Updated last year
- Extend existing LLMs way beyond their original training length with constant memory usage, without retraining ☆710 · Updated last year
- ☆460 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ☆767 · Updated 10 months ago