Victorwz / LongMem
Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
☆784 · Updated 11 months ago
Alternatives and similar repositories for LongMem:
Users interested in LongMem are comparing it to the libraries listed below.
- Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,118 · Updated last year
- ☆1,028 · Updated last year
- LOMO: LOw-Memory Optimization ☆981 · Updated 8 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆919 · Updated 4 months ago
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,139 · Updated last year
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆628 · Updated last year
- Official repository for LongChat and LongEval ☆517 · Updated 9 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆541 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆615 · Updated 7 months ago
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,391 · Updated last year
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,450 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆796 · Updated 8 months ago
- Customizable implementation of the self-instruct paper. ☆1,038 · Updated 11 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆689 · Updated 10 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,432 · Updated 10 months ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,088 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,113 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆819 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆542 · Updated 11 months ago
- Decoupling Reasoning from Observations for Efficient Augmented Language Models ☆893 · Updated last year
- ☆353 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) ☆482 · Updated last year
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆766 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆226 · Updated last year
- Repo for the paper "Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration" ☆319 · Updated 9 months ago
- Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆699 · Updated last year
- Reflexion: an autonomous agent with dynamic memory and self-reflection ☆384 · Updated last year
- [NeurIPS 2023] RRHF & Wombat ☆804 · Updated last year
- ☆444 · Updated last year