Victorwz / LongMem
Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
☆755 · Updated 5 months ago
Related projects:
- Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,077 · Updated 8 months ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆625 · Updated 7 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆858 · Updated 4 months ago
- Customizable implementation of the self-instruct paper. ☆1,004 · Updated 6 months ago
- LOMO: LOw-Memory Optimization ☆974 · Updated 2 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,306 · Updated 5 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆810 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,021 · Updated 8 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆657 · Updated 5 months ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,446 · Updated 10 months ago
- ☆1,014 · Updated last year
- Official repository for LongChat and LongEval ☆504 · Updated 3 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆530 · Updated 10 months ago
- Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆675 · Updated 8 months ago
- Dromedary: towards helpful, ethical, and reliable LLMs. ☆1,106 · Updated 10 months ago
- ☆338 · Updated last year
- Decoupling Reasoning from Observations for Efficient Augmented Language Models ☆873 · Updated last year
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆561 · Updated 8 months ago
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,329 · Updated 10 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆405 · Updated 8 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,413 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full process from pre-training to RLHF. ☆19 · Updated last year
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,453 · Updated 3 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆753 · Updated 9 months ago
- Code for our ACL 2023 Paper "Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models". ☆574 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ☆717 · Updated 8 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,513 · Updated last month
- A library for advanced large language model reasoning ☆1,124 · Updated 2 weeks ago
- A tiny library for coding with large language models. ☆1,205 · Updated 2 months ago
- Official repo for MM-REACT ☆927 · Updated 7 months ago