abacusai / Long-Context
This repository contains code and tooling for the Abacus.AI LLM Context Expansion project, including evaluation scripts and benchmark tasks that measure a model's information-retrieval capabilities under context expansion, along with key experimental results and instructions for reproducing and building on them.
☆ 582 · Updated last year
Related projects
Alternatives and complementary repositories for Long-Context
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining — ☆ 675 · Updated 7 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents — ☆ 536 · Updated last year
- Official repository for LongChat and LongEval — ☆ 512 · Updated 5 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions — ☆ 812 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning — ☆ 613 · Updated 5 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA — ☆ 301 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA — ☆ 623 · Updated 9 months ago
- Inference code for Mistral and Mixtral hacked up into original Llama implementation — ☆ 373 · Updated 11 months ago
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" — ☆ 220 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data — ☆ 488 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive…