abacusai / Long-Context
This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. It also includes evaluation scripts and benchmark tasks for measuring a model’s information-retrieval capabilities under context expansion, along with key experimental results and instructions for reproducing and building on them.
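The repository's own scripts are not reproduced here, but the kind of retrieval benchmark the description refers to is easy to illustrate. Below is a minimal sketch of a passkey-retrieval probe: a single fact is planted at varying depths inside long filler text, and the model is asked to recall it. The function names and the `generate` callable are hypothetical placeholders for illustration, not this repo's API.

```python
# Illustrative sketch only: a passkey-retrieval probe in the spirit of the
# long-context benchmarks described above. `generate` is any user-supplied
# callable that maps a prompt string to the model's completion string.
import random

FILLER = "The grass is green. The sky is blue. The sun is bright. "

def build_prompt(key: str, n_filler: int, depth: float) -> str:
    """Plant a retrievable fact at a relative depth inside filler context."""
    filler = FILLER * n_filler
    cut = int(len(filler) * depth)
    needle = f" The secret passkey is {key}. "
    question = "\n\nWhat is the secret passkey? Reply with the passkey only."
    return filler[:cut] + needle + filler[cut:] + question

def passkey_score(generate, n_filler: int = 2000,
                  depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> float:
    """Fraction of insertion depths at which the model returns the key."""
    hits = 0
    for depth in depths:
        key = str(random.randint(10000, 99999))
        hits += key in generate(build_prompt(key, n_filler, depth))
    return hits / len(depths)
```

A model whose effective context ends before the needle's position will typically fail at the deeper insertion points, which is exactly what probes like this are designed to expose.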
☆584 · Updated last year
Alternatives and similar repositories for Long-Context:
Users interested in Long-Context are comparing it to the libraries listed below.
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆542 · Updated last year
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆691 · Updated 11 months ago
- Official repository for LongChat and LongEval ☆515 · Updated 10 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs ☆927 · Updated 5 months ago
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆226 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆820 · Updated last year
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆463 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆544 · Updated last year
- Inference code for Mistral and Mixtral hacked up into original Llama implementation ☆371 · Updated last year
- ☆412 · Updated last year
- A bagel, with everything. ☆317 · Updated 11 months ago
- Salesforce open-source LLMs with 8k sequence length. ☆716 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆383 · Updated 7 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆217 · Updated 11 months ago
- NexusRaven-13B, a new SOTA Open-Source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRaven-13B ☆313 · Updated last year
- ☆458 · Updated last year
- ☆504 · Updated 4 months ago
- Run evaluation on LLMs using human-eval benchmark ☆402 · Updated last year
- LOMO: LOw-Memory Optimization ☆982 · Updated 8 months ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆787 · Updated last year
- ☆517 · Updated 7 months ago
- Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,124 · Updated last year
- batched loras ☆340 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆460 · Updated last month
- YaRN: Efficient Context Window Extension of Large Language Models (see the RoPE rescaling sketch after this list) ☆1,455 · Updated 11 months ago
- ☆268 · Updated last year
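Several entries above (YaRN, LongChat, and the constant-memory extension repo) revolve around the same underlying idea: rescale rotary position embeddings (RoPE) so that positions beyond the trained window map back into the range the model saw during training. Below is a minimal sketch of the simplest variant, linear position interpolation; the names and shapes are illustrative assumptions, and YaRN itself applies a more refined per-frequency rescaling rather than this uniform one.

```python
# Minimal sketch of linear position interpolation for RoPE. This is the
# baseline technique that methods like YaRN refine; it is not any listed
# repo's actual implementation.
import torch

def rope_angles(positions: torch.Tensor, head_dim: int,
                base: float = 10000.0, scale: float = 1.0) -> torch.Tensor:
    """Rotation angles per (position, frequency). scale > 1 squeezes new,
    longer positions into the original trained range (e.g. scale=4 maps
    8k positions onto the 2k range the model was trained on)."""
    inv_freq = base ** (-torch.arange(0, head_dim, 2).float() / head_dim)
    return torch.outer(positions.float() / scale, inv_freq)

def apply_rope(x: torch.Tensor, positions: torch.Tensor,
               scale: float = 1.0) -> torch.Tensor:
    """Rotate channel pairs of x (shape [seq_len, head_dim]) by position."""
    angles = rope_angles(positions, x.shape[-1], scale=scale)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

With scale=1.0 this is standard RoPE; raising the scale trades some positional resolution for a longer usable window, which is why interpolation-based methods typically pair the rescaling with a short fine-tune.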