kyegomez / LongNet
Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
☆700 · Updated last year
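The paper's core mechanism is dilated attention: the sequence is split into fixed-length segments, only every r-th token inside each segment is kept, attention is computed within the sparsified segments, and the outputs are scattered back. Below is a minimal sketch of that idea, assuming PyTorch ≥ 2.0; it uses a single (segment length, dilation) pair rather than the paper's mixture of patterns, and the names `dilated_attention`, `segment_len`, and `dilation` are illustrative, not this repository's actual API.

```python
import torch
import torch.nn.functional as F


def dilated_attention(q, k, v, segment_len=64, dilation=2):
    # q, k, v: (batch, seq_len, heads, head_dim); seq_len must be divisible
    # by segment_len, and segment_len by dilation.
    b, n, h, d = q.shape
    out = torch.zeros_like(q)
    # Within each segment, keep every `dilation`-th token.
    idx = torch.arange(n).view(-1, segment_len)[:, ::dilation].reshape(-1)
    qs, ks, vs = (t[:, idx] for t in (q, k, v))
    s = segment_len // dilation  # tokens kept per segment

    def per_segment(t):
        # (batch, segments * s, heads, dim) -> (batch * segments, heads, s, dim)
        return t.view(b, -1, s, h, d).transpose(2, 3).reshape(-1, h, s, d)

    attn = F.scaled_dot_product_attention(
        per_segment(qs), per_segment(ks), per_segment(vs)
    )
    # Undo the segment reshape and scatter outputs back to original positions.
    attn = attn.reshape(b, -1, h, s, d).transpose(2, 3).reshape(b, -1, h, d)
    out[:, idx] = attn
    return out


# Toy usage: a 4k-token sequence with one segment/dilation pattern.
q = k = v = torch.randn(1, 4096, 8, 64)
y = dilated_attention(q, k, v, segment_len=128, dilation=4)
print(y.shape)  # torch.Size([1, 4096, 8, 64])
```

Because each segment attends only over its kept tokens, the cost grows linearly with sequence length for a fixed pattern; the paper combines several patterns (and per-head offsets) to recover coverage of the full context.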
Alternatives and similar repositories for LongNet:
Users interested in LongNet are comparing it to the libraries listed below.
- LOMO: LOw-Memory Optimization☆980 · Updated 8 months ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".☆784 · Updated 11 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA☆628 · Updated last year
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform…☆1,450 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch☆639 · Updated 2 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining☆690 · Updated 11 months ago
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive…☆920 · Updated 4 months ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333☆1,092 · Updated last year
- Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".☆1,119 · Updated last year
- ☆540 · Updated 2 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers☆422 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture.☆760 · Updated 4 months ago
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit…☆1,040 · Updated last year
- ☆1,027 · Updated last year
- ☆457 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models☆1,443 · Updated 10 months ago
- Dromedary: towards helpful, ethical and reliable LLMs.☆1,143 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input"☆1,059 · Updated last year
- Salesforce open-source LLMs with 8k sequence length.☆717 · Updated last month
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition☆617 · Updated 7 months ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench…☆585 · Updated last year
- Official repository for LongChat and LongEval☆516 · Updated 9 months ago
- ☆412 · Updated last year
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”☆951 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s☆706 · Updated last year
- Ongoing research training transformer models at scale☆381 · Updated 6 months ago
- Convolutions for Sequence Modeling☆877 · Updated 8 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits☆719 · Updated 9 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍☆921 · Updated last year
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI☆2,007 · Updated 7 months ago