CStanKonrad / long_llama
LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.
☆1,463 · Nov 7, 2023 · Updated 2 years ago
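For orientation, here is a minimal sketch of loading a LongLLaMA checkpoint through Hugging Face transformers. The checkpoint name `syzymon/long_llama_3b` and the use of `trust_remote_code` (to pull in the FoT-specific modeling code shipped with the checkpoint) are assumptions; check the repository README for the exact model identifiers and any memory/FoT-related options.

```python
# Minimal sketch (not the official snippet): load a LongLLaMA checkpoint via
# Hugging Face transformers. The checkpoint id "syzymon/long_llama_3b" and the
# trust_remote_code flag are assumptions; see the long_llama README for the
# exact names and FoT-specific arguments.
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b",
    torch_dtype=torch.float32,   # float32 for CPU; prefer float16/bfloat16 on GPU
    trust_remote_code=True,      # loads the custom (FoT) modeling code bundled with the checkpoint
)

prompt = "My name is Julien and I like to"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```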
Alternatives and similar repositories for long_llama
Users interested in long_llama are comparing it to the repositories listed below.
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,696 · Aug 14, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Jul 11, 2024 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,477 · Jun 7, 2025 · Updated 8 months ago
- Official repository for LongChat and LongEval ☆534 · May 24, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,669 · Apr 17, 2024 · Updated last year
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,530 · Jul 16, 2023 · Updated 2 years ago
- Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆714 · Jan 7, 2024 · Updated 2 years ago
- LOMO: LOw-Memory Optimization ☆987 · Jul 2, 2024 · Updated last year
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆823 · Mar 30, 2024 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆599 · Nov 17, 2023 · Updated 2 years ago
- Salesforce open-source LLMs with 8k sequence length. ☆724 · Jan 31, 2025 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,835 · Jun 10, 2024 · Updated last year
- OpenChat: Advancing Open-source Language Models with Imperfect Data ☆5,472 · Sep 13, 2024 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Mar 14, 2024 · Updated last year
- An Open-source Toolkit for LLM Development ☆2,804 · Jan 13, 2025 · Updated last year
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning. ☆5,525 · May 21, 2025 · Updated 8 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,924 · Dec 7, 2024 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,477 · Oct 31, 2023 · Updated 2 years ago
- Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,157 · Oct 30, 2025 · Updated 3 months ago
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,481 · May 1, 2025 · Updated 9 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,911 · Sep 30, 2023 · Updated 2 years ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,705 · Jun 25, 2024 · Updated last year
- prompt2model - Generate Deployable Models from Natural Language Instructions ☆2,007 · Dec 29, 2024 · Updated last year
- Instruction Tuning with GPT-4 ☆4,340 · Jun 11, 2023 · Updated 2 years ago
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,402 · Jun 2, 2025 · Updated 8 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,514 · Aug 13, 2024 · Updated last year
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) ☆12,717 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,891 · May 3, 2024 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,768 · Aug 4, 2024 · Updated last year
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,497 · Jan 28, 2026 · Updated 2 weeks ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Oct 16, 2023 · Updated 2 years ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆555 · Oct 28, 2023 · Updated 2 years ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,446 · Aug 12, 2024 · Updated last year
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆484 · Mar 19, 2024 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆18,978 · Jul 29, 2024 · Updated last year
- Large Language Model Text Generation Inference ☆10,757 · Jan 8, 2026 · Updated last month
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆426 · Dec 20, 2023 · Updated 2 years ago
- ☆1,057 · May 29, 2023 · Updated 2 years ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,087 · Jul 1, 2025 · Updated 7 months ago