DachengLi1 / LongChat
Official repository for LongChat and LongEval
☆520 · Updated last year
Alternatives and similar repositories for LongChat
Users interested in LongChat are comparing it to the libraries listed below.
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- Code for fine-tuning Platypus-family LLMs using LoRA ☆629 · Updated last year
- ☆270 · Updated 2 years ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆379 · Updated 11 months ago
- [NIPS2023] RRHF & Wombat ☆808 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,258 · Updated 3 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆336 · Updated 8 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆617 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆423 · Updated last year
- Fast Inference Solutions for BLOOM ☆564 · Updated 8 months ago
- ☆459 · Updated last year
- A bagel, with everything. ☆321 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆505 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆463 · Updated last year
- ☆361 · Updated 2 years ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆589 · Updated last year
- ☆756 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- distributed trainer for LLMs ☆577 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆731 · Updated 8 months ago
- LongBench v2 and LongBench (ACL '25 & '24) ☆903 · Updated 5 months ago
- train llama on a single A100 80G node using 🤗 transformers and 🚀 Deepspeed Pipeline Parallelism ☆221 · Updated last year
- Naive Bayes-based Context Extension ☆326 · Updated 6 months ago
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆946 · Updated 8 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆306 · Updated 9 months ago
- LOMO: LOw-Memory Optimization ☆987 · Updated 11 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆550 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆698 · Updated last year
- batched loras ☆343 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ☆252 · Updated last year