yangjianxin1 / LongQLoRA
LongQLoRA: Extend Context Length of LLMs Efficiently
☆167 · Updated 2 years ago
Alternatives and similar repositories for LongQLoRA
Users interested in LongQLoRA are comparing it to the repositories listed below
- ☆331 · Updated last year
- Imitate OpenAI with Local Models ☆89 · Updated last year
- ☆147 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆67 · Updated 2 years ago
- Instruction-tuning toolkit for large language models (with FlashAttention support) ☆178 · Updated last year
- ☆96 · Updated 2 years ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆45 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆192 · Updated last year
- 1st Solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu.Inc ☆162 · Updated 4 months ago
- ☆181 · Updated 2 years ago
- Naive Bayes-based Context Extension ☆325 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 · Updated 2 years ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆257 · Updated last year
- ☆318 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆192 · Updated last year
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆410 · Updated 5 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion ☆514 · Updated last year
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆86 · Updated 2 years ago
- ☆129 · Updated 2 years ago
- ☆235 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- ☆233 · Updated last year
- Evaluating LLMs' multi-round chat capability by assessing conversations generated between two LLM instances ☆159 · Updated 7 months ago
- ☆164 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆225 · Updated 2 years ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆116 · Updated 2 years ago
- ☆146 · Updated last year
- SuperCLUE-Agent: a benchmark for evaluating the core capabilities of agents on native Chinese tasks ☆94 · Updated 2 years ago