yangjianxin1 / LongQLoRA
LongQLoRA: Extend Context Length of LLMs Efficiently
☆166 · Updated last year
Alternatives and similar repositories for LongQLoRA
Users interested in LongQLoRA are comparing it to the libraries listed below.
- ☆141 · Updated last year
- ☆322 · Updated 11 months ago
- ☆142 · Updated 11 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆158 · Updated 9 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆261 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆250 · Updated 6 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models ☆372 · Updated 9 months ago
- ☆288 · Updated 10 months ago
- ☆169 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆556 · Updated 6 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆251 · Updated 2 weeks ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆79 · Updated 7 months ago
- ☆63 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆221 · Updated last year
- ☆162 · Updated 2 years ago
- 1st-place solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24, by Xiaohongshu Inc. ☆160 · Updated last year
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆114 · Updated 2 years ago
- ☆222 · Updated last year
- Imitate OpenAI with Local Models ☆87 · Updated 9 months ago
- ☆228 · Updated last year
- An instruction-tuning toolkit for large language models (supports FlashAttention) ☆173 · Updated last year
- Naive Bayes-based Context Extension ☆326 · Updated 6 months ago
- ☆94 · Updated 6 months ago
- LLaMA-2 fine-tuning with DeepSpeed and LoRA ☆175 · Updated last year
- ☆281 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆140 · Updated last month
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 9 months ago
- ☆108 · Updated 7 months ago