keezen / ntk_alibi
NTK-scaled version of the ALiBi position encoding for Transformers.
☆67 · Updated last year
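The repo description is brief, so here is a minimal sketch of the idea in PyTorch: ALiBi's usual per-head slope schedule 2^(-8h/H), with the base enlarged by a context-extension factor in the spirit of NTK-aware RoPE scaling. The function names and the exact scaling rule below are illustrative assumptions, not this repo's code.

```python
import torch

def ntk_alibi_slopes(num_heads: int, scale: float = 1.0) -> torch.Tensor:
    """Per-head ALiBi slopes with an assumed NTK-style base adjustment.

    Vanilla ALiBi uses slopes 2^(-8h/H) for heads h = 1..H. Mirroring
    NTK-aware RoPE, we assume the base 2^8 is enlarged by the context
    extension factor, so the steepest heads keep roughly their original
    slope while the shallowest heads flatten the most.
    """
    base = (2 ** 8) * scale  # hypothetical NTK adjustment of the base
    exponents = torch.arange(1, num_heads + 1, dtype=torch.float32) / num_heads
    return base ** (-exponents)

def alibi_bias(seq_len: int, num_heads: int, scale: float = 1.0) -> torch.Tensor:
    """Causal ALiBi bias of shape (H, L, L), added to attention logits."""
    slopes = ntk_alibi_slopes(num_heads, scale)
    pos = torch.arange(seq_len)
    rel = (pos[None, :] - pos[:, None]).clamp(max=0).float()  # j - i, <= 0 for past keys
    return slopes[:, None, None] * rel[None, :, :]
```

With scale = 1.0 this reduces to vanilla ALiBi; extending the context from, say, 4k to 16k would correspond to scale = 4.0.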
Alternatives and similar repositories for ntk_alibi:
Users interested in ntk_alibi are comparing it to the libraries listed below.
- Code implementation of Baichuan's Dynamic NTK-ALiBi: inference over longer contexts without fine-tuning (see the sketch after this list) ☆47 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- SuperCLUE-Math6: a new-generation native-Chinese multi-turn, multi-step mathematical reasoning dataset ☆54 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- ☆84 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated last year
- ☆53 · Updated 3 years ago
- Zero-shot NLU & NLG based on the mengzi-t5-base-mt pretrained model ☆75 · Updated 2 years ago
- Chinese instruction tuning datasets ☆130 · Updated last year
- ☆59 · Updated last year
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆56 · Updated last year
- Complete training code for an open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆65 · Updated 2 years ago
- Source code for the ACL 2023 paper Decoder Tuning: Efficient Language Understanding as Decoding ☆49 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- ☆98 · Updated 7 months ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated last year
- Zero-shot learning evaluation benchmark, Chinese version ☆56 · Updated 3 years ago
- ☆97 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆78 · Updated 5 months ago
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- (NBCE) Naive Bayes-based Context Extension on ChatGLM-6b ☆14 · Updated last year
- Chinese large language model evaluation, round two ☆70 · Updated last year
- "WuDao" (悟道) data ☆43 · Updated 3 years ago
- ☆16 · Updated last year
- ☆172 · Updated 2 years ago
- Text deduplication ☆71 · Updated 11 months ago
- How to train an LLM tokenizer ☆144 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆95 · Updated last year
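The Baichuan Dynamic NTK-ALiBi entry above advertises training-free extrapolation by choosing the scale at inference time. A minimal sketch of that dynamic step, assuming the scale is recomputed from the current sequence length against a pre-training window (the function name, the 4096 default, and the max(1, ·) rule are illustrative assumptions, not that repo's code):

```python
def dynamic_ntk_scale(seq_len: int, train_len: int = 4096) -> float:
    # Short inputs keep vanilla ALiBi (scale 1); longer ones enlarge the
    # NTK base in proportion to how far past the training window we are.
    return max(1.0, seq_len / train_len)

# Rebuild the bias with the current scale at each forward pass, e.g.:
# bias = alibi_bias(seq_len, num_heads, scale=dynamic_ntk_scale(seq_len))
```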