jiahe7ay / infini-mini-transformer
This is a personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both the model and the training code.
☆58 · Updated last year
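For orientation, here is a minimal sketch of the compressive-memory step that Infini-attention adds on top of standard attention, following the formulation in Google's paper (a linear-attention memory with an ELU+1 feature map and a learned gate). The function and variable names are illustrative and not taken from this repository's code; carrying `memory` and `z` across segments is what gives the model long context at constant memory cost.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # Non-negative feature map used by linear attention: ELU(x) + 1.
    return F.elu(x) + 1.0

def infini_attention_step(q, k, v, memory, z, beta):
    """One segment of Infini-attention (illustrative names, not this repo's API).

    q, k, v: (batch, heads, seg_len, d_head) projections for the current segment.
    memory:  (batch, heads, d_head, d_head) compressive memory; start at zeros.
    z:       (batch, heads, d_head, 1) running normalizer; start at zeros.
    beta:    (heads,) learned gate between memory retrieval and local attention.
    """
    sq, sk = elu_plus_one(q), elu_plus_one(k)

    # Retrieve from the compressive memory: sigma(Q) @ M / (sigma(Q) @ z).
    a_mem = (sq @ memory) / (sq @ z).clamp(min=1e-6)

    # Ordinary causal attention over the local segment.
    a_local = F.scaled_dot_product_attention(q, k, v, is_causal=True)

    # Linear (associative) memory update with this segment's keys and values.
    memory = memory + sk.transpose(-2, -1) @ v
    z = z + sk.sum(dim=-2, keepdim=True).transpose(-2, -1)

    # Learned sigmoid gate blends long-range retrieval with local attention.
    g = torch.sigmoid(beta).view(1, -1, 1, 1)
    return g * a_mem + (1.0 - g) * a_local, memory, z
```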
Alternatives and similar repositories for infini-mini-transformer
Users interested in infini-mini-transformer are comparing it to the libraries listed below.
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated 2 years ago
- SuperCLUE-Math6: an exploration of a new generation of native Chinese multi-turn, multi-step mathematical reasoning datasets ☆60 · Updated last year
- NTK-scaled version of ALiBi position encoding in Transformer ☆69 · Updated 2 years ago
- An Experiment on Dynamic NTK Scaling RoPE (see the sketch after this list) ☆64 · Updated last year
- The complete training code of the open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆67 · Updated 2 years ago
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆43 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- ☆49 · Updated last year
- ☆36 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆45 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆125 · Updated 10 months ago
- ☆108 · Updated 4 months ago
- Code implementation of Baichuan's Dynamic NTK-ALiBi: inference over longer texts without fine-tuning ☆49 · Updated 2 years ago
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- Official implementation of “Training on the Benchmark Is Not All You Need” ☆37 · Updated 10 months ago
- Implementations of the online merging optimizers proposed in “Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment” ☆79 · Updated last year
- ☆83 · Updated last year
- Unofficial implementation of AlpaGasus ☆93 · Updated 2 years ago
- ☆96 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated 2 years ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆33 · Updated last year
- ☆119 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆68 · Updated 2 years ago
- Counting-Stars (★) ☆83 · Updated 5 months ago
- Reformatted Alignment ☆112 · Updated last year
- ☆84 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- ☆147 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆98 · Updated last year
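Several entries above, notably the Dynamic NTK Scaling RoPE experiment and the Baichuan Dynamic NTK-ALiBi implementation, rely on the same idea: rescale the position-encoding base with the current sequence length so the model handles inputs longer than its training context without fine-tuning. Below is a minimal sketch for the RoPE case, assuming the commonly used dynamic-NTK formula; the function name and default values are illustrative, not any listed repo's exact code.

```python
import torch

def dynamic_ntk_rope_tables(seq_len, dim, base=10000.0, max_trained_len=2048, alpha=1.0):
    """RoPE cos/sin tables whose rotary base grows with the input length.

    When seq_len exceeds the trained context, the base is rescaled so the
    low-frequency dimensions interpolate rather than extrapolate.
    """
    if seq_len > max_trained_len:
        # The dynamic-NTK rescaling commonly used for RoPE extrapolation.
        base = base * ((alpha * seq_len / max_trained_len) - (alpha - 1)) ** (dim / (dim - 2))
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    t = torch.arange(seq_len, dtype=torch.float32)
    freqs = torch.outer(t, inv_freq)  # (seq_len, dim // 2)
    return freqs.cos(), freqs.sin()
```

Because the rescaling only changes how the cos/sin tables are built, it can be applied at inference time to a frozen model, which is what "no fine-tuning required" refers to in these repos.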