jiahe7ay / infini-mini-transformer
This is a personal reimplementation of Google's Infini-Transformer, using a small 2B model. The project includes both the model and the training code.
☆58 · Updated last year
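Since the repository reimplements Google's Infini-Transformer, a minimal sketch of the compressive-memory attention that architecture builds on may help orient readers. This follows the segment-level update described in the Infini-attention paper (“Leave No Context Behind”, Munkhdalai et al., 2024); the function and argument names below are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # Non-negative feature map used by linear attention: sigma(x) = ELU(x) + 1.
    return F.elu(x) + 1.0

def infini_attention_segment(q, k, v, mem, z, beta, local_out):
    """Process one segment with a compressive memory (illustrative sketch).

    q, k, v:   (batch, seq, dim) projections for the current segment
    mem:       (batch, dim, dim) associative memory carried across segments
    z:         (batch, dim) running normalization term
    beta:      learned scalar gate between memory and local attention
    local_out: (batch, seq, dim) output of standard attention on this segment
    """
    sq, sk = elu_plus_one(q), elu_plus_one(k)
    # Retrieve long-range context accumulated from previous segments.
    retrieved = (sq @ mem) / (sq @ z.unsqueeze(-1)).clamp(min=1e-6)
    # Fold this segment's key/value associations into the memory.
    mem = mem + sk.transpose(1, 2) @ v
    z = z + sk.sum(dim=1)
    # A learned gate mixes long-term retrieval with local attention.
    g = torch.sigmoid(beta)
    return g * retrieved + (1.0 - g) * local_out, mem, z
```

Because `mem` and `z` have fixed size no matter how many segments have been consumed, memory cost stays constant while the effective context grows without bound.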
Alternatives and similar repositories for infini-mini-transformer
Users interested in infini-mini-transformer are comparing it to the repositories listed below.
- Code for “Scaling Laws of RoPE-based Extrapolation” ☆73 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆67 · Updated 2 years ago
- An Experiment on Dynamic NTK Scaling RoPE (the base-rescaling idea is sketched after this list) ☆64 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning ☆49 · Updated 2 years ago
- NTK-scaled version of the ALiBi position encoding in the Transformer. ☆69 · Updated 2 years ago
- ☆49 · Updated last year
- SuperCLUE-Math6: an exploration of a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆60 · Updated last year
- ☆105 · Updated 2 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆42 · Updated last year
- 1.4B sLLM for Chinese and English: HammerLLM🔨 ☆44 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆41 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT samples, 200,000 English multi-turn SFT samples, and … ☆18 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated 2 years ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- Code for the paper “RankingGPT: Empowering Large Language Models in Text Ranking with Progressive Enhancement” ☆35 · Updated last year
- ☆36 · Updated last year
- ☆115 · Updated last year
- ☆40 · Updated last year
- Unofficial implementation of AlpaGasus ☆93 · Updated 2 years ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024) ☆33 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need”. ☆36 · Updated 9 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continued pre-training improves … ☆34 · Updated 4 months ago
- Implementations of the online merging optimizers proposed in “Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment” ☆76 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆163 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- ☆147 · Updated last year
- ☆96 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆167 · Updated last year
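Several entries in this list (the Dynamic NTK scaling experiment and the Baichuan Dynamic NTK-ALiBi port above) revolve around the same trick: enlarge the rotary base as the sequence grows so the model can attend beyond its trained context without fine-tuning. Below is a minimal sketch of that base rescaling in the style of the dynamic-NTK RoPE variants; the function name and default values are assumptions for illustration, not taken from any of these repositories.

```python
import torch

def dynamic_ntk_rope_angles(dim, seq_len, base=10000.0,
                            max_trained_len=2048, scaling_factor=1.0):
    """Rotary angles with Dynamic NTK base scaling (illustrative sketch).

    Once the running sequence length exceeds the trained context window,
    the rotary base grows with it, so high-frequency dimensions keep
    their resolution while low-frequency ones are interpolated.
    """
    if seq_len > max_trained_len:
        # Dynamic NTK: the scaling factor is derived from the current length.
        alpha = (scaling_factor * seq_len / max_trained_len) - (scaling_factor - 1)
        base = base * alpha ** (dim / (dim - 2))
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    positions = torch.arange(seq_len, dtype=torch.float32)
    # Angle for every (position, frequency) pair; shape (seq_len, dim // 2).
    return torch.outer(positions, inv_freq)
```

The exponent `dim / (dim - 2)` is chosen so the highest-frequency dimension is left essentially untouched while the lowest-frequency one is stretched by roughly the full factor `alpha`, which is why this scheme extrapolates reasonably well without any fine-tuning.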