jiahe7ay / infini-mini-transformer
This is a personal reimplementation of Google's Infini-Transformer, using a small 2B model. The project includes both the model and the training code.
☆58 · Updated last year
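For context on the technique being reimplemented: Infini-attention augments standard local attention with a compressive memory that is updated segment by segment, letting the model attend over arbitrarily long context at bounded memory cost. Below is a minimal PyTorch sketch of the paper's linear memory update; the function and variable names are illustrative and do not reflect this repo's actual API.

```python
import torch
import torch.nn.functional as F

def infini_attention_segment(q, k, v, memory, z, beta):
    """One segment of Infini-attention (illustrative sketch).

    q, k, v: (batch, heads, seg_len, head_dim)
    memory:  (batch, heads, head_dim, head_dim) compressive memory,
             initialized to zeros before the first segment
    z:       (batch, heads, head_dim, 1) normalization term, also zeros
    beta:    (heads,) learned gate mixing memory and local attention
    """
    sigma_q = F.elu(q) + 1.0  # non-negative feature map from the paper
    sigma_k = F.elu(k) + 1.0

    # Retrieve from the compressive memory built over past segments.
    a_mem = (sigma_q @ memory) / (sigma_q @ z + 1e-6)

    # Standard causal dot-product attention within the current segment.
    a_local = F.scaled_dot_product_attention(q, k, v, is_causal=True)

    # Linear memory update: accumulate this segment's keys and values.
    memory = memory + sigma_k.transpose(-2, -1) @ v
    z = z + sigma_k.sum(dim=-2, keepdim=True).transpose(-2, -1)

    # Per-head learned gate blends long-term and local attention.
    g = torch.sigmoid(beta).view(1, -1, 1, 1)
    return g * a_mem + (1.0 - g) * a_local, memory, z
```

The gate lets each head learn how much to draw from the long-term memory versus the local window; the paper also describes a delta-rule variant of the memory update, which this sketch omits.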
Alternatives and similar repositories for infini-mini-transformer
Users interested in infini-mini-transformer are comparing it to the repositories listed below
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆40 · Updated last year
- The complete training code of an open-source, high-performance Llama model, covering the full process from pre-training to RLHF ☆66 · Updated 2 years ago
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE (see the sketch after this list) ☆64 · Updated last year
- ☆107 · Updated last year
- SuperCLUE-Math6: an exploration of a new-generation, Chinese-native multi-turn, multi-step mathematical reasoning dataset ☆59 · Updated last year
- Code implementation of Baichuan's Dynamic NTK-ALiBi: inference on longer texts without fine-tuning ☆47 · Updated last year
- ☆48 · Updated last year
- ☆103 · Updated 3 weeks ago
- NTK-scaled version of the ALiBi position encoding in Transformers ☆68 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆122 · Updated 6 months ago
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆41 · Updated last year
- Counting-Stars (★) ☆83 · Updated 2 months ago
- Code for the paper “RankingGPT: Empowering Large Language Models in Text Ranking with Progressive Enhancement” ☆33 · Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need” ☆35 · Updated 7 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- Implementations of the online merging optimizers proposed in “Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment” ☆75 · Updated last year
- ☆83 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆157 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- ☆95 · Updated last year
- ☆36 · Updated 10 months ago
- ☆144 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆86 · Updated 8 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- ☆83 · Updated last year
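Several entries above (Dynamic NTK Scaling RoPE, Baichuan's Dynamic NTK-ALiBi, the NTK-scaled ALiBi) build on NTK-aware scaling of position encodings. As a rough sketch of the common community formulation for RoPE, assuming PyTorch and with illustrative names: when the sequence outgrows the trained context, the RoPE base is enlarged so that longer inputs can be handled without fine-tuning.

```python
import torch

def dynamic_ntk_rope_inv_freq(head_dim: int, seq_len: int,
                              max_trained_len: int = 4096,
                              base: float = 10000.0) -> torch.Tensor:
    """Inverse frequencies for dynamic NTK-scaled RoPE (illustrative sketch).

    When seq_len exceeds the trained context, the base grows by
    scale ** (d / (d - 2)), stretching the low-frequency (long-range)
    dimensions while barely touching the high-frequency ones.
    """
    scale = max(1.0, seq_len / max_trained_len)
    adjusted_base = base * scale ** (head_dim / (head_dim - 2))
    exponents = torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim
    return 1.0 / (adjusted_base ** exponents)
```

Because `scale` depends on the current sequence length, the frequencies are recomputed as generation proceeds, which is what makes the scheme "dynamic" rather than a fixed interpolation.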