ifromeast / LLMTrainer
A comparison of pretraining frameworks for LLMs
☆21 · Updated 3 months ago
Alternatives and similar repositories for LLMTrainer
Users interested in LLMTrainer are comparing it to the libraries listed below.
- ☆98 · Updated 7 months ago
- A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on the core code snippets. Feel free to… ☆55 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆56 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆95 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- NTK-scaled version of the ALiBi position encoding for Transformers. ☆68 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 11 months ago
- Complete training code for the open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆65 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆135 · Updated 3 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continual pre-training to improve … ☆32 · Updated 5 months ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning ☆47 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE (a minimal sketch of this scaling appears after this list) ☆64 · Updated last year
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆27 · Updated 9 months ago
- ☆46 · Updated 11 months ago
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆219 · Updated last year
- ☆14 · Updated last year
- Joint use of the CPO and SimPO methods for better reference-free preference learning ☆53 · Updated 9 months ago
- ☆84 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆44 · Updated 6 months ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 4 months ago
- Code for A New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆62 · Updated 2 months ago
- A 1.4B small LLM (sLLM) for Chinese and English - HammerLLM 🔨 ☆44 · Updated last year
- Comparison of the Lion and Adam optimizers (see the Lion sketch after this list) ☆61 · Updated 2 years ago
- Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation… ☆32 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
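
Several entries above (Scaling Laws of RoPE-based Extrapolation, An Experiment on Dynamic NTK Scaling RoPE, and the NTK-ALiBi variants) apply the same NTK idea: when the input exceeds the trained context, enlarge the position-encoding base so low-frequency dimensions are interpolated while high-frequency ones stay nearly intact. Below is a minimal sketch of the RoPE form of this trick, not code from any of the listed repos; the `max_trained_len` default of 4096 and the function names are assumptions, and the exact scaling formula varies between implementations.

```python
import torch

def ntk_scaled_inv_freq(dim: int, seq_len: int,
                        max_trained_len: int = 4096,  # assumed trained context
                        base: float = 10000.0) -> torch.Tensor:
    """Dynamic NTK scaling: grow the RoPE base when the current sequence
    exceeds the trained context, so low-frequency dimensions get
    interpolated while high-frequency ones are barely changed."""
    if seq_len > max_trained_len:
        alpha = seq_len / max_trained_len
        base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

def rope_cos_sin(dim: int, seq_len: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Build the cos/sin tables used to rotate query/key head dimensions."""
    inv_freq = ntk_scaled_inv_freq(dim, seq_len)
    t = torch.arange(seq_len, dtype=torch.float32)
    freqs = torch.outer(t, inv_freq)         # (seq_len, dim // 2)
    emb = torch.cat((freqs, freqs), dim=-1)  # (seq_len, dim)
    return emb.cos(), emb.sin()

# e.g. a model trained at 4096 tokens serving an 8192-token prompt:
cos, sin = rope_cos_sin(dim=128, seq_len=8192)
```

Because the base is recomputed from the current sequence length, short inputs reproduce the original RoPE exactly, which is why these methods need no fine-tuning.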
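The Lion and Adam comparison entry concerns the Lion optimizer from "Symbolic Discovery of Optimization Algorithms" (Chen et al., 2023). As a reference point for that comparison, here is a minimal PyTorch sketch of the Lion update rule; the hyperparameter defaults are the paper's suggestions, not settings from the listed repo.

```python
import torch

class Lion(torch.optim.Optimizer):
    """Minimal Lion sketch: one momentum buffer (vs. Adam's two) and a
    sign()-based update, so it needs roughly half the optimizer memory.
    Rule of thumb from the paper: use a ~3-10x smaller lr than Adam's."""

    def __init__(self, params, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
        super().__init__(params, dict(lr=lr, betas=betas, weight_decay=weight_decay))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            lr, (b1, b2), wd = group["lr"], group["betas"], group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "m" not in state:
                    state["m"] = torch.zeros_like(p)
                m = state["m"]
                # Update direction: sign of momentum interpolated with beta1.
                update = (b1 * m + (1 - b1) * p.grad).sign_()
                # Decoupled weight decay, as in AdamW.
                p.add_(update + wd * p, alpha=-lr)
                # The momentum EMA uses the second beta.
                m.mul_(b2).add_(p.grad, alpha=1 - b2)

# Usage: opt = Lion(model.parameters(), lr=1e-4, weight_decay=0.1)
```

The sign() makes every coordinate's step the same magnitude, which is what lets Lion trade Adam's per-coordinate adaptivity for lower memory and, in the paper's experiments, comparable quality.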