FreedomIntelligence / FastLLM
Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler].
☆41 · Updated last year
Alternatives and similar repositories for FastLLM
Users interested in FastLLM are comparing it to the libraries listed below.
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. Accepted paper at COLM 2024 ☆33 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- ☆36 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- FuseAI Project ☆87 · Updated 8 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- Code for “Scaling Laws of RoPE-based Extrapolation” ☆73 · Updated last year
- An experiment on dynamic NTK-scaling RoPE ☆64 · Updated last year
- Reformatted Alignment ☆113 · Updated last year
- Implementation of “LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models” ☆40 · Updated 10 months ago
- The latest version of llama3, with the source code explained line by line in Chinese ☆22 · Updated last year
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆59 · Updated last year
- ☆95 · Updated 9 months ago
- Code for the paper “Towards the Law of Capacity Gap in Distilling Language Models” ☆102 · Updated last year
- ☆83 · Updated last year
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 · Updated last year
- The source code and dataset from the paper “Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar…” ☆52 · Updated 11 months ago
- Code for the paper “Decoupled Planning and Execution: A Hierarchical Reasoning Framework for Deep Search” ☆58 · Updated 3 months ago
- ☆77 · Updated last month
- Code for the paper “RankingGPT: Empowering Large Language Models in Text Ranking with Progressive Enhancement” ☆35 · Updated last year
- ☆49 · Updated last year
- ☆90 · Updated 4 months ago
- ☆97 · Updated last month
- Automatic prompt optimization framework for multi-step agent tasks ☆34 · Updated 10 months ago
- Code for the KaLM-Embedding models ☆91 · Updated 3 months ago
- ☆40 · Updated last year
- Implementations of the online merging optimizers proposed in “Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment” ☆76 · Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need” ☆36 · Updated 9 months ago
- ☆59 · Updated 11 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year