FreedomIntelligence / FastLLM
Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler)
☆36 · Updated last year
Alternatives and similar repositories for FastLLM:
Users interested in FastLLM are comparing it to the libraries listed below.
- FuseAI Project ☆83 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆128 · Updated 9 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 2 months ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 10 months ago
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆36 · Updated 2 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆29 · Updated 9 months ago
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆75 · Updated last year
- An experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated 10 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated 10 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 4 months ago
- Hammer: Robust Function-Calling for On-Device Language Models via Function Masking ☆63 · Updated 3 weeks ago
- ☆42 · Updated 2 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆44 · Updated 7 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆53 · Updated 5 months ago
- ☆31 · Updated 8 months ago
- [NAACL 2025] Representing Rule-based Chatbots with Transformers ☆19 · Updated last month
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆53 · Updated last month
- Leveraging passage embeddings for efficient listwise reranking with large language models ☆38 · Updated 3 months ago
- ☆30 · Updated 7 months ago
- ☆36 · Updated 10 months ago
- ☆36 · Updated 6 months ago
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 · Updated 11 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆70 · Updated last year
- Official repository for the paper "BaichuanSEED: Sharing the Potential of ExtensivE Data Collection and Deduplication by Introducing a Compet… ☆18 · Updated 6 months ago
- ☆92 · Updated 3 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year