18907305772 / FuseAI
FuseAI Project
☆87 · Updated 6 months ago
Alternatives and similar repositories for FuseAI
Users interested in FuseAI are comparing it to the libraries listed below.
- Reformatted Alignment ☆113 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Updated 10 months ago
- ☆94 · Updated 7 months ago
- ☆87 · Updated 8 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆122 · Updated 6 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆186 · Updated last year
- ☆90 · Updated 2 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆101 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆50 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025]☆169Updated 3 weeks ago
- ☆103Updated 7 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code"☆63Updated 3 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples'☆79Updated last year
- ☆43Updated 9 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆136Updated last year
- Official repository for Inheritune ☆112 · Updated 5 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CUDA fusion kernels+compiler] ☆40 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆85 · Updated last year
- ☆121 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [COLM 2024] ☆33 · Updated last year
- ☆47 · Updated last month
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆116 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆252 · Updated 7 months ago
- We aim to provide the best references for searching, selecting, and synthesizing high-quality, large-scale data for post-training your LLMs ☆57 · Updated 10 months ago
- Data preparation code for Amber 7B LLM ☆91 · Updated last year
- ☆36 · Updated 10 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆146 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆146 · Updated 9 months ago