cofe-ai / FLM-101B
☆12 · Updated last year
Alternatives and similar repositories for FLM-101B
Users that are interested in FLM-101B are comparing it to the libraries listed below
- ☆36 · Updated last year
- ☆96 · Updated last year
- ☆17 · Updated last year
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama (https://arxiv.org/abs/2408.09333v2) ☆126 · Updated 10 months ago
- [NAACL 2025] Representing Rule-based Chatbots with Transformers ☆22 · Updated 7 months ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- Code and data for CoachLM, an automatic instruction-revision approach for LLM instruction tuning ☆60 · Updated last year
- Our 2nd-gen LMM ☆34 · Updated last year
- ☆95 · Updated 9 months ago
- Reformatted Alignment ☆113 · Updated last year
- FuseAI Project ☆87 · Updated 8 months ago
- ☆29 · Updated last year
- Code for KaLM-Embedding models ☆91 · Updated 2 months ago
- ☆63 · Updated last year
- Code for the paper "RankingGPT: Empowering Large Language Models in Text Ranking with Progressive Enhancement" ☆34 · Updated last year
- A paper list of multilingual pre-trained models (continually updated) ☆23 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024) ☆33 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆63 · Updated last year
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆41 · Updated last year
- ☆16 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE (a minimal sketch of the scaling rule appears after this list) ☆64 · Updated last year
- ☆49 · Updated 11 months ago
- The simplest reproduction of R1-style results on a small model, illustrating the essential idea behind o1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the content of the thinking process is the core of AGI/ASI. ☆44 · Updated 7 months ago
- ☆71 · Updated 9 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆190 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT examples, 200,000 English multi-turn SFT examples, and … ☆18 · Updated last year
- A simple MLLM that surpassed QwenVL-Max with open-source data only, using a 14B LLM ☆38 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" (a minimal sketch of its Λ-shaped mask follows below) ☆40 · Updated 10 months ago
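Two of the entries above center on techniques compact enough to sketch. First, the Dynamic NTK Scaling RoPE experiment: when inference length exceeds the training length, the RoPE base is enlarged on the fly so the low frequencies stretch to cover the longer context while the high frequencies stay close to their trained values. Below is a minimal sketch of the widely used scaling rule (the parameterization popularized by Hugging Face Transformers); the function name and the default `dim`, `max_train_len`, and `alpha` values are illustrative, not taken from the repository.

```python
import torch

def dynamic_ntk_rope_freqs(seq_len: int,
                           dim: int = 128,            # per-head dimension (illustrative)
                           base: float = 10000.0,     # standard RoPE base
                           max_train_len: int = 4096, # pretraining context (illustrative)
                           alpha: float = 1.0) -> torch.Tensor:
    """RoPE inverse frequencies with dynamic NTK scaling.

    Past the training length, the base grows with the observed sequence
    length; the exponent dim / (dim - 2) keeps the highest frequency
    (local resolution) nearly unchanged while stretching the low ones.
    """
    if seq_len > max_train_len:
        base = base * ((alpha * seq_len / max_train_len) - (alpha - 1)) ** (dim / (dim - 2))
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

# Rotation angles for each (position, frequency) pair; their cos/sin
# rotate the query/key channel pairs exactly as in standard RoPE.
seq_len = 16384
angles = torch.outer(torch.arange(seq_len, dtype=torch.float32),
                     dynamic_ntk_rope_freqs(seq_len))
cos, sin = angles.cos(), angles.sin()
```

Because the base is recomputed from the current `seq_len`, short sequences keep the original RoPE behavior and only longer ones are rescaled, which is what makes the method usable without fine-tuning.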
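Second, the core of LM-Infinite is a Λ-shaped attention mask: every token attends to a handful of global tokens at the start of the sequence plus a local causal window, which lets a model trained at short lengths decode far beyond them. A minimal sketch follows, assuming boolean masks applied before softmax; `n_global` and `n_local` are illustrative defaults, and the paper's additional bound on effective relative distance is omitted here.

```python
import torch

def lambda_shaped_mask(seq_len: int, n_global: int = 10, n_local: int = 2048) -> torch.Tensor:
    """Boolean attention mask (True = may attend) in the LM-Infinite style.

    Token i attends causally to (a) the first n_global tokens and
    (b) any token within n_local positions behind it.
    """
    i = torch.arange(seq_len).unsqueeze(1)   # query positions (column)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions (row)
    causal = j <= i                          # never attend to the future
    keep = (j < n_global) | ((i - j) < n_local)
    return causal & keep

# Example: mask attention scores before the softmax.
mask = lambda_shaped_mask(8, n_global=2, n_local=3)
scores = torch.randn(8, 8)
attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)
```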