OpenBMB / BMTrain
Efficient Training (including pre-training and fine-tuning) for Big Models
☆611 · Updated last month
Alternatives and similar repositories for BMTrain
Users interested in BMTrain are comparing it to the libraries listed below.
- Model Compression for Big Models ☆165 · Updated 2 years ago
- Best practice for training LLaMA models in Megatron-LM ☆660 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆414 · Updated last year
- Efficient Inference for Big Models ☆588 · Updated 2 years ago
- Efficient, Low-Resource, Distributed transformer implementation based on BMTrain ☆263 · Updated last year
- ☆460 · Updated last year
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆431 · Updated this week
- Naive Bayes-based Context Extension ☆324 · Updated 10 months ago
- LongBench v2 and LongBench (ACL '25 & '24) ☆997 · Updated 9 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆407 · Updated 2 months ago
- [NeurIPS 2023] RRHF & Wombat ☆811 · Updated 2 years ago
- ☆322 · Updated last year
- ☆175 · Updated this week
- ☆84 · Updated 2 years ago
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆268 · Updated last year
- A purer tokenizer with a higher compression rate ☆485 · Updated 10 months ago
- ☆281 · Updated last year
- ☆765 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models ☆338 · Updated 6 months ago
- Accelerate inference without tears ☆338 · Updated last week
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆995 · Updated 10 months ago
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA 2, LLaMA 3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment for large models ☆615 · Updated 8 months ago
- Live Training for Open-source Big Models ☆506 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval, an open-source framework for evaluating foundation models ☆251 · Updated 11 months ago
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud ☆1,395 · Updated this week
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,037 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆1,422 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models ☆445 · Updated last year
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated 2 years ago