OpenBMB / BMCook
Model Compression for Big Models
☆162 · Updated last year
Alternatives and similar repositories for BMCook
Users interested in BMCook are comparing it to the libraries listed below.
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆589 · Updated this week
- Efficient, Low-Resource, Distributed transformer implementation based on BMTrain ☆256 · Updated last year
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism ☆219 · Updated last year
- Efficient Inference for Big Models ☆583 · Updated 2 years ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated last year
- ☆84 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆65 · Updated 2 years ago
- Collaborative Training of Large Language Models in an Efficient Way ☆415 · Updated 8 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆95 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆346 · Updated 3 months ago
- How to train an LLM tokenizer ☆144 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆240 · Updated 6 months ago
- Naive Bayes-based Context Extension ☆326 · Updated 5 months ago
- ☆168 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆262 · Updated 9 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- ☆459 · Updated 11 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆324 · Updated 7 months ago
- A unified tokenization tool for Images, Chinese and English. ☆152 · Updated 2 years ago
- ☆162 · Updated last month
- ☆79 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆650 · Updated last year
- Implementation of Chinese ChatGPT ☆287 · Updated last year
- ☆128 · Updated last year
- Rectified Rotary Position Embeddings ☆367 · Updated 11 months ago
- ☆280 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- Text deduplication ☆71 · Updated 11 months ago
- Train a Chinese vocabulary with BPE via sentencepiece and use it in transformers (a minimal sketch follows this list). ☆117 · Updated last year
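
For readers unfamiliar with that last workflow, here is a minimal, illustrative sketch of training a BPE vocabulary with sentencepiece and loading it into 🤗 transformers. The corpus path `corpus_zh.txt` and the vocabulary size are hypothetical placeholders, not values taken from any repository above.

```python
import sentencepiece as spm
from transformers import LlamaTokenizer

# Train a BPE model on a raw Chinese corpus (one sentence per line).
# "corpus_zh.txt" and vocab_size are illustrative placeholders.
spm.SentencePieceTrainer.train(
    input="corpus_zh.txt",
    model_prefix="zh_bpe",        # writes zh_bpe.model and zh_bpe.vocab
    vocab_size=32000,
    model_type="bpe",
    character_coverage=0.9995,    # high coverage recommended for CJK scripts
)

# LlamaTokenizer is sentencepiece-backed, so it can load the
# trained .model file directly.
tokenizer = LlamaTokenizer(vocab_file="zh_bpe.model")
print(tokenizer.tokenize("今天天气很好"))
```

In practice, a vocabulary trained this way is often merged into an existing LLaMA vocabulary rather than used standalone, so the base model's embedding matrix can be resized instead of retrained from scratch.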