OpenMOSS / CoLLiE
Collaborative Training of Large Language Models in an Efficient Way
☆411 · Updated 5 months ago
Alternatives and similar repositories for CoLLiE:
Users interested in CoLLiE are comparing it to the libraries listed below
- Naive Bayes-based Context Extension ☆320 · Updated 2 months ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆215 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆255 · Updated 6 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆577 · Updated 7 months ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆277 · Updated last year
- LongBench v2 and LongBench (ACL 2024) ☆782 · Updated last month
- Rectified Rotary Position Embeddings ☆351 · Updated 9 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆412 · Updated 4 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆233 · Updated 3 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆534 · Updated 2 months ago
- [NeurIPS 2023] RRHF & Wombat ☆799 · Updated last year
- A purer tokenizer with a higher compression ratio ☆471 · Updated 2 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆337 · Updated 5 months ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀, dedicated to deeply understanding, discussing, and implementing the techniques, principles, and applications of large models. ☆298 · Updated 7 months ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆112 · Updated last year
- Efficient, Low-Resource, Distributed transformer implementation based on BMTrain ☆246 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆319 · Updated 7 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆163 · Updated last year
- Implementation of Chinese ChatGPT ☆287 · Updated last year
- How to train an LLM tokenizer ☆140 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆644 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆585 · Updated 11 months ago
- Chinese Alpaca instruction fine-tuning dataset ☆392 · Updated last year
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆281 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆241 · Updated 2 months ago
- Firefly Chinese LLaMA-2 large model, supporting continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆406 · Updated last year