HuangLK / transpeeder
Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism.
☆208 · Updated last year
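Transpeeder's core recipe is DeepSpeed pipeline parallelism: the model is laid out as a flat list of layers, DeepSpeed partitions those layers into pipeline stages, and training is driven through the pipeline engine one micro-batched step at a time. Below is a minimal sketch of that general wiring, not transpeeder's actual code; the toy `Block` layers, stage count, config values, and dummy data iterator are illustrative assumptions.

```python
# Minimal sketch of DeepSpeed pipeline-parallel training in general, NOT transpeeder's code.
# Block, the stage count, the config values, and the toy data are illustrative assumptions.
# Launch with the DeepSpeed launcher, e.g.: deepspeed --num_gpus=2 train_pipe.py
import torch
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule


class Block(nn.Module):
    """Stand-in for a transformer block; pipeline stages exchange a single tensor."""
    def __init__(self, dim=512):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.ff(x)


def loss_fn(outputs, labels):
    # The loss is computed on the last pipeline stage only.
    return nn.functional.mse_loss(outputs, labels)


deepspeed.init_distributed()  # set up torch.distributed from the launcher's env vars

# The model is a flat list of layers; DeepSpeed splits it into num_stages pipeline stages.
model = PipelineModule(layers=[Block() for _ in range(8)], num_stages=2, loss_fn=loss_fn)

ds_config = {
    "train_batch_size": 32,
    "train_micro_batch_size_per_gpu": 4,   # micro-batches keep both stages busy
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

engine, _, _, _ = deepspeed.initialize(model=model,
                                       model_parameters=model.parameters(),
                                       config=ds_config)


def toy_batches(dim=512, micro_bs=4):
    # The pipeline engine pulls (inputs, labels) pairs from an iterator and handles
    # micro-batch scheduling, forward/backward interleaving, and the optimizer step.
    while True:
        x = torch.randn(micro_bs, dim)
        yield x, x  # toy reconstruction target


loss = engine.train_batch(data_iter=toy_batches())
```

With `--num_gpus=2` and `num_stages=2` there is no data-parallel replication, so the global batch of 32 becomes eight micro-batches of 4; pipelining those micro-batches is what keeps both stages busy instead of one idling while the other computes.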
Alternatives and similar repositories for transpeeder:
Users interested in transpeeder are comparing it to the libraries listed below.
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆108 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆91 · Updated 10 months ago
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆254 · Updated 4 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆224 · Updated last year
- Rectified Rotary Position Embeddings ☆344 · Updated 6 months ago
- ☆83 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆410 · Updated 3 months ago
- ☆275 · Updated 7 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆506 · Updated last week
- ☆161 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆305 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning (a rough sketch of the idea appears after this list) ☆388 · Updated 7 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models ☆315 · Updated 3 months ago
- Naive Bayes-based Context Extension ☆316 · Updated last week
- Model Compression for Big Models ☆151 · Updated last year
- [ACL 2024] LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding ☆692 · Updated 3 weeks ago
- ☆456 · Updated 6 months ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆311 · Updated 11 months ago
- ☆173 · Updated last year
- ☆299 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆159 · Updated last year
- [NIPS2023] RRHF & Wombat ☆800 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆38 · Updated 9 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆291 · Updated 2 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆363 · Updated 5 months ago
- Chinese instruction-tuning datasets ☆122 · Updated 8 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆321 · Updated 11 months ago
- The aim of this repository is to use LLaMA to reproduce and enhance Stanford Alpaca ☆95 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆224 · Updated last month
- A unified tokenization tool for Images, Chinese and English. ☆151 · Updated last year
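As a small illustration of one entry above, the NEFTune item boils down to injecting uniform noise into the token-embedding outputs while fine-tuning, scaled by `alpha / sqrt(seq_len * hidden_dim)`. The sketch below follows the paper's description rather than the linked repository's code; the forward-hook wiring and `alpha=5.0` are assumptions.

```python
# Rough sketch of the NEFTune idea: add uniform noise to the token-embedding outputs
# during instruction fine-tuning. Based on the paper's description, not the linked
# repository's code; the hook wiring and alpha=5.0 are assumptions.
import math
import torch


def neftune_hook(module, inputs, output, alpha=5.0):
    """Forward hook on the embedding layer; noise is applied only in training mode."""
    if module.training:
        # output shape: (batch, seq_len, hidden_dim); scale = alpha / sqrt(L * d)
        scale = alpha / math.sqrt(output.size(1) * output.size(2))
        output = output + torch.empty_like(output).uniform_(-scale, scale)
    return output


# Usage (hypothetical model object): register the hook on the input embeddings
# before fine-tuning, e.g.
# handle = model.get_input_embeddings().register_forward_hook(neftune_hook)
```

Because the hook checks `module.training`, the noise disappears automatically once the model is switched to evaluation mode.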