wangguojim / LargeScale
☆19 Updated last year
Alternatives and similar repositories for LargeScale
Users interested in LargeScale are comparing it to the libraries listed below.
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 Updated last year
- ☆79 Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 Updated 2 years ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 Updated 4 years ago
- Training library for Megatron-based models ☆74 Updated last week
- A more efficient GLM implementation! ☆54 Updated 2 years ago
- Simple Dynamic Batching Inference ☆145 Updated 3 years ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 Updated last year
- A personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both model and train… ☆58 Updated last year
- Implement BERT in pure C++ ☆36 Updated 5 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆77 Updated last year
- Built upon Megatron-DeepSpeed and HuggingFace Trainer, EasyLLM has reorganized the code logic with a focus on usability. While enhancing … ☆49 Updated last year
- SuperCLUE-Math6: exploring a new generation of natively Chinese multi-turn, multi-step mathematical reasoning datasets ☆60 Updated last year
- Transformer-related optimization, including BERT, GPT ☆17 Updated 2 years ago
- Distributed DataLoader for PyTorch based on Ray ☆24 Updated 3 years ago
- Distributed IO-aware Attention algorithm ☆21 Updated 2 weeks ago
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆97 Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 Updated last year
- Transformer-related optimization, including BERT, GPT ☆59 Updated 2 years ago
- NTK-scaled version of ALiBi position encoding in Transformer. ☆69 Updated 2 years ago
- ☆24 Updated last year
- A unified tokenization tool for Images, Chinese and English. ☆151 Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 Updated 2 years ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆70 Updated 2 years ago
- Finetune CPM-1 ☆24 Updated 4 years ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆106 Updated 5 months ago
- Summary of system papers/frameworks/code/tools for training or serving large models ☆57 Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning ☆49 Updated 2 years ago
- ☆121 Updated last year
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism ☆224 Updated last year