wangguojim / LargeScale
☆19 · Updated last year
Alternatives and similar repositories for LargeScale
Users interested in LargeScale are comparing it to the libraries listed below.
- Inference framework for MoE layers based on TensorRT with Python binding · ☆41 · Updated 3 years ago
- Odysseus: Playground of LLM Sequence Parallelism · ☆69 · Updated 11 months ago
- Contextual Position Encoding but with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) · ☆22 · Updated 11 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆69 · Updated last year
- Distributed IO-aware Attention algorithm · ☆20 · Updated 8 months ago
- Code for Scaling Laws of RoPE-based Extrapolation · ☆73 · Updated last year
- ☆79 · Updated last year
- Distributed DataLoader for PyTorch based on Ray · ☆24 · Updated 3 years ago
- ☆22 · Updated last year
- ☆23 · Updated last year
- Finetune CPM-1 · ☆24 · Updated 3 years ago
- Transformer-related optimization, including BERT, GPT · ☆17 · Updated last year
- A personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… · ☆56 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE · ☆64 · Updated last year
- Implement BERT in pure C++ · ☆36 · Updated 5 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP · ☆95 · Updated last year
- A more efficient GLM implementation! · ☆55 · Updated 2 years ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE · ☆62 · Updated last year
- Transformer-related optimization, including BERT, GPT · ☆39 · Updated 2 years ago
- Summary of system papers/frameworks/code/tools for training or serving large models · ☆56 · Updated last year
- ☆16 · Updated last year
- Vocabulary Parallelism · ☆19 · Updated 2 months ago
- Manages the vllm-nccl dependency · ☆17 · Updated 11 months ago
- Code for the paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation", published at NeurIPS 202… · ☆46 · Updated 2 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆23 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆19 · Updated last year
- ☆20 · Updated 3 weeks ago
- Baichuan implementation of Dynamic NTK-ALiBi: inference on longer texts without fine-tuning · ☆47 · Updated last year
- Sequence-level 1F1B schedule for LLMs · ☆17 · Updated 11 months ago
- Nano repo for RL training of LLMs · ☆56 · Updated 2 weeks ago