hpcaitech / PaLM-colossalai
Scalable PaLM implementation in PyTorch
☆190 · Updated 2 years ago
Alternatives and similar repositories for PaLM-colossalai
Users interested in PaLM-colossalai are comparing it to the libraries listed below.
- Examples of training models with hybrid parallelism using ColossalAI ☆340 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- ☆104 · Updated 2 years ago
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated last year
- ☆96 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆564 · Updated 10 months ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆313 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆95 · Updated 2 years ago
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆258 · Updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆214 · Updated last year
- A unified tokenization tool for Images, Chinese and English. ☆151 · Updated 2 years ago
- 📑 Dive into Big Model Training ☆114 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆197 · Updated 2 years ago
- GPTQ inference Triton kernel ☆306 · Updated 2 years ago
- Techniques used to run BLOOM at inference in parallel ☆37 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆339 · Updated 2 months ago
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆97 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated last year
- ☆121 · Updated last year
- Inspired by Google's C4, a series of colossal clean data cleaning scripts focused on CommonCrawl data processing. Including Chinese… ☆130 · Updated 2 years ago
- ☆180 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model ☆76 · Updated 2 years ago
- The aim of this repository is to utilize LLaMA to reproduce and enhance the Stanford Alpaca ☆98 · Updated 2 years ago
- Minimal code to train a Large Language Model (LLM). ☆172 · Updated 3 years ago
- LLaMa Tuning with Stanford Alpaca Dataset using Deepspeed and Transformers ☆51 · Updated 2 years ago