hpcaitech / PaLM-colossalai
Scalable PaLM implementation in PyTorch
☆190 · Dec 19, 2022 · Updated 3 years ago
Alternatives and similar repositories for PaLM-colossalai
Users interested in PaLM-colossalai are comparing it to the libraries listed below.
- Performance benchmarking with ColossalAI ☆38 · Jul 6, 2022 · Updated 3 years ago
- Sky Computing: Accelerating Geo-distributed Computing in Federated Learning ☆90 · Nov 22, 2022 · Updated 3 years ago
- Examples of training models with hybrid parallelism using ColossalAI ☆339 · Mar 23, 2023 · Updated 2 years ago
- Large-scale model inference. ☆627 · Sep 12, 2023 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆828 · Nov 9, 2022 · Updated 3 years ago
- ☆11 · Oct 3, 2021 · Updated 4 years ago
- An example of fine-tuning KoGPT with OSLO. ☆23 · Aug 26, 2022 · Updated 3 years ago
- A project for Korean automatic word spacing ☆12 · Aug 3, 2020 · Updated 5 years ago
- An RLHF learning environment for Korean ☆25 · Sep 25, 2023 · Updated 2 years ago
- Converts 모두의 말뭉치 (Modu Corpus) data into a format convenient for analysis. ☆11 · Mar 2, 2022 · Updated 3 years ago
- A utility for storing and reading files for Korean LM training 💾 ☆35 · Oct 15, 2025 · Updated 3 months ago
- ☆19 · Sep 20, 2022 · Updated 3 years ago
- ☆14 · May 3, 2022 · Updated 3 years ago
- ☆24 · Nov 22, 2022 · Updated 3 years ago
- Korean Named Entity Corpus ☆25 · May 12, 2023 · Updated 2 years ago
- This repo is for Korean wiki table question answering datasets described in the paper of Korean-Specific Dataset for Table Question Answe… ☆91 · Oct 22, 2024 · Updated last year
- OSLO: Open Source for Large-scale Optimization ☆175 · Sep 9, 2023 · Updated 2 years ago
- 🦕 A library that handles everything with 🤗 and supports batching to models in PORORO ☆37 · Jun 16, 2022 · Updated 3 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆300 · Jun 7, 2025 · Updated 8 months ago
- Megatron LM 11B on Huggingface Transformers ☆27 · Jul 11, 2021 · Updated 4 years ago
- An ELECTRA-based Korean conversational language model ☆53 · Aug 4, 2021 · Updated 4 years ago
- OSLO: Open Source framework for Large-scale model Optimization ☆309 · Aug 25, 2022 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆190 · Jun 24, 2022 · Updated 3 years ago
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Jun 8, 2023 · Updated 2 years ago
- An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library. ☆21 · Nov 28, 2022 · Updated 3 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆107 · Nov 22, 2022 · Updated 3 years ago
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Aug 12, 2023 · Updated 2 years ago
- Open-Retrieval Conversational Machine Reading: A new setting & OR-ShARC dataset ☆13 · Nov 19, 2022 · Updated 3 years ago
- Local Attention - Flax module for Jax ☆22 · May 26, 2021 · Updated 4 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Jan 4, 2023 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,433 · Mar 20, 2024 · Updated last year
- Korean Math Word Problems ☆59 · Jan 14, 2022 · Updated 4 years ago
- CareCall for Seniors: Role-Specified Open-Domain Dialogue dataset generated by leveraging LLMs (NAACL 2022). ☆60 · May 3, 2022 · Updated 3 years ago
- Deploy KoGPT with Triton Inference Server ☆14 · Nov 18, 2022 · Updated 3 years ago
- Visualize the GPU memory footprint of PyTorch during DNN training ☆11 · Nov 17, 2022 · Updated 3 years ago
- Natural Language Processing Tasks and Examples. ☆61 · Aug 17, 2022 · Updated 3 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Nov 19, 2024 · Updated last year
- Research and development for optimizing transformers ☆131 · Feb 16, 2021 · Updated 4 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆116 · Oct 27, 2022 · Updated 3 years ago