hpcaitech / PaLM-colossalai
Scalable PaLM implementation in PyTorch
☆192 · Updated last year
Related projects
Alternatives and complementary repositories for PaLM-colossalai
- Performance benchmarking with ColossalAI · ☆39 · Updated 2 years ago
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers · ☆195 · Updated 3 months ago
- GPTQ inference Triton kernel · ☆284 · Updated last year
- Examples of training models with hybrid parallelism using ColossalAI · ☆336 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks · ☆206 · Updated 10 months ago
- ☆111 · Updated 8 months ago
- ☆102 · Updated last year
- Fast Inference Solutions for BLOOM · ☆560 · Updated last month
- ☆94 · Updated last year
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 · ☆176 · Updated last month
- Explorations into some recent techniques surrounding speculative decoding · ☆211 · Updated last year
- A Python library that transfers PyTorch tensors between CPU and NVMe · ☆98 · Updated last week
- Microsoft Automatic Mixed Precision Library · ☆525 · Updated last month
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" · ☆254 · Updated 2 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference · ☆355 · Updated last week
- A unified tokenization tool for Images, Chinese and English. · ☆150 · Updated last year
- ☆88 · Updated 2 months ago
- Inference script for Meta's LLaMA models using the Hugging Face wrapper · ☆111 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) · ☆285 · Updated last month
- An experimental implementation of the retrieval-enhanced language model · ☆75 · Updated last year
- DSIR large-scale data selection framework for language model training · ☆230 · Updated 7 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference · ☆357 · Updated this week
- Simple implementation of Speculative Sampling in NumPy for GPT-2. · ☆89 · Updated last year
- [NeurIPS'23] Speculative Decoding with Big Little Decoder · ☆86 · Updated 9 months ago
- Zero Bubble Pipeline Parallelism · ☆281 · Updated last week
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) · ☆76 · Updated last month
- Code used for sourcing and cleaning the BigScience ROOTS corpus · ☆306 · Updated last year
- Scaling Data-Constrained Language Models · ☆321 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. · ☆68 · Updated 4 months ago