Fast Inference Solutions for BLOOM
☆566, updated Oct 9, 2024
Alternatives and similar repositories for transformers-bloom-inference
Users interested in transformers-bloom-inference are comparing it to the libraries listed below.
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆1,436, updated Mar 20, 2024)
- Techniques used to run BLOOM at inference in parallel (☆37, updated Oct 21, 2022)
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. (☆2,097, updated Jun 30, 2025)
- ☆66, updated Aug 2, 2022
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆2,230, updated Aug 14, 2025)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,741, updated Jan 8, 2024)
- Crosslingual Generalization through Multitask Finetuning (☆537, updated Sep 22, 2024)
- This repo contains the data preparation, tokenization, training and inference code for BLOOMChat. BLOOMChat is a 176 billion parameter mu… (☆584, updated Oct 10, 2023)
- Transformer-related optimization, including BERT, GPT (☆6,398, updated Mar 27, 2024)
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. (☆1,008, updated Jul 29, 2024)
- ☆39, updated Oct 3, 2022
- Example models using DeepSpeed (☆6,791, updated Feb 7, 2026)
- Large Language Model Text Generation Inference (☆10,788, updated Jan 8, 2026)
- ☆1,560, updated Feb 20, 2026
- Ongoing research training transformer models at scale (☆15,461, updated this week)
- Accessible large language models via k-bit quantization for PyTorch. (☆7,997, updated this week)
- Pipeline for pulling and processing online language model pretraining data from the web (☆177, updated Jul 31, 2023)
- Repo for external large-scale work (☆6,543, updated Apr 27, 2024)
- Scaling Data-Constrained Language Models (☆342, updated Jun 28, 2025)
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… (☆3,305, updated Feb 9, 2026)
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) (☆7,669, updated Jul 25, 2023)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (☆9,513, updated this week)
- Instruction Tuning with GPT-4 (☆4,341, updated Jun 11, 2023)
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries (☆7,395, updated Feb 3, 2026)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. (☆5,027, updated Apr 11, 2025)
- Train transformer language models with reinforcement learning. (☆17,460, updated this week)
- Training and serving large-scale neural networks with auto parallelization. (☆3,183, updated Dec 9, 2023)
- 4-bit quantization of LLaMA using GPTQ (☆3,074, updated Jul 13, 2024)
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" (☆2,261, updated Mar 27, 2024)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,176, updated this week)
- PyTorch extensions for high-performance and large-scale training. (☆3,400, updated Apr 26, 2025)
- An open collection of implementation tips, tricks and resources for training large language models (☆498, updated Mar 8, 2023)
- Running large language models on a single GPU for throughput-oriented scenarios. (☆9,382, updated Oct 28, 2024)
- Fast and memory-efficient exact attention (☆22,361, updated this week)
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment (☆791, updated Apr 24, 2023)
- Aligning pretrained language models with instruction data generated by themselves. (☆4,580, updated Mar 27, 2023)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (☆20,678, updated Feb 24, 2026)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,818, updated Jun 17, 2025)
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 (☆1,687, updated Oct 23, 2024)
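Several of the repositories above (bitsandbytes, AutoGPTQ, GPTQ-for-LLaMa, FlexGen) center on weight quantization as a route to fast, memory-efficient inference. As a rough illustration of the basic idea only — not the actual algorithm any of these libraries implements, and with function names invented for this sketch — here is a per-tensor absmax int8 quantize/dequantize round trip in plain Python:

```python
# Illustrative per-tensor absmax int8 quantization sketch.
# Real libraries (bitsandbytes, GPTQ, etc.) use far more refined schemes
# such as per-channel scales, outlier handling, and error compensation.

def absmax_quantize(weights):
    """Map floats to int8 range [-127, 127] using the absolute maximum as scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.25, 0.03, 2.0]
q, scale = absmax_quantize(weights)
approx = dequantize(q, scale)
# q holds small integers (1 byte each instead of 4);
# approx differs from weights by at most scale / 2 per element.
```

Storing `q` plus a single float `scale` is what shrinks the memory footprint; the dequantized weights are only approximate, which is the accuracy/efficiency trade-off the GPTQ-style repositories above work to minimize.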