facebookresearch / fairscale
PyTorch extensions for high performance and large scale training.
☆ 3,331 · Updated last month
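For context, here is a minimal sketch of the kind of training-scale feature fairscale provides: wrapping a model in its `FullyShardedDataParallel` so that parameters, gradients, and optimizer state are sharded across data-parallel workers. This is an illustration under stated assumptions (a distributed process group already initialized, e.g. via `torchrun`, and a CUDA device available), not code taken from the repository's docs.

```python
# Illustrative sketch only: shard a small model with fairscale's FSDP.
# Assumes torch.distributed has been initialized (e.g. launched with torchrun)
# and that a CUDA device is available.
import torch
import torch.nn as nn
from fairscale.nn import FullyShardedDataParallel as FSDP

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).cuda()
model = FSDP(model)  # parameters, grads, and optimizer state are sharded across ranks

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
inputs = torch.randn(8, 1024, device="cuda")

loss = model(inputs).sum()
loss.backward()      # FSDP gathers/reduces shards as needed during backward
optimizer.step()
```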
Alternatives and similar repositories for fairscale
Users interested in fairscale are comparing it to the libraries listed below.
- Transformer related optimization, including BERT, GPT · ☆ 6,211 · Updated last year
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… · ☆ 8,839 · Updated this week · see the usage sketch after this list
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. · ☆ 2,639 · Updated 3 weeks ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… · ☆ 2,491 · Updated this week
- Ongoing research training transformer models at scale · ☆ 12,600 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… · ☆ 2,942 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆ 2,088 · Updated 2 months ago
- Machine learning metrics for distributed, scalable PyTorch applications. · ☆ 2,291 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. · ☆ 7,142 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. · ☆ 1,051 · Updated last year
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. · ☆ 1,206 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆ 1,395 · Updated last year
- FFCV: Fast Forward Computer Vision (and other ML workloads!) · ☆ 2,937 · Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… · ☆ 1,571 · Updated last year
- VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images. · ☆ 3,284 · Updated last year
- A fast MoE impl for PyTorch · ☆ 1,744 · Updated 4 months ago
- functorch is JAX-like composable function transforms for PyTorch. · ☆ 1,432 · Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) · ☆ 8,991 · Updated last month
- A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch · ☆ 8,686 · Updated this week
- Enabling PyTorch on XLA Devices (e.g. Google TPU) · ☆ 2,623 · Updated this week
- maximal update parametrization (µP) · ☆ 1,541 · Updated 11 months ago
- Toolbox of models, callbacks, and datasets for AI/ML researchers. · ☆ 1,730 · Updated 2 weeks ago
- PyTorch native quantization and sparsity for training and inference · ☆ 2,114 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. · ☆ 2,020 · Updated 2 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT · ☆ 2,783 · Updated this week
- Mesh TensorFlow: Model Parallelism Made Easier · ☆ 1,608 · Updated last year
- Serve, optimize and scale PyTorch models in production · ☆ 4,336 · Updated this week
- Foundation Architecture for (M)LLMs · ☆ 3,084 · Updated last year
- Training and serving large-scale neural networks with auto parallelization. · ☆ 3,136 · Updated last year
- Fast and memory-efficient exact attention · ☆ 17,846 · Updated this week
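Among these alternatives, 🤗 Accelerate covers the same distributed-training ground as fairscale with the smallest code change to an existing training loop. Below is a minimal, self-contained sketch of its workflow; it is my illustration of the library's core `Accelerator` API rather than code from the project's docs, and it assumes the script is launched with `accelerate launch`.

```python
# Illustrative sketch of the 🤗 Accelerate training loop.
# Assumes torch and accelerate are installed; run with: accelerate launch train.py
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

model = nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
dataloader = DataLoader(dataset, batch_size=16)

accelerator = Accelerator()  # picks up device, distributed, and mixed-precision config
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # use instead of loss.backward() so AMP/DDP hooks run
    optimizer.step()
```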