microsoft / torchscale
Foundation Architecture for (M)LLMs
☆3,121 · Updated last year
Alternatives and similar repositories for torchscale
Users interested in torchscale are comparing it to the libraries listed below.
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" (see the retention recurrence sketch after this list) ☆1,209 · Updated 2 years ago
- PyTorch extensions for high performance and large scale training. ☆3,387 · Updated 6 months ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch (see the update-rule sketch after this list) ☆2,169 · Updated 11 months ago
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,783 · Updated last month
- An open-source framework for training large multimodal models. ☆4,045 · Updated last year
- maximal update parametrization (µP) ☆1,626 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,728 · Updated last year
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆976 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch (see the quantized-loading sketch after this list). ☆7,755 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,585 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,348 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,351 · Updated last year
- LOMO: LOw-Memory Optimization ☆990 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,670 · Updated this week
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,917 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,671 · Updated 5 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax. ☆2,499 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,267 · Updated 3 years ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,219 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 (see the zeroth-order step sketch after this list) ☆1,134 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers (see the decoder sketch after this list) ☆5,686 · Updated 2 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support (see the training-loop sketch after this list) ☆9,289 · Updated last week
- A fast MoE impl for PyTorch ☆1,813 · Updated 9 months ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆824 · Updated 3 years ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,175 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning. ☆6,087 · Updated 4 months ago
- Structured state space sequence models ☆2,773 · Updated last year
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,094 · Updated last week
- A modular RL library to fine-tune language models to human preferences ☆2,366 · Updated last year
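
For orientation, a minimal sketch of the retention recurrence from the RetNet paper (single head only, ignoring the multi-scale per-head decays and the xpos-style rotation the full model applies); `recurrent_retention` and its arguments are illustrative names, not the repo's API:

```python
import torch

def recurrent_retention(q, k, v, gamma=0.9):
    """Recurrent form of single-head retention: a decayed outer-product
    state S_n = gamma * S_{n-1} + k_n^T v_n, read out as o_n = q_n @ S_n.
    q, k, v: (seq_len, dim) tensors for one head."""
    seq_len, dim = q.shape
    state = torch.zeros(dim, dim)
    outputs = []
    for n in range(seq_len):
        state = gamma * state + torch.outer(k[n], v[n])  # decay old state, write new association
        outputs.append(q[n] @ state)                     # read with the query
    return torch.stack(outputs)

q, k, v = (torch.randn(16, 64) for _ in range(3))
out = recurrent_retention(q, k, v)  # (16, 64)
```

The recurrent form is what gives RetNet O(1) per-token inference; the same computation has an equivalent parallel form used for training.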
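The Lion update rule from the paper is simple enough to sketch directly. This is a hedged re-implementation in plain PyTorch, not the repo's code (the repo ships a drop-in optimizer class); `lion_step` and the momentum buffers are illustrative names:

```python
import torch

@torch.no_grad()
def lion_step(params, momenta, lr=1e-4, beta1=0.9, beta2=0.99, wd=1e-2):
    """One Lion update per parameter: take the sign of an interpolation
    between the momentum buffer and the current gradient, apply decoupled
    weight decay, then update the momentum buffer with beta2."""
    for p, m in zip(params, momenta):
        g = p.grad
        update = (beta1 * m + (1 - beta1) * g).sign()
        p.mul_(1 - lr * wd)                      # decoupled weight decay (AdamW-style)
        p.add_(update, alpha=-lr)                # signed step, same magnitude everywhere
        m.mul_(beta2).add_(g, alpha=1 - beta2)   # momentum tracked with beta2

model = torch.nn.Linear(8, 1)
momenta = [torch.zeros_like(p) for p in model.parameters()]
loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
lion_step(model.parameters(), momenta)
```

Because the step is a pure sign, Lion stores only one buffer per parameter, versus Adam's two.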
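bitsandbytes is most often reached through its `transformers` integration; a minimal sketch of loading a causal LM with 4-bit NF4 weights (the checkpoint name is only an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit weights with bf16 compute; bitsandbytes supplies the k-bit kernels.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",               # example checkpoint; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
out = model.generate(**tokenizer("Hello", return_tensors="pt").to(model.device))
print(tokenizer.decode(out[0]))
```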
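MeZO's core loop is a two-point SPSA gradient estimate whose random direction is regenerated from a seed rather than stored, so fine-tuning fits in inference memory. A hedged sketch of one step, where `loss_fn` is a hypothetical closure returning a scalar loss:

```python
import torch

@torch.no_grad()
def mezo_step(model, loss_fn, batch, lr=1e-6, eps=1e-3, seed=0):
    """One MeZO step: two forward passes along a random Gaussian direction z,
    then a move of size lr * (L+ - L-) / (2 * eps) along -z."""
    def walk(scale):
        # Re-derive the same z from the seed instead of storing it.
        gen = torch.Generator().manual_seed(seed)
        for p in model.parameters():
            z = torch.randn(p.shape, generator=gen, dtype=p.dtype)
            p.add_(z.to(p.device), alpha=scale)

    walk(+eps)
    loss_plus = loss_fn(model, batch).item()
    walk(-2 * eps)
    loss_minus = loss_fn(model, batch).item()
    walk(+eps)                                   # restore original parameters
    proj_grad = (loss_plus - loss_minus) / (2 * eps)
    walk(-lr * proj_grad)                        # descend along z
```

No backward pass or optimizer state is needed, which is the point of the paper.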
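x-transformers exposes its paper-derived features as constructor flags; a small decoder-only model as a sketch (hyperparameters are arbitrary):

```python
import torch
from x_transformers import TransformerWrapper, Decoder

model = TransformerWrapper(
    num_tokens=50257,           # vocabulary size
    max_seq_len=1024,
    attn_layers=Decoder(
        dim=512,
        depth=6,
        heads=8,
        rotary_pos_emb=True,    # one of the experimental options the repo collects
    ),
)

tokens = torch.randint(0, 50257, (1, 1024))
logits = model(tokens)          # (1, 1024, 50257)
```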
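accelerate's pattern is to `prepare` your training objects once and swap `loss.backward()` for `accelerator.backward(loss)`; a minimal self-contained loop:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
dataloader = DataLoader(dataset, batch_size=32)

accelerator = Accelerator()  # picks up device / DDP / mixed-precision config
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for x, y in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward(); handles grad scaling
    optimizer.step()
```

The same script then runs unmodified on CPU, single GPU, or multi-GPU via `accelerate launch`.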