kingoflolz / mesh-transformer-jax
Model parallel transformers in JAX and Haiku
☆6,291 · Updated last year
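"Model parallel" here means splitting each transformer layer's weight matrices across accelerators, so every device computes a shard of each matmul and the partial results are combined with a collective op. A minimal sketch of that pattern in JAX follows; it is illustrative only, with invented names, and is not mesh-transformer-jax's actual API:

```python
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()
d_model, d_ff = 8, 16 * n_dev  # feed-forward width divides evenly across devices

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
# Column-shard w1 and row-shard w2: one slice per device on the leading axis.
w1 = jax.random.normal(k1, (n_dev, d_model, d_ff // n_dev))
w2 = jax.random.normal(k2, (n_dev, d_ff // n_dev, d_model))

def ffn_shard(x, w1_s, w2_s):
    h = jax.nn.gelu(x @ w1_s)  # each device computes its slice of the hidden layer
    partial = h @ w2_s         # partial contribution to the output projection
    return jax.lax.psum(partial, axis_name="mp")  # all-reduce partials across devices

mp_ffn = jax.pmap(ffn_shard, axis_name="mp")

x = jax.random.normal(k3, (4, d_model))         # a batch of activations
x_rep = jnp.broadcast_to(x, (n_dev, *x.shape))  # replicated on every device
y = mp_ffn(x_rep, w1, w2)                       # y[i] is the same full FFN output on each device
```

This column/row sharding of the two feed-forward matrices is the Megatron-style tensor-parallel layout that several of the projects listed below implement at scale.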
Related projects
Alternatives and complementary repositories for mesh-transformer-jax
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆6,937 · Updated this week
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,233 · Updated 2 years ago
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,224 · Updated 2 months ago
- Repo for external large-scale work ☆6,513 · Updated 6 months ago
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,702 · Updated 9 months ago
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,186 · Updated last week
- StableLM: Stability AI Language Models ☆15,831 · Updated 7 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,500 · Updated 10 months ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,384 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆18,634 · Updated 3 months ago
- CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. ☆4,931 · Updated 7 months ago
- High-speed download of LLaMA, Facebook's 65B parameter GPT model ☆4,169 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of the RNN and the transformer: great performance, fast inference, VRAM savings, fast training, and "infinite" context length ☆12,630 · Updated last week
- Training and serving large-scale neural networks with auto parallelization. ☆3,073 · Updated 11 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,036 · Updated 4 months ago
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech) ☆12,054 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆8,612 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning ☆5,988 · Updated 2 months ago
- Home of StarCoder: fine-tuning & inference! ☆7,312 · Updated 8 months ago
- The simplest way to run LLaMA on your local machine ☆13,090 · Updated 4 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆20,105 · Updated 2 months ago
- Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch ☆8,071 · Updated last month
- Ongoing research training transformer models at scale ☆10,497 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,260 · Updated this week
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath ☆9,258 · Updated 3 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,568 · Updated 3 weeks ago
- Tensor library for machine learning ☆11,172 · Updated this week
- 4-bit quantization of LLaMA using GPTQ (see the quantization sketch after this list) ☆2,993 · Updated 3 months ago
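Several entries above center on weight quantization (GPTQ, k-bit inference). As a rough illustration of what "4-bit quantization" means mechanically, here is a naive round-to-nearest sketch with invented helper names; it is not the GPTQ algorithm itself:

```python
import jax
import jax.numpy as jnp

def quantize_4bit(w):
    """Naive per-row symmetric 4-bit quantization: integer levels in [-8, 7]."""
    scale = jnp.abs(w).max(axis=1, keepdims=True) / 7.0
    q = jnp.clip(jnp.round(w / scale), -8, 7).astype(jnp.int8)  # 4-bit values stored in int8
    return q, scale

def dequantize(q, scale):
    return q.astype(jnp.float32) * scale  # reconstruct approximate weights at matmul time

w = jax.random.normal(jax.random.PRNGKey(0), (4, 8))
q, s = quantize_4bit(w)
print(jnp.abs(w - dequantize(q, s)).max())  # small round-to-nearest error
```

A real 4-bit kernel would pack two values per byte rather than storing them in int8, and GPTQ in particular chooses roundings by minimizing layer-wise reconstruction error instead of rounding to nearest, which is why it holds up well at 3-4 bits.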