EleutherAI / gpt-neox
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
☆6,947 · Updated this week
Related projects
Alternatives and complementary repositories for gpt-neox
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,232 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware ☆18,653 · Updated 3 months ago
- Model parallel transformers in JAX and Haiku ☆6,298 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,502 · Updated 10 months ago
- Repo for external large-scale work ☆6,516 · Updated 6 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆12,672 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆5,994 · Updated 2 months ago
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,198 · Updated 3 weeks ago
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,705 · Updated 10 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,571 · Updated last month
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,384 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,282 · Updated 2 weeks ago
- Large Language Model Text Generation Inference ☆9,122 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,299 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the usage sketch after this list). ☆16,471 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (the low-rank update it implements is sketched after this list). ☆10,776 · Updated 3 months ago
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,248 · Updated 2 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs (a 4-bit loading sketch appears after this list). ☆10,059 · Updated 5 months ago
- Ongoing research training transformer models at scale ☆10,595 · Updated this week
- A collection of libraries to optimise AI model performance ☆8,375 · Updated 3 months ago
- LLMs built on Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,269 · Updated 3 months ago
- LLaMA: Open and Efficient Foundation Language Models ☆2,807 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆29,561 · Updated 4 months ago
- tiktoken is a fast BPE tokeniser for use with OpenAI's models (example usage after this list). ☆12,427 · Updated last month
- Train transformer language models with reinforcement learning. ☆10,086 · Updated this week
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆20,199 · Updated 3 months ago
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆12,150 · Updated this week
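
For the loralib entry above: LoRA freezes a pretrained weight matrix and learns a low-rank additive update on top of it. Below is a minimal PyTorch sketch of that idea, not loralib's actual API; the `LoRALinear` class name, the init scheme, and the default `r`/`alpha` values here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = base(x) + (x @ A.T @ B.T) * (alpha / r).  Illustrative sketch only."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # the pretrained weights stay frozen
        # B starts at zero, so the adapter is a no-op before training begins.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 768))  # gradients flow only into lora_A and lora_B
```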
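For the PEFT entry: the library attaches adapters such as LoRA to a Hugging Face model behind a small API. A sketch of typical usage, assuming the `peft` and `transformers` packages are installed; `EleutherAI/pythia-70m` is just a small example checkpoint, and `query_key_value` is the fused attention projection name in GPT-NeoX-style models.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a small GPT-NeoX-style checkpoint and attach LoRA adapters to its
# attention projections; all other parameters stay frozen.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # fused QKV projection in GPT-NeoX blocks
    task_type=TaskType.CAUSAL_LM,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the (small) trainable fraction
```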
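For the QLoRA entry: the technique trains LoRA adapters on top of a base model whose frozen weights are quantized to 4 bits (the NF4 data type). A sketch of loading such a base model via `transformers` and `bitsandbytes`, assuming a CUDA GPU and reasonably recent versions of both libraries; the checkpoint name is again only an example.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the frozen base weights to 4-bit NF4; LoRA adapters (e.g. via PEFT,
# as sketched above) would then be trained in higher precision on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, introduced by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls run in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m", quantization_config=bnb_config
)
```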
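And for the tiktoken entry: it exposes a small encode/decode API. A quick example, assuming the `tiktoken` package is installed; `cl100k_base` is one of its published encodings.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Hello, world!")  # -> list of integer token ids
text = enc.decode(tokens)             # round-trips back to the string
print(len(tokens), text)
```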