facebookresearch / metaseq
Repo for external large-scale work
☆6,527 · Updated last year
Alternatives and similar repositories for metaseq
Users interested in metaseq are comparing it to the libraries listed below.
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,641 · Updated last year
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,806 · Updated 3 weeks ago
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,178 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,423 · Updated 11 months ago
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,682 · Updated last year
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,316 · Updated 6 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,020 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18,371 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆9,476 · Updated 2 weeks ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,872 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆13,590 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆11,921 · Updated 4 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,708 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆18,900 · Updated 9 months ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,484 · Updated last year
- Fast and memory-efficient exact attention ☆17,346 · Updated last week
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,713 · Updated 5 months ago
- Training and serving large-scale neural networks with auto parallelization. ☆3,131 · Updated last year
- Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned; more updates to come) ☆3,918 · Updated 11 months ago
- Train transformer language models with reinforcement learning. ☆13,703 · Updated this week
- Model parallel transformers in JAX and Haiku ☆6,334 · Updated 2 years ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,054 · Updated 8 months ago
- LLaMA: Open and Efficient Foundation Language Models ☆2,800 · Updated last year
- Example models using DeepSpeed ☆6,479 · Updated 3 weeks ago
- ImageBind: One Embedding Space to Bind Them All ☆8,640 · Updated 9 months ago
- An open-source framework for training large multimodal models. ☆3,909 · Updated 8 months ago
- A collection of libraries to optimise AI model performance ☆8,371 · Updated 9 months ago
- ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and open source. ☆9,481 · Updated last week
- Foundation Architecture for (M)LLMs ☆3,074 · Updated last year