facebookresearch / metaseq
Repo for external large-scale work
☆6,516 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for metaseq
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries. ☆6,947 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF). ☆4,502 · Updated 10 months ago
- Transformer-related optimizations, including BERT and GPT. ☆5,890 · Updated 7 months ago
- A collection of libraries to optimize AI model performance. ☆8,375 · Updated 3 months ago
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,198 · Updated 3 weeks ago
- Ongoing research training transformer models at scale. ☆10,595 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. (See the 8-bit loading sketch after this list.) ☆6,299 · Updated this week
- LLaMA: Open and Efficient Foundation Language Models. ☆2,807 · Updated last year
- Training and serving large-scale neural networks with auto parallelization. ☆3,077 · Updated 11 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8). ☆7,958 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA, and LLaMA-Adapter fine-tuning. ☆5,994 · Updated 2 months ago
- Train transformer language models with reinforcement learning. ☆10,086 · Updated this week
- Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned; more will be updated). ☆3,812 · Updated 5 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆10,059 · Updated 5 months ago
- Fast and memory-efficient exact attention. ☆14,279 · Updated this week
- Beyond the Imitation Game: a collaborative benchmark for measuring and extrapolating the capabilities of language models. ☆2,871 · Updated 4 months ago
- CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. ☆4,934 · Updated 8 months ago
- Aligning pretrained language models with instruction data generated by the models themselves. ☆4,164 · Updated last year
- General technology for enabling AI capabilities with LLMs and MLLMs. ☆3,699 · Updated last month
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆35,508 · Updated this week
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers. ☆12,672 · Updated this week
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆29,561 · Updated 4 months ago
- Large Language Model Text Generation Inference. ☆9,122 · Updated this week
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training. ☆20,199 · Updated 3 months ago
- Instruct-tune LLaMA on consumer hardware. ☆18,653 · Updated 3 months ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models". (See the LoRA sketch after this list.) ☆10,776 · Updated 3 months ago
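
For the bitsandbytes entry above, here is a minimal sketch of what k-bit loading looks like in practice, going through the Hugging Face transformers integration rather than calling bitsandbytes directly. The checkpoint (facebook/opt-1.3b, one of the OPT models released from metaseq), the prompt, and the generation length are illustrative choices; a CUDA GPU with transformers, accelerate, and bitsandbytes installed is assumed:

```python
# Sketch of 8-bit loading through the transformers + bitsandbytes integration.
# Assumes a CUDA GPU and `pip install transformers accelerate bitsandbytes`.
# The checkpoint and prompt are illustrative choices, not specific to metaseq.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-1.3b"  # an OPT checkpoint released from metaseq
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights
    device_map="auto",  # let accelerate place the layers on available devices
)

inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```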
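
Similarly, for the loralib entry, a sketch of attaching a low-rank adapter to a single linear layer and freezing everything else; the layer sizes, rank, and scaling factor are arbitrary assumptions for illustration:

```python
# Sketch of loralib usage: swap one nn.Linear for a LoRA layer and train
# only the low-rank update. Sizes, rank, and alpha are arbitrary choices.
# Assumes `pip install loralib` (the package published from microsoft/LoRA).
import torch
import loralib as lora

# Drop-in replacement for nn.Linear that adds a trainable rank-8 update.
adapted = lora.Linear(768, 768, r=8, lora_alpha=16)
model = torch.nn.Sequential(adapted, torch.nn.ReLU(), torch.nn.Linear(768, 10))

# Freeze every parameter whose name does not contain "lora_".
lora.mark_only_lora_as_trainable(model)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # just lora_A and lora_B: 2 * 8 * 768

# After training, save only the adapter weights.
torch.save(lora.lora_state_dict(model), "lora_adapter.pt")
```

Only the small adapter matrices remain trainable, which is why LoRA checkpoints can stay tiny even when the frozen base model is large.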