microsoft / unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
☆20,914 · Updated 2 weeks ago
Alternatives and similar repositories for unilm:
Users interested in unilm are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆16,370 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the usage sketch after this list). ☆17,795 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,357 · Updated 4 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆9,187 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,497 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆37,533 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆11,536 · Updated 3 months ago
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ☆27,943 · Updated 7 months ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆13,374 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,818 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆21,854 · Updated 7 months ago
- An open source implementation of CLIP. ☆11,272 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,040 · Updated 6 months ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,179 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆11,837 · Updated this week
- 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX. ☆28,103 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆18,842 · Updated 7 months ago
- Foundation Architecture for (M)LLMs ☆3,062 · Updated 11 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,322 · Updated 9 months ago
- Train transformer language models with reinforcement learning. ☆12,591 · Updated this week
- An open-source framework for training large multimodal models. ☆3,857 · Updated 6 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,832 · Updated last year
- Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. ☆29,161 · Updated this week
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆10,026 · Updated 7 months ago
- This repository contains demos I made with the Transformers library by HuggingFace. ☆10,189 · Updated 2 months ago
- Google Research ☆35,151 · Updated this week
- The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production. ☆9,650 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆6,084 · Updated 11 months ago
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,377 · Updated this week
- Mamba SSM architecture ☆14,291 · Updated 2 months ago
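
Several of the repositories above (PEFT, loralib, QLoRA, alpaca-lora) center on LoRA-style parameter-efficient fine-tuning. As a rough illustration of what that looks like in practice, here is a minimal sketch using the 🤗 PEFT API; the checkpoint name and LoRA hyperparameters are illustrative assumptions, not taken from any of the listed projects.

```python
# Minimal sketch: wrap a Hugging Face causal LM with a LoRA adapter via 🤗 PEFT.
# The checkpoint and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-350m"  # assumed example checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```

Because only the adapter weights are trained while the base model stays frozen, this approach pairs naturally with the quantization-focused entries above (bitsandbytes, QLoRA), which keep the frozen weights in low precision to reduce memory.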