microsoft / unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
★21,840 · Updated 4 months ago
Alternatives and similar repositories for unilm
Users interested in unilm are comparing it to the libraries listed below:
- LAVIS - A One-stop Library for Language-Vision Intelligence ★11,036 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ★20,099 · Updated this week (see the sketch after this list)
- Fast and memory-efficient exact attention ★20,669 · Updated this week
- An open source implementation of CLIP. ★12,963 · Updated 2 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ★9,289 · Updated last week
- Train transformer language models with reinforcement learning. ★16,308 · Updated last week
- Ongoing research training transformer models at scale ★14,225 · Updated this week
- Repo for external large-scale work ★6,546 · Updated last year
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ★31,952 · Updated last month
- Hackable and optimized Transformers building blocks, supporting a composable construction. ★10,094 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ★12,945 · Updated 11 months ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ★40,733 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ★10,755 · Updated last year
- PyTorch code and models for the DINOv2 self-supervised learning method. ★11,882 · Updated 3 months ago
- Inference code for Llama models ★58,934 · Updated 9 months ago
- ImageBind One Embedding Space to Bind Them All ★8,859 · Updated last month
- State-of-the-Art Text Embeddings ★17,874 · Updated this week
- Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/) ★25,755 · Updated last year
- CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image ★31,564 · Updated last year
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ★5,579 · Updated last year
- This repository contains demos I made with the Transformers library by HuggingFace. ★11,355 · Updated 4 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ★5,917 · Updated last year
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ★152,590 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ★24,008 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ★7,755 · Updated last week
- Unsupervised text tokenizer for Neural Network-based text generation. ★11,441 · Updated 2 weeks ago
- Foundation Architecture for (M)LLMs ★3,121 · Updated last year
- An open-source framework for training large multimodal models. ★4,045 · Updated last year
- The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoi… ★52,616 · Updated last year
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ★40,534 · Updated last week
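For orientation, here is a minimal sketch of how two of the fine-tuning libraries above (🤗 Transformers and 🤗 PEFT) are typically combined to attach a LoRA adapter to a pretrained model. The checkpoint name and the LoRA hyperparameters below are illustrative placeholders, not recommendations from any of the listed repositories.

```python
# Minimal sketch: wrapping a Transformers model with a PEFT LoRA adapter.
# Assumes `transformers` and `peft` are installed; "gpt2" is only a small
# placeholder checkpoint used for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA configuration: low-rank adapters are injected into the attention
# projections, so only a small fraction of the parameters is trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the reduced trainable count
```

The same pattern underlies the quantized variants listed above: QLoRA loads the frozen base model in 4-bit precision via bitsandbytes and then trains LoRA adapters on top of it.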