NielsRogge / Transformers-Tutorials
This repository contains demos I made with the Transformers library by HuggingFace.
★11,433 · Updated 5 months ago
Alternatives and similar repositories for Transformers-Tutorials
Users interested in Transformers-Tutorials are comparing it to the libraries listed below.
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ★20,347 · Updated last week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ★9,398 · Updated last week
- An open source implementation of CLIP. ★13,150 · Updated last month
- LAVIS - A One-stop Library for Language-Vision Intelligence ★11,073 · Updated last year
- A playbook for systematically maximizing the performance of deep learning models. ★29,605 · Updated last year
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ★7,845 · Updated 6 months ago
- Cleanlab's open-source library is the standard data-centric AI package for data quality and machine learning with messy, real-world data… ★11,218 · Updated last week
- Fast and memory-efficient exact attention ★21,317 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ★7,845 · Updated 2 weeks ago
- Notebooks using the Hugging Face libraries 🤗 ★4,415 · Updated this week
- Train transformer language models with reinforcement learning. ★16,722 · Updated last week
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ★3,297 · Updated 7 months ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ★21,915 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ★13,093 · Updated last year
- An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites ★4,986 · Updated last year
- State-of-the-Art Text Embeddings ★18,039 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ★10,201 · Updated 2 weeks ago
- Robust recipes to align language models with human and AI preferences ★5,460 · Updated 3 months ago
- Scenic: A Jax Library for Computer Vision Research and Beyond ★3,736 · Updated last week
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ★5,734 · Updated 2 weeks ago
- Latest Advances on Multimodal Large Language Models ★17,063 · Updated this week
- Jupyter notebooks for the Natural Language Processing with Transformers book ★4,664 · Updated last year
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ★2,793 · Updated 2 months ago
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ★9,334 · Updated last month
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. ★30,626 · Updated this week
- Explanation to key concepts in ML ★8,245 · Updated 5 months ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ★5,618 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ★13,052 · Updated last week
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ★32,021 · Updated last year
- Mamba SSM architecture ★16,778 · Updated last month