NielsRogge / Transformers-Tutorials
This repository contains demos I made with the Transformers library by HuggingFace.
☆9,799 · Updated this week
Alternatives and similar repositories for Transformers-Tutorials:
Users interested in Transformers-Tutorials often compare it to the libraries listed below.
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,178 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆16,978 · Updated this week
- Jupyter notebooks for the Natural Language Processing with Transformers book ☆3,972 · Updated 4 months ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆4,985 · Updated last week
- Train transformer language models with reinforcement learning. ☆10,609 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆20,584 · Updated last week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,161 · Updated last month
- Fast and memory-efficient exact attention ☆15,064 · Updated this week
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆32,920 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆11,119 · Updated last month
- An open source implementation of CLIP. ☆10,804 · Updated last week
- The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels. ☆10,056 · Updated last week
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ☆7,075 · Updated last year
- An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites ☆4,716 · Updated 5 months ago
- A playbook for systematically maximizing the performance of deep learning models. ☆27,833 · Updated 7 months ago
- ☆10,780 · Updated last month
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆8,655 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,522 · Updated this week
- Ongoing research training transformer models at scale ☆11,109 · Updated this week
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆21,481 · Updated last week
- Notebooks using the Hugging Face libraries 🤗 ☆3,792 · Updated last week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆8,910 · Updated this week
- An annotated implementation of the Transformer paper. ☆5,900 · Updated 9 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,516 · Updated 3 weeks ago
- State-of-the-Art Text Embeddings ☆15,772 · Updated last week
- Latest Advances on Multimodal Large Language Models ☆13,556 · Updated this week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,631 · Updated last week
- Mamba SSM architecture ☆13,791 · Updated last week
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆4,968 · Updated 5 months ago
- PyTorch native post-training library ☆4,703 · Updated this week
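Several entries above ("Fast and memory-efficient exact attention", the full-attention transformer implementations, the annotated Transformer paper) center on the same core operation: scaled dot-product attention. As a point of reference, here is a minimal NumPy sketch of that operation — this is not the code of any listed library, just the textbook formula softmax(QKᵀ/√d)·V that those libraries implement and optimize:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """Reference (unoptimized) attention: softmax(Q K^T / sqrt(d)) V.

    q, k, v: arrays of shape (seq_len, d). Optimized kernels such as
    FlashAttention compute the same result without materializing the
    full (seq_len, seq_len) score matrix in memory.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ v                   # weighted sum of values, (seq_len, d)

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 8)
```

This quadratic-memory formulation is exactly what the memory-efficient attention libraries in the list avoid at scale.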