NielsRogge / Transformers-Tutorials
This repository contains demos I made with the Transformers library by HuggingFace.
☆11,461 · Updated 6 months ago
Alternatives and similar repositories for Transformers-Tutorials
Users interested in Transformers-Tutorials often compare it to the libraries listed below.
- Train transformer language models with reinforcement learning. ☆17,005 · Updated this week
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,769 · Updated last week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,443 · Updated this week
- Jupyter notebooks for the Natural Language Processing with Transformers book ☆4,683 · Updated last year
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,950 · Updated last month
- Notebooks using the Hugging Face libraries 🤗 ☆4,430 · Updated last week
- An open source implementation of CLIP. ☆13,223 · Updated 2 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,465 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,881 · Updated last week
- BertViz: Visualize Attention in Transformer Models ☆7,863 · Updated last week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,266 · Updated last week
- Materials for the Hugging Face Diffusion Models Course ☆4,254 · Updated 11 months ago
- A comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites ☆4,997 · Updated last year
- Fast and memory-efficient exact attention ☆21,635 · Updated this week
- PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO ☆7,404 · Updated last year
- PyTorch native post-training library ☆5,642 · Updated this week
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆12,206 · Updated 3 weeks ago
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆24,864 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,142 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,116 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,324 · Updated 7 months ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,633 · Updated last year
- Recipes for shrinking, optimizing, customizing cutting edge vision models. 💜 ☆1,853 · Updated last week
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆36,184 · Updated last week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,795 · Updated 3 months ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,256 · Updated 3 weeks ago
- Easily turn large sets of image URLs into an image dataset. Can download, resize and package 100M URLs in 20h on one machine. ☆4,341 · Updated 2 months ago
- Transformer: PyTorch Implementation of "Attention Is All You Need" ☆4,369 · Updated 6 months ago
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,354 · Updated last week
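Several entries above (flash-attention's exact attention, xFormers' building blocks, BertViz's attention visualizations) all revolve around the same primitive: scaled dot-product attention. As a minimal illustrative sketch in plain Python, not any of these libraries' actual implementations:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention on nested lists.
    Q, K, V are lists of row vectors (seq_len x dim)."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Output row is the attention-weighted average of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# With identical keys, attention weights are uniform, so the output
# is simply the mean of the value rows: [[3.0, 0.0]].
print(attention([[1.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]], [[2.0, 0.0], [4.0, 0.0]]))
```

Libraries like flash-attention compute exactly this quantity, but tiled and fused so the full score matrix never materializes in memory.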
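The loralib, PEFT, and adapters entries implement variants of low-rank adaptation: freeze the pretrained weight W and learn a small update B·A of rank r. A toy sketch in plain Python (illustrating the math only, not any library's API):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# Frozen pretrained weight W (d_out x d_in) plus trainable low-rank factors
# B (d_out x r) and A (r x d_in); effective weight is W + (alpha / r) * B @ A.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]
A = [[0.0, 2.0]]
alpha, r = 4.0, 1
scale = alpha / r

x = [3.0, 5.0]

# Way 1: merge the update into the weight (what "merging LoRA" means).
delta = matmul(B, A)
W_eff = [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
y_merged = matvec(W_eff, x)

# Way 2: keep the adapter as a side branch, as done during training.
y_side = [wy + scale * by
          for wy, by in zip(matvec(W, x), matvec(B, matvec(A, x)))]

print(y_merged, y_side)  # both give [43.0, 5.0]
```

The point of the rank-r factorization is parameter count: B and A hold d_out·r + r·d_in values instead of d_out·d_in, which is why only a tiny fraction of weights needs gradients.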
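The bitsandbytes entry refers to k-bit quantization. The basic absmax scheme, scale by the largest magnitude and round to signed integers, can be sketched as follows; this is a toy illustration of the idea, not bitsandbytes' actual block-wise kernels:

```python
def quantize_absmax(xs, bits=8):
    """Symmetric absmax quantization: map floats to signed k-bit ints."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit
    absmax = max(abs(x) for x in xs) or 1.0
    scale = absmax / qmax                 # one float kept alongside the ints
    q = [round(x * qmax / absmax) for x in xs]
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction; error per element is at most ~scale / 2.
    return [qi * scale for qi in q]

q, scale = quantize_absmax([0.0, 0.5, -1.0])
print(q)                      # integer codes in [-127, 127]
print(dequantize(q, scale))   # approximately the original floats
```

Real k-bit schemes quantize per block rather than per tensor, so one outlier only inflates the scale of its own block instead of the whole weight matrix.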