NielsRogge / Transformers-Tutorials
This repository contains demos I made with the Transformers library by Hugging Face.
☆9,050 · Updated last month
Related projects:
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆7,687 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆15,839 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆19,545 · Updated 3 weeks ago
- Fast and memory-efficient exact attention ☆13,401 · Updated this week
- A collection of resources and papers on Diffusion Models ☆10,758 · Updated last month
- The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels. ☆9,410 · Updated last week
- Notebooks using the Hugging Face libraries 🤗 ☆3,564 · Updated last week
- Train transformer language models with reinforcement learning. ☆9,288 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆9,663 · Updated 3 weeks ago
- An open source implementation of CLIP. ☆9,782 · Updated last month
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ☆6,786 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆10,327 · Updated last month
- A playbook for systematically maximizing the performance of deep learning models. ☆26,385 · Updated 3 months ago
- Jupyter notebooks for the Natural Language Processing with Transformers book ☆3,821 · Updated 3 weeks ago
- Latest Advances on Multimodal Large Language Models ☆11,722 · Updated this week
- Explanations of key concepts in ML ☆7,029 · Updated this week
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆31,479 · Updated this week
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆19,651 · Updated 3 weeks ago
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ☆24,723 · Updated last month
- A curated list of practical guide resources for LLMs (LLMs Tree, Examples, Papers) ☆9,282 · Updated 3 months ago
- State-of-the-Art Text Embeddings ☆14,844 · Updated last week
- Mamba SSM architecture ☆12,542 · Updated last month
- An annotated implementation of the Transformer paper. ☆5,569 · Updated 5 months ago
- A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites ☆4,541 · Updated last month
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆4,573 · Updated last week
- Ongoing research training transformer models at scale ☆9,949 · Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TF, and others) ☆8,362 · Updated this week
- Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes. ☆27,963 · Updated this week
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆19,845 · Updated last month