NielsRogge / Transformers-Tutorials
This repository contains demos I made with the Transformers library by Hugging Face.
☆10,189 · Updated 2 months ago
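For context, here is a toy sketch of the scaled dot-product attention at the core of the Transformer models these demos cover. This is plain NumPy with illustrative shapes, not code from the tutorials themselves:

```python
import numpy as np

def attention(Q, K, V):
    # similarity of each query to each key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # weighted sum of values
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query
```

The tutorials in the repository build on library implementations of this mechanism (BERT, ViT, LLaVA, etc.) rather than hand-rolled attention.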
Alternatives and similar repositories for Transformers-Tutorials:
Users interested in Transformers-Tutorials compare it to the libraries listed below.
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,497 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆17,795 · Updated this week
- The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels. ☆10,241 · Updated last week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,357 · Updated 4 months ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆20,914 · Updated 2 weeks ago
- An annotated implementation of the Transformer paper. ☆6,089 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆16,370 · Updated this week
- Train transformer language models with reinforcement learning. ☆12,591 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆11,536 · Updated 3 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆6,818 · Updated this week
- The largest collection of PyTorch image encoders / backbones, including train, eval, inference, export scripts, and pretrained weights --… ☆33,511 · Updated 3 weeks ago
- An open source implementation of CLIP. ☆11,272 · Updated this week
- The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production. ☆9,650 · Updated this week
- State-of-the-Art Text Embeddings ☆16,265 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆9,187 · Updated this week
- An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites ☆4,796 · Updated 7 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆21,854 · Updated 7 months ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,114 · Updated 7 months ago
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ☆7,237 · Updated last year
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,040 · Updated 6 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,723 · Updated 2 weeks ago
- Unsupervised text tokenizer for Neural Network-based text generation. ☆10,707 · Updated 2 weeks ago
- A collection of resources and papers on Diffusion Models ☆11,553 · Updated 7 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆21,600 · Updated 7 months ago
- ☆11,044 · Updated 2 weeks ago
- Explanation to key concepts in ML ☆7,496 · Updated this week
- Ongoing research training transformer models at scale ☆11,837 · Updated this week
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,149 · Updated this week
- Reading list for research topics in multimodal machine learning ☆6,331 · Updated 7 months ago
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆10,026 · Updated 7 months ago
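Several entries above (loralib, PEFT) center on the LoRA idea: instead of fine-tuning a full weight matrix W, train a low-rank delta B @ A on top of it. A toy NumPy sketch of that decomposition, with illustrative shapes and nothing taken from the libraries themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2                   # weight shape (d, k); low rank r << min(d, k)

W = rng.normal(size=(d, k))         # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01  # trainable down-projection, small init
B = np.zeros((d, r))                # trainable up-projection, zero init

x = rng.normal(size=(k,))
base = W @ x
adapted = (W + B @ A) @ x           # LoRA forward: W x + B (A x)

# with B initialized to zero, the adapted output starts equal to the base output
assert np.allclose(base, adapted)
print("trainable params:", A.size + B.size, "vs full:", W.size)  # 32 vs 64
```

The zero init of B is the standard trick that makes training start from the pretrained behavior, while only the small A and B matrices receive gradient updates.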