huggingface / transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
⭐ 154,558 · Updated this week
Alternatives and similar repositories for transformers
Users interested in transformers are comparing it to the libraries listed below.
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ⭐ 21,920 · Updated 3 weeks ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ⭐ 41,118 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ⭐ 20,382 · Updated 2 weeks ago
- A library for efficient similarity search and clustering of dense vectors. ⭐ 38,613 · Updated this week
- State-of-the-Art Text Embeddings ⭐ 18,057 · Updated 2 weeks ago
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ⭐ 41,190 · Updated 2 weeks ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ⭐ 32,053 · Updated 3 months ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ⭐ 16,854 · Updated 2 years ago
- Ongoing research training transformer models at scale ⭐ 14,758 · Updated this week
- 🤗 The largest hub of ready-to-use datasets for AI models with fast, easy-to-use and efficient data manipulation tools ⭐ 21,042 · Updated 2 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ⭐ 9,415 · Updated 2 weeks ago
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ⭐ 10,349 · Updated 2 weeks ago
- Unsupervised text tokenizer for Neural Network-based text generation. ⭐ 11,558 · Updated this week
- TensorFlow code and pre-trained models for BERT ⭐ 39,779 · Updated last year
- Visualizer for neural network, deep learning and machine learning models ⭐ 32,119 · Updated this week
- 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch. ⭐ 32,300 · Updated this week
- Fast and memory-efficient exact attention ⭐ 21,401 · Updated this week
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. ⭐ 30,657 · Updated last week
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ⭐ 7,859 · Updated 7 months ago
- A very simple framework for state-of-the-art Natural Language Processing (NLP) ⭐ 14,339 · Updated 2 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ⭐ 23,215 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ⭐ 13,106 · Updated last year
- Train transformer language models with reinforcement learning. ⭐ 16,844 · Updated this week
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ⭐ 36,114 · Updated this week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ⭐ 34,475 · Updated this week
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ⭐ 14,676 · Updated last year
- CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image ⭐ 32,148 · Updated last year
- Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the mo… ⭐ 22,983 · Updated last year
- Making large AI models cheaper, faster and more accessible ⭐ 41,314 · Updated last week
- A latent text-to-image diffusion model ⭐ 72,112 · Updated last year
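Several entries above (flash-attention, BERT, minGPT, BertViz) revolve around the same primitive: scaled dot-product attention, softmax(QKᵀ/√d)V. As a hedged, dependency-free sketch of that computation only (not any listed library's actual implementation, which would be vectorized and far more efficient):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Naive scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of row vectors (lists of floats); K and V must
    have the same number of rows, and all vectors share dimension d.
    """
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Output row is the attention-weighted average of the value rows.
        row = [sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))]
        out.append(row)
    return out
```

Libraries like flash-attention compute exactly this result, but tile the Q/K/V matrices to avoid materializing the full attention matrix in memory.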
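The loralib entry above implements LoRA, which freezes a pretrained weight matrix W and trains only a low-rank update: y = xW + (α/r)·x(AB). The following is an illustrative pure-Python sketch of that reparameterization, not loralib's or PEFT's actual API:

```python
def matmul(A, B):
    # Plain dense matrix multiply on lists of lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """y = x W + (alpha / r) * x (A B): frozen base weight plus low-rank update.

    W is d_in x d_out (frozen), A is d_in x r, B is r x d_out; only A and B
    would receive gradients during fine-tuning, so trainable parameters
    scale with r rather than with d_in * d_out.
    """
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)  # rank-r correction
    scale = alpha / r
    return [[b + scale * d for b, d in zip(brow, drow)]
            for brow, drow in zip(base, delta)]
```

Initializing B to zeros (as LoRA does) makes the update vanish at the start of training, so fine-tuning begins exactly at the pretrained model's behavior.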