sayakpaul / count-tokens-hf-datasets
This project shows how to derive the total number of training tokens in a large text dataset hosted on 🤗 Datasets, using Apache Beam and Dataflow.
⭐26 · Updated 2 years ago
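The counting job is, at heart, a map-reduce over the dataset: map each text example to its token count, then sum globally. Below is a minimal pure-Python analogue of that idea; the toy examples and whitespace tokenizer are stand-ins, since the actual project runs the map/sum as an Apache Beam pipeline on Dataflow with a real tokenizer.

```python
# Conceptual sketch of the token-counting job: map each example to a
# token count, then reduce with a global sum. The whitespace "tokenizer"
# and toy examples are stand-ins for the repo's actual tokenizer and
# its Beam/Dataflow pipeline.
examples = ["hello world", "token counting at scale"]
token_counts = [len(text.split()) for text in examples]  # [2, 4]
total_tokens = sum(token_counts)
print(total_tokens)  # 6
```

In the Beam version, the list comprehension becomes a `Map` transform and the `sum` a global combine, which is what lets the count scale to datasets that do not fit on one machine.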
Alternatives and similar repositories for count-tokens-hf-datasets:
Users interested in count-tokens-hf-datasets often compare it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ⭐93 · Updated 2 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch. ⭐37 · Updated 3 years ago
- Helper scripts and notes that were used while porting various NLP models. ⭐46 · Updated 3 years ago
- Embedding Recycling for Language Models. ⭐38 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ⭐34 · Updated last year
- PyTorch implementation of GLOM. ⭐22 · Updated 3 years ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ⭐22 · Updated 2 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing. ⭐48 · Updated 3 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain. ⭐33 · Updated 4 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transfor… ⭐47 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP. ⭐58 · Updated 2 years ago
- This repository contains example code to build models on TPUs. ⭐30 · Updated 2 years ago
- This repository hosts the code to port NumPy model weights of BiT-ResNets to TensorFlow SavedModel format. ⭐14 · Updated 3 years ago
- ⭐21 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch. ⭐45 · Updated 4 years ago
- A case study of efficient training of large language models using commodity hardware. ⭐69 · Updated 2 years ago
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers. ⭐49 · Updated 2 years ago
- TPU support for the fastai library. ⭐13 · Updated 4 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ⭐111 · Updated last year
- NLP examples using the 🤗 libraries. ⭐41 · Updated 4 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ⭐49 · Updated 3 years ago
- ⭐28 · Updated 2 years ago
- Official code release for the paper Coder Reviewer Reranking for Code Generation. ⭐43 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network. ⭐34 · Updated 2 years ago
- Companion repo for the Vision Language Modelling YouTube series - https://bit.ly/3PsbsC2 - by Prithivi Da. Open to PRs and collaborations. ⭐14 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch. ⭐73 · Updated 2 years ago
- Dense Passage Retrieval using tensorflow-keras on TPU. ⭐15 · Updated 3 years ago
- Transformers at any scale. ⭐41 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ⭐47 · Updated last year
- Using short models to classify long texts. ⭐21 · Updated 2 years ago