EleutherAI / best-download
URL downloader supporting checkpointing and continuous checksumming.
★19 · Updated last year
Alternatives and similar repositories for best-download
Users interested in best-download are comparing it to the libraries listed below.
- **ARCHIVED** Filesystem interface to 🤗 Hub ★58 · Updated 2 years ago
- GPT-jax, based on the official Hugging Face library ★13 · Updated 4 years ago
- Developing tools to automatically analyze datasets ★75 · Updated 11 months ago
- Few-shot learning using EleutherAI's GPT-Neo, an open-source version of GPT-3 ★18 · Updated 4 years ago
- Efficiently computing & storing token n-grams from large corpora ★26 · Updated 11 months ago
- A library for squeakily cleaning and filtering language datasets. ★47 · Updated 2 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ★67 · Updated 2 years ago
- One stop shop for all things carp ★59 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ★58 · Updated 3 years ago
- ★91 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ★95 · Updated 2 years ago
- Helper scripts and notes that were used while porting various NLP models ★47 · Updated 3 years ago
- Scripts to convert datasets from various sources to Hugging Face Datasets. ★57 · Updated 2 years ago
- Experiments with generating open-source language model assistants ★97 · Updated 2 years ago
- This repository contains code for cleaning benchmark data out of your training data, to help combat data snooping. ★27 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ★34 · Updated 2 years ago
- ★79 · Updated last year
- ★33 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ★27 · Updated last year
- See https://github.com/cuda-mode/triton-index/ instead! ★11 · Updated last year
- A diff tool for language models ★44 · Updated last year
- ★22 · Updated 8 months ago
- This project shows how to derive the total number of training tokens from a large text dataset from 🤗 datasets with Apache Beam and Data… ★27 · Updated 2 years ago
- Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ★61 · Updated 2 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ★32 · Updated 2 years ago
- Embedding Recycling for Language models ★39 · Updated 2 years ago
- Streamlit demo app to demonstrate the features of transformers interpret with multiple models. ★25 · Updated 4 years ago
- Hugging Face and Pyserini interoperability ★19 · Updated 2 years ago
- Using short models to classify long texts ★21 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ★18 · Updated 2 years ago