thesephist / hfm
Hugging Face Download (Cache) Manager
☆21 · Updated 3 years ago
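For context when comparing the tools below: hfm manages the local Hugging Face download cache. The sketch that follows is illustrative only and is not hfm's own code; it assumes the official `huggingface_hub` Python package is installed and uses its public cache-scanning API to list cached repos by size, which is the kind of task these cache managers automate.

```python
# Illustrative sketch, not hfm's implementation.
# Assumes the `huggingface_hub` package is installed (it ships scan_cache_dir).
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()  # scans ~/.cache/huggingface/hub by default

print(f"Total cache size: {cache_info.size_on_disk / 1e9:.2f} GB")
for repo in sorted(cache_info.repos, key=lambda r: r.size_on_disk, reverse=True):
    print(f"{repo.repo_id:50s} {repo.repo_type:8s} "
          f"{repo.size_on_disk / 1e6:10.1f} MB  ({repo.nb_files} files)")

# Stale snapshots can then be pruned with:
# cache_info.delete_revisions(<revision hashes>).execute()
```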
Alternatives and similar repositories for hfm
Users who are interested in hfm are comparing it to the libraries listed below.
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆48 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆44 · Updated last year
- A diff tool for language models ☆44 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- One-stop shop for all things carp ☆59 · Updated 3 years ago
- ☆67 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆57 · Updated 3 years ago
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆104 · Updated 3 weeks ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆28 · Updated last year
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- Exploring finetuning public checkpoints on filter 8K sequences on Pile ☆115 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- Scripts to convert datasets from various sources to Hugging Face Datasets. ☆57 · Updated 3 years ago
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- Plug-and-play search interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆113 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆35 · Updated 2 years ago
- This repository contains code for removing benchmark data from your training data to help combat data snooping. ☆27 · Updated 2 years ago
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- ☆39 · Updated last year
- Embedding Recycling for Language Models ☆37 · Updated 2 years ago
- Helper scripts and notes that were used while porting various NLP models ☆48 · Updated 3 years ago