pchizhov / picky_bpe
A BPE modification that removes intermediate tokens during tokenizer training.
☆24 · Updated 10 months ago
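As context for the description above, here is a minimal, hypothetical sketch of what removing intermediate tokens during BPE training can look like: a toy merge loop that drops a child token from the vocabulary once it almost never occurs on its own after a merge. The removal heuristic, threshold, and function names are illustrative assumptions, not the criterion actually used by picky_bpe.

```python
# Illustrative sketch only: a toy BPE trainer that removes an "intermediate"
# child token from the vocabulary when it rarely appears standalone after a
# merge. The frequency-ratio heuristic below is a placeholder, not the
# criterion implemented in the picky_bpe repository.
from collections import Counter

REMOVAL_THRESHOLD = 0.1  # hypothetical: drop a child if standalone uses < 10% of merge frequency


def pair_counts(corpus):
    """Count adjacent symbol pairs over a {tuple_of_symbols: freq} corpus."""
    counts = Counter()
    for symbols, freq in corpus.items():
        for a, b in zip(symbols, symbols[1:]):
            counts[(a, b)] += freq
    return counts


def apply_merge(corpus, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    a, b = pair
    merged = a + b
    new_corpus = {}
    for symbols, freq in corpus.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                out.append(merged)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        key = tuple(out)
        new_corpus[key] = new_corpus.get(key, 0) + freq
    return new_corpus, merged


def symbol_freq(corpus, symbol):
    """Standalone frequency of `symbol` in the current segmentation."""
    return sum(freq for symbols, freq in corpus.items() if symbol in symbols)


def train(word_freqs, num_merges):
    corpus = {tuple(word): f for word, f in word_freqs.items()}
    vocab = {ch for word in word_freqs for ch in word}
    for _ in range(num_merges):
        counts = pair_counts(corpus)
        if not counts:
            break
        pair, pair_freq = counts.most_common(1)[0]
        corpus, merged = apply_merge(corpus, pair)
        vocab.add(merged)
        # "Picky" step (illustrative): if a multi-character child token now
        # almost never occurs on its own, remove it from the vocabulary.
        for child in pair:
            if len(child) > 1 and symbol_freq(corpus, child) < REMOVAL_THRESHOLD * pair_freq:
                vocab.discard(child)
    return vocab


if __name__ == "__main__":
    toy = {"lower": 5, "lowest": 3, "newer": 6, "wider": 2}
    print(sorted(train(toy, num_merges=10)))
```

In a full tokenizer, the slots freed by removed tokens could be reused for additional merges; this toy loop only shrinks the vocabulary.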
Alternatives and similar repositories for picky_bpe
Users interested in picky_bpe are comparing it to the libraries listed below.
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆46 · Updated 3 months ago
- Code for the SaGe subword tokenizer (EACL 2023) ☆26 · Updated 10 months ago
- Code for Zero-Shot Tokenizer Transfer ☆138 · Updated 8 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆66 · Updated 2 weeks ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆66 · Updated last year
- ☆57 · Updated last week
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆188 · Updated 3 months ago
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆104 · Updated last year
- ☆52 · Updated 8 months ago
- ☆57 · Updated last year
- Supercharge Hugging Face Transformers with model parallelism. ☆77 · Updated 2 months ago
- ☆80 · Updated this week
- ☆49 · Updated 8 months ago
- Experiments for efforts to train a new and improved T5 ☆75 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- ☆39 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- An introduction to LLM sampling ☆79 · Updated 9 months ago
- Plug-and-play search interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- ☆77 · Updated 3 months ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆61 · Updated last year
- ☆48 · Updated last year
- ☆69 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆60 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year