pchizhov / picky_bpe
A BPE modification that removes intermediate tokens during tokenizer training.
☆25 · Updated 2 months ago
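As a rough illustration of the idea described above, here is a minimal, self-contained sketch of BPE training that prunes intermediate tokens whose occurrences are mostly absorbed by a later merge. This is not the picky_bpe implementation; the function name `train_bpe_with_pruning`, the `removal_threshold` parameter, and the pruning rule are assumptions made for this example.

```python
# Illustrative sketch only: toy BPE training with removal of intermediate
# tokens. NOT the picky_bpe implementation; `removal_threshold` is an
# assumed parameter for this example.
from collections import Counter


def train_bpe_with_pruning(words, num_merges, removal_threshold=0.9):
    """Train a toy BPE vocabulary, dropping a child token when a new merge
    consumes most of its occurrences (it was only an intermediate step)."""
    # Represent each word as a tuple of symbols with its corpus frequency.
    corpus = Counter(tuple(w) for w in words)
    vocab = set(ch for w in corpus for ch in w)

    def pair_counts():
        counts = Counter()
        for word, freq in corpus.items():
            for a, b in zip(word, word[1:]):
                counts[(a, b)] += freq
        return counts

    def token_count(tok):
        return sum(freq * word.count(tok) for word, freq in corpus.items())

    for _ in range(num_merges):
        counts = pair_counts()
        if not counts:
            break
        (a, b), _ = counts.most_common(1)[0]
        before_a, before_b = token_count(a), token_count(b)

        # Apply the merge (a, b) -> a+b across the corpus.
        merged = a + b
        new_corpus = Counter()
        for word, freq in corpus.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_corpus[tuple(out)] += freq
        corpus = new_corpus
        vocab.add(merged)

        # Pruning step: if a child token now barely occurs on its own,
        # treat it as an intermediate token and remove it from the vocabulary.
        for tok, before in ((a, before_a), (b, before_b)):
            if before and 1 - token_count(tok) / before >= removal_threshold:
                if len(tok) > 1:  # never remove base characters
                    vocab.discard(tok)

    return vocab


if __name__ == "__main__":
    # Toy corpus: frequent "er"/"est" merges should absorb intermediate pieces.
    print(sorted(train_bpe_with_pruning(["lower", "lowest", "newer", "newest"] * 50, 10)))
```

The freed vocabulary slots are what the pruning buys you; the sketch only checks how much of a token's usage a single merge consumed, whereas a real implementation would also re-segment affected words after removal.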
Alternatives and similar repositories for picky_bpe:
Users interested in picky_bpe are comparing it to the repositories listed below.
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated 11 months ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 5 months ago
- ☆51 · Updated 5 months ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Code for SaGe subword tokenizer (EACL 2023) ☆22 · Updated 2 months ago
- ☆25 · Updated last year
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆57 · Updated 11 months ago
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- ☆57 · Updated 4 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆45 · Updated last year
- ☆48 · Updated 3 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆28 · Updated last week
- ☆38 · Updated 9 months ago
- Code repository for the c-BTM paper ☆105 · Updated last year
- ☆24 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- ☆41 · Updated 2 weeks ago
- Codes and files for the paper Are Emergent Abilities in Large Language Models just In-Context Learning ☆33 · Updated last month
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆65 · Updated 5 months ago
- Embedding Recycling for Language models ☆38 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆77 · Updated 10 months ago
- ☆27 · Updated this week
- Code for Zero-Shot Tokenizer Transfer ☆120 · Updated last month
- Aioli: A unified optimization framework for language model data mixing ☆20 · Updated 3 weeks ago
- Demonstration that finetuning a RoPE model on larger sequences than the pre-trained model adapts the model context limit ☆63 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆25 · Updated 9 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) ☆79 · Updated 10 months ago
- QLoRA for Masked Language Modeling ☆21 · Updated last year