cisnlp / GlotCC
GlotCC Dataset and Pipeline -- NeurIPS 2024
☆20 · Updated 4 months ago
Alternatives and similar repositories for GlotCC
Users interested in GlotCC are comparing it to the repositories listed below.
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆40 · Updated last month
- ☆14 · Updated 2 months ago
- A BPE modification that removes intermediate tokens during tokenizer training. ☆24 · Updated 8 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 11 months ago
- Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment ☆12 · Updated 4 months ago
- Code for Zero-Shot Tokenizer Transfer ☆135 · Updated 6 months ago
- MEXMA: Token-level objectives improve sentence representations ☆41 · Updated 7 months ago
- Embedding Recycling for Language Models ☆39 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- ☆22 · Updated 6 months ago
- Using short models to classify long texts ☆21 · Updated 2 years ago
- Official implementation of "GPT or BERT: why not both?" ☆56 · Updated last week
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆73 · Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆62 · Updated 2 months ago
- ☆50 · Updated 6 months ago
- ☆39 · Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆65 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆185 · Updated 3 weeks ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆59 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆21 · Updated last month
- ☆44 · Updated 8 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- ☆13 · Updated 8 months ago
- ☆37 · Updated last year
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆25 · Updated 2 months ago