NathanGodey / headless-lm
Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https://arxiv.org/abs/2309.08351)
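The core idea of the paper, in brief: instead of predicting the next token through a vocabulary-sized softmax head, a headless LM trains its output representations to contrastively match the input embeddings of the target tokens, reusing (tying) the embedding matrix as the set of targets. Below is a minimal PyTorch sketch of such a contrastive weight-tying objective. It is an illustration based on the paper's description, not this repository's exact code: the function name `cwt_loss`, the cosine normalization, the temperature value, and the simple in-batch negative scheme are all assumptions.

```python
import torch
import torch.nn.functional as F


def cwt_loss(hidden_states, embedding_weight, target_ids, temperature=0.07):
    """Contrastive Weight Tying (CWT) loss -- illustrative sketch only.

    Rather than projecting `hidden_states` through a softmax LM head, each
    output representation is pushed toward the *input embedding* of its
    target token, with the other targets in the batch serving as in-batch
    negatives (an InfoNCE-style objective).
    """
    d = hidden_states.size(-1)
    h = hidden_states.reshape(-1, d)              # (N, d) model outputs
    e = embedding_weight[target_ids.reshape(-1)]  # (N, d) tied target embeddings

    # Cosine-similarity logits between every output and every target
    # (normalization and temperature are assumptions of this sketch).
    h = F.normalize(h, dim=-1)
    e = F.normalize(e, dim=-1)
    logits = (h @ e.T) / temperature              # (N, N)

    # Row i's positive is column i; every other column is a negative.
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

Note that repeated target tokens within a batch act as false negatives under this simple scheme; see the paper and this repository for how such details are actually handled.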
Related projects
Alternatives and complementary repositories for headless-lm
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P…
- LTG-Bert
- My explorations into editing the knowledge and memories of an attention network
- Embedding Recycling for Language Models
- Engineering the state of RNN language models (Mamba, RWKV, etc.)
- Using short models to classify long texts
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?"
- Official implementation of "GPT or BERT: why not both?"
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP
- Library for fast text representation and classification.
- QLoRA for Masked Language Modeling
- Starbucks: Improved Training for 2D Matryoshka Embeddings
- Code for cleaning benchmark data out of your training data to help combat data snooping.
- HomebrewNLP in JAX flavour for maintainable TPU training
- Experiments for XLM-V Transformers integration
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset.
- Repository for fine-tuning 🤗 Transformers-based seq2seq speech models in JAX/Flax.
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la…
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning
- Ranking of fine-tuned HF models as base models.
- A BPE modification that removes intermediate tokens during tokenizer training.
- See https://github.com/cuda-mode/triton-index/ instead!
- Consists of the largest (10K) human-annotated code-switched semantic parsing dataset & 170K generated utterances using the CST5 augmentati…