ChrisHayduk / QLoRA-for-MLM
QLoRA for Masked Language Modeling
☆22 · Updated last year
Alternatives and similar repositories for QLoRA-for-MLM
Users interested in QLoRA-for-MLM are comparing it to the libraries listed below.
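For context on what the repositories below relate to: QLoRA applies LoRA adapters on top of a quantized, frozen base model, so only a small low-rank update is trained. A minimal, dependency-free sketch of the underlying LoRA arithmetic (all names and values here are illustrative, not taken from any listed repo):

```python
# Illustrative sketch of the LoRA update used by QLoRA-style finetuning:
# the frozen base weight W is augmented by a low-rank product B @ A, and only
# A and B (here rank r = 1) would be trained. Pure Python, toy-sized matrices.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    """Compute y = (W + (alpha / r) * B @ A) @ x with W frozen."""
    delta = matmul(B, A)  # low-rank update, same shape as W
    scale = alpha / r
    W_eff = [[w + scale * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
    return matmul(W_eff, [[v] for v in x])  # x as a column vector

# Toy 2x2 identity base weight with a rank-1 adapter.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 1.0]]           # shape: r x in_features
B = [[0.5], [0.5]]         # shape: out_features x r
y = lora_forward(W, A, B, [2.0, 4.0])
print(y)  # [[5.0], [7.0]]
```

In the real 4-bit QLoRA setting, `W` would be a quantized frozen weight dequantized on the fly; the adapter math is the same.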
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago
- ☆49 · Updated 6 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated 2 years ago
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
- Genalog is an open-source, cross-platform Python package allowing generation of synthetic document images with custom degradations and te… ☆42 · Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆63 · Updated 2 weeks ago
- ☆47 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- ☆22 · Updated last year
- ☆48 · Updated last year
- ☆23 · Updated 2 years ago
- An introduction to LLM sampling ☆79 · Updated 8 months ago
- PyTorch implementation of MRL ☆19 · Updated last year
- ☆54 · Updated 9 months ago
- ☆61 · Updated last year
- Embedding Recycling for Language Models ☆39 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- ☆69 · Updated last year
- Project code for training LLMs to write better unit tests and code ☆21 · Updated 3 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 8 months ago
- ☆38 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Karpathy's llama2.c transpiled to MLX for Apple Silicon ☆15 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆70 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago