huu4ontocord / MDEL
Multi-Domain Expert Learning
☆66 · Updated last year
Alternatives and similar repositories for MDEL
Users that are interested in MDEL are comparing it to the libraries listed below
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆81 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit ☆62 · Updated last year
- ☆22 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆114 · Updated 2 years ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated last year
- ☆44 · Updated 6 months ago
- ☆95 · Updated last year
- Adversarial Training and SFT for Bot Safety Models ☆40 · Updated 2 years ago
- Official code for the ACL 2023 Findings (short) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L…" ☆43 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge ☆58 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 2 weeks ago
- This repository contains code for removing benchmark data from your training data to help combat data snooping. ☆25 · Updated 2 years ago
- ☆23 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- ☆49 · Updated 6 months ago
- ☆72 · Updated last year
- ☆34 · Updated 11 months ago
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆123 · Updated last year
- ☆20 · Updated last year
- ☆49 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆65 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆68 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year