huu4ontocord / MDEL
Multi-Domain Expert Learning
☆67 · Updated last year
Alternatives and similar repositories for MDEL:
Users interested in MDEL are comparing it to the repositories listed below:
- Code repository for the c-BTM paper ☆106 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pretraining length extends the model's context limit ☆63 · Updated last year
- QLoRA with enhanced multi-GPU support ☆37 · Updated last year
- ☆94 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- ☆24 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆73 · Updated last year
- ☆48 · Updated 5 months ago
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- Adversarial Training and SFT for Bot Safety Models ☆39 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K-token sequences from the Pile ☆115 · Updated 2 years ago
- Official code for the ACL 2023 (short, Findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L…" ☆43 · Updated last year
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆63 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models ☆69 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆119 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year
- ☆22 · Updated last year
- ☆49 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Public Inflection Benchmarks ☆68 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆102 · Updated 8 months ago
- ☆27 · Updated 2 weeks ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆20 · Updated last year
- Patch for MPT-7B that allows using and training a LoRA ☆58 · Updated last year
- An experiment to see if ChatGPT can improve the output of the Stanford Alpaca dataset ☆12 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite ☆89 · Updated last year