serp-ai / Parameter-Efficient-MoE
Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks
☆31 · Updated last year
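As the title suggests, the idea is to upcycle a dense transformer's feed-forward layers into Mixture-of-Experts layers while keeping the number of newly trained parameters small: the dense weights are frozen and shared across experts, and each expert only adds a small trainable adapter alongside a trainable router. Below is a minimal sketch of that idea, assuming PyTorch; the class names (`AdapterExpert`, `CraftedMoE`) and hyperparameters are illustrative, not the repo's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdapterExpert(nn.Module):
    """One expert: the frozen dense FFN plus a small trainable adapter."""
    def __init__(self, dense_mlp: nn.Module, d_model: int, r: int = 16):
        super().__init__()
        self.mlp = dense_mlp                              # shared, frozen dense weights
        self.down = nn.Linear(d_model, r, bias=False)     # trainable adapter
        self.up = nn.Linear(r, d_model, bias=False)
        nn.init.zeros_(self.up.weight)                    # experts start identical to the dense FFN

    def forward(self, x):
        return self.mlp(x) + self.up(F.relu(self.down(x)))

class CraftedMoE(nn.Module):
    """Dense FFN upcycled into a top-k MoE; only the router and adapters train."""
    def __init__(self, dense_mlp: nn.Module, d_model: int, n_experts: int = 4, k: int = 2):
        super().__init__()
        for p in dense_mlp.parameters():                  # freeze the original FFN
            p.requires_grad = False
        self.experts = nn.ModuleList(
            AdapterExpert(dense_mlp, d_model) for _ in range(n_experts))
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x):                                 # x: (tokens, d_model), batch flattened
        gates = self.router(x).softmax(dim=-1)
        topv, topi = gates.topk(self.k, dim=-1)
        topv = topv / topv.sum(dim=-1, keepdim=True)      # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += topv[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage sketch: wrap one block's MLP, assuming hidden size 1024.
dense = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
moe = CraftedMoE(dense, d_model=1024, n_experts=4, k=2)
y = moe(torch.randn(8, 1024))                             # (tokens, d_model) in, same shape out
```

The zero-initialized up-projection means every expert starts out computing exactly the frozen dense FFN, so routing decisions and expert specialization emerge gradually during instruction tuning rather than disrupting the pretrained model at step zero.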
Alternatives and similar repositories for Parameter-Efficient-MoE
Users interested in Parameter-Efficient-MoE are comparing it to the libraries listed below.
- entropix-style sampling + GUI ☆26 · Updated 7 months ago
- ☆66 · Updated last year
- 5× faster QLoRA finetuning with 60% less memory ☆21 · Updated last year
- ☆49 · Updated 6 months ago
- ☆72 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- ☆19 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- ☆53 · Updated last year
- ☆22 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models ☆69 · Updated last year
- ☆48 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- Lego for GRPO ☆28 · Updated last week
- GPT-4-level conversational QA trained in a few hours ☆61 · Updated 9 months ago
- Model REVOLVER, a human-in-the-loop model-mixing system. ☆32 · Updated last year
- Nexusflow function call, tool use, and agent benchmarks. ☆19 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- look how they massacred my boy ☆63 · Updated 7 months ago
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 10 months ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Updated last year
- ☆27 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆69 · Updated 2 weeks ago
- ☆24 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆81 · Updated last year